AI Website Discovery
Optimize your website for federal buyers.
Federal procurement is moving to AI-assisted review. Score your site against what those systems actually extract — free, 8 minutes, no card.
▸ Specimen scan · ANON-23/100
◆ FAIL
- CAGE / UEI 0 ● MISS
- NAICS 0 ● MISS
- Past performance 4 ◆ THIN
- Capability PDF 7 ● HIT
- JSON-LD schema 0 ● MISS
- AI-crawler access 10 ● HIT
▸ FINDING
Identity buried in graphics. The capability statement is the only extractable proof asset. An AI evaluator falls back to OCR, or skips the firm entirely.
Sample scan · representative of typical govcon SME baseline
▸ Briefing 01 · The problem
Your website is the public proof layer.
Federal proposal review is becoming more structured, more extractive, and less forgiving of missing proof. The first pass increasingly rewards the same things every time: clear headings, matching terminology, explicit proof, text a review system can parse.
▸ FACT 01
Agencies are running AI tools to check compliance, forms, clauses, requirements, and responsibility. Humans still award. The first pass is automated.
▸ FACT 02
CAGE. UEI. NAICS. Certifications. Past performance. Capability statements. When they're buried in graphics, vague copy, or a slow site, you make the buyer work.
▸ FACT 03
We build the site that makes the first check easy. Where federal buyers find you first.
▸ Briefing 02 · Calibration
What humans see vs. what AI extracts.
Run against a typical govcon SME site. These are the structured signals a federal AI-assisted review pipeline attempts to pull before a contracting officer opens the proposal.
- Company name · Yes · HIT
- Set-aside certifications · Image alt only · PARTIAL
- Capability statement PDF · Yes · HIT
- CAGE code · Footer flat text · MISS
- DUNS number · Footer flat text · MISS
- Address, phone, email · Icon-prefixed · MISS (all 3)
- NAICS codes · Not on page · MISS
- Past-performance contracts · Vague only · MISS
- JSON-LD schema markup · Not present · MISS
- UEI · Not present · MISS
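The pattern in this table is reproducible with a few lines of stdlib Python: a parser that reads only visible HTML text finds a CAGE code written as text, and finds nothing when the same fact is rendered as an image. A minimal sketch; both HTML snippets are illustrative, not taken from any real site:

```python
import re
from html.parser import HTMLParser

class VisibleText(HTMLParser):
    """Collect only the text an extractor can read; images contribute nothing."""
    def __init__(self):
        super().__init__()
        self.chunks = []
    def handle_data(self, data):
        self.chunks.append(data)

# CAGE codes are five alphanumeric characters.
CAGE_RE = re.compile(r"CAGE[:\s]+([0-9A-Z]{5})")

def find_cage(html: str):
    """Return the CAGE code if it appears in extractable text, else None."""
    p = VisibleText()
    p.feed(html)
    m = CAGE_RE.search(" ".join(p.chunks))
    return m.group(1) if m else None

# Same fact, two renderings (placeholder markup):
as_text  = '<footer><p>CAGE: 1ABC2</p></footer>'
as_image = '<footer><img src="cage-badge.png" alt=""></footer>'

print(find_cage(as_text))   # prints 1ABC2
print(find_cage(as_image))  # prints None
```

The image version carries the identical fact to a human eye, but to a text extractor it simply does not exist.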
▸ OPERATOR RECEIPT
This isn't a synthetic test. The same site was attempted by an AI-fluent operator using Claude in a coding-agent workflow with Firecrawl, image OCR, and JSON-extraction tooling — the same stack a federal procurement evaluator-LLM might be wired with.
The agent could not pull structured CAGE / DUNS / NAICS / past-performance data into a usable record. The fallback was to screenshot the page and reason from the image.
▸ IMPLICATION
If a state-of-the-art operator-grade rig falls back to screenshots on this site, an automated federal review pipeline almost certainly falls back further — usually to "skip this firm."
The miss isn't theoretical. It happens during a 60-second prep window before a contracting officer opens the proposal.
▸ Briefing 03 · Source ledger
What the federal government is actually doing.
Five primary sources — government publications and documented reporting. No vendor opinion. Read them and connect the dots.
- SRC-01 · Reporting · Federal News Network: "Federal agencies are using AI to evaluate proposals" · Apr 2026
AI tools running compliance, forms, clauses, requirements, and responsibility checks on incoming proposals. Documented across multiple federal agencies.
- SRC-02 · Executive policy · EO 14110 (Safe, Secure, Trustworthy AI) + OMB M-24-10 · Oct 2023 / Mar 2024
Standardized federal AI use across agencies. Mandates AI inventories, governance frameworks, accelerated AI adoption in the federal workforce.
- SRC-03 · Agency inventory · GSA AI Use Case Inventory + per-agency AI inventories under M-24-10 · Annual
Concrete, named AI use cases across DoD, DHS, HHS, Treasury, others — many in acquisition-adjacent functions. Published on ai.gov + agency sites.
- SRC-04 · Procurement vehicle · FedRAMP AI category + GSA MAS SIN 54151S · Active
AI services are a routine federal procurement category on existing contract vehicles. Not exotic infrastructure — already on schedule.
- SRC-05 · Public data infra · SAM.gov + USAspending.gov structured-data APIs · Continuous
The federal market itself runs on machine-readable data: structured entities, NAICS codes, award records, registration data. Your website is the one piece that doesn't match the pattern.
▸ Briefing 04 · Scoring rubric
10-dimension federal-AI-readiness score.
Each dimension scored 0 / 4 / 7 / 10. Total: 0–100. You get the number, per-dimension verdicts, and a specific finding for each gap.
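The scoring arithmetic is simple enough to state in code. A minimal sketch; the verdict-to-score mapping (MISS/THIN/PARTIAL/HIT onto 0/4/7/10) is our reading of the rubric tiers, not a published formula:

```python
# Assumed mapping from per-dimension verdicts to the four rubric scores.
VERDICT_SCORE = {"MISS": 0, "THIN": 4, "PARTIAL": 7, "HIT": 10}

def readiness_score(verdicts):
    """Sum ten per-dimension verdicts into the 0-100 readiness number."""
    assert len(verdicts) == 10, "the rubric has exactly 10 dimensions"
    return sum(VERDICT_SCORE[v] for v in verdicts)

# Illustrative card: mostly misses, one thin, one partial, one hit.
example = ["MISS", "MISS", "THIN", "PARTIAL", "MISS", "HIT"] + ["MISS"] * 4
print(readiness_score(example))  # prints 21
```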
- DIM-01 scored 0–10
Govcon identity surfacing
CHECK · CAGE, UEI, federal address findable in page text (not just images). Bonus: linked to SAM.gov entity.
GREEN · All three in semantic HTML, near the top, with SAM.gov link.
- DIM-02 scored 0–10
NAICS code visibility
CHECK · NAICS codes listed in extractable text. Cross-reference with USAspending if found.
GREEN · 3+ NAICS codes in HTML, mapped to capability sections.
- DIM-03 scored 0–10
Set-aside / certification proof
CHECK · SDVOSB / WOSB / 8(a) / HUBZone mentioned in HTML text — not only in badge images.
GREEN · Cert names in text + image alt + linked to issuing authority.
- DIM-04 scored 0–10
Past-performance specificity
CHECK · Specific contracts findable: agency name, dollar range, period of performance.
GREEN · 3+ contracts with agency + value-band + outcome in text form.
- DIM-05 scored 0–10
Capability statement extractability
CHECK · Downloadable capability PDF, text-selectable (not flat image), mirrored in HTML.
GREEN · PDF present, text-selectable, HTML mirror exists.
- DIM-06 scored 0–10
Schema markup coverage
CHECK · JSON-LD: Organization, Service, Person (key personnel), GovernmentService.
GREEN · 2+ schema blocks present, Organization with naics, taxID, sameAs.
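A minimal sketch of the GREEN state for this dimension: one JSON-LD Organization block in the page head. Every identifier below (company name, NAICS, tax ID, UEI, URLs, contact details) is a placeholder, not a real registration:

```html
<script type="application/ld+json">
{
  "@context": "https://schema.org",
  "@type": "Organization",
  "name": "Example Federal Services LLC",
  "url": "https://www.example.com",
  "naics": "541512",
  "taxID": "12-3456789",
  "sameAs": [
    "https://sam.gov/entity/EXAMPLEUEI12",
    "https://www.linkedin.com/company/example-federal-services"
  ],
  "address": {
    "@type": "PostalAddress",
    "streetAddress": "100 Example Way",
    "addressLocality": "Arlington",
    "addressRegion": "VA",
    "postalCode": "22201",
    "addressCountry": "US"
  },
  "contactPoint": {
    "@type": "ContactPoint",
    "contactType": "sales",
    "telephone": "+1-555-000-0000",
    "email": "contracts@example.com"
  }
}
</script>
```

The sameAs link to the firm's SAM.gov entity page is what lets an extractor tie the website to the registration record.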
- DIM-07 scored 0–10
Semantic structure
CHECK · h1/h2/main/nav used correctly. Capabilities sectioned semantically.
GREEN · Single H1, hierarchical headings, semantic landmarks present.
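A minimal sketch of the landmark and heading skeleton this check looks for; the section names and identifiers are illustrative:

```html
<body>
  <nav aria-label="Primary">…</nav>
  <main>
    <h1>Example Federal Services LLC</h1>  <!-- single H1: the entity -->
    <section>
      <h2>Core Competencies</h2>
      <h3>Cybersecurity Engineering · NAICS 541512</h3>
    </section>
    <section>
      <h2>Past Performance</h2>
      <h3>Example Agency · $1M–$5M · FY22–FY24</h3>
    </section>
    <section>
      <h2>Contact</h2>
    </section>
  </main>
  <footer>CAGE 1ABC2 · UEI EXAMPLEUEI12</footer>
</body>
```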
- DIM-08 scored 0–10
AI-crawler accessibility
CHECK · robots.txt allows GPTBot, ClaudeBot, PerplexityBot, Google-Extended. llms.txt and/or agents.md present at root.
GREEN · All present, AI bots explicitly allowed, llms.txt has clean entity summary.
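A minimal sketch of the GREEN state, both files at the site root. The crawler tokens are the real ones named in the check; note that llms.txt is an emerging convention, not a formal standard:

```text
# /robots.txt — AI crawlers explicitly allowed
User-agent: GPTBot
Allow: /

User-agent: ClaudeBot
Allow: /

User-agent: PerplexityBot
Allow: /

User-agent: Google-Extended
Allow: /

User-agent: *
Allow: /
```

And a placeholder entity summary in llms.txt (markdown, per the llms.txt convention):

```text
# Example Federal Services LLC

> SDVOSB IT services firm · CAGE 1ABC2 · UEI EXAMPLEUEI12 · NAICS 541512, 541519

- [Capability statement](https://www.example.com/capability-statement.pdf)
- [Past performance](https://www.example.com/past-performance)
```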
- DIM-09 scored 0–10
Performance + mobile
CHECK · LCP < 2.5s, mobile viewport set without user-scalable=no, CLS < 0.1.
GREEN · Lighthouse mobile score 90+.
- DIM-10 scored 0–10
Trust signals
CHECK · HTTPS, valid cert, Organization schema with verifiable sameAs (LinkedIn, SAM.gov), structured contact (phone/email/address as schema, not icon-prefixed text).
GREEN · All present, structured.
▸ Briefing 05 · Pricing + scope
AI Website Discovery.
The public proof layer federal buyers verify before trusting your proposal. Built once. Yours to keep.
▸ Build cost
● ACTIVE
Live in 14 days. Cancel monthly. You own everything.
Or email web@upwindgrowth.com
▸ Not ready to talk yet?
Score your site for free — the audit is the starting point for every engagement.
▸ Maintenance covers
Uptime monitoring, dependency + security upkeep, quarterly proof refresh, small factual edits (≤30 min/mo). Bigger work scoped separately.
▸ Best fit
New entrants, specialized subs, newly certified small businesses earning under $1M federal per year.
▸ What's included · 8 deliverables
FIXED SCOPE
- D-01 Machine-readable proof map
CAGE, UEI, NAICS mapped to capability pages, past-performance snippets, certification badges, and JSON-LD Organization + Service blocks.
- D-02 Extractable capability PDF
Text-selectable (not flat image), mirrored in HTML so AI tools pull without OCR fallback.
- D-03 Proof-point inventory
For each capability, certification, past-performance entry — feature, beneficial outcome, verifiable proof.
- D-04 Solicitation-language intake
Recurring NAICS, PSCs, agency targets, set-aside language, customer terminology — drives technical-niche pages.
- D-05 Technical-niche pages
Tied to NAICS, PSCs, agencies, capabilities. Procurement vocabulary, not generic SEO.
- D-06 Pre-launch crawl audit
Critical claims, NAICS, certifications, contact routes, past performance findable in HTML — not just visible.
- D-07 Mobile + sub-1s LCP
Performance budget hits the bar a federal evaluator scores against.
- D-08 US hosting on your account
AWS or Vercel under your account. You own the keys.
▸ Briefing 06 · Constraint stack
You own everything.
Non-negotiable. The compliance constraint is built into how we work, not a policy we added afterward.
- C-01 ● ENFORCED
Your legal entity
Your CAGE, UEI, DUNS, and registrations belong to your business. We surface them on the site — we never hold or manage them.
- C-02 ● ENFORCED
Your domain
The domain stays in your registrar account. We never request transfer or hold the registrar login.
- C-03 ● ENFORCED
Your hosting account
US-based hosting (AWS or Vercel) under your account. We deploy to your environment. We never hold production credentials.
- C-04 ● ENFORCED
Your content + past performance
The proof inventory we build with you belongs to you. You can take it, move it, or hand it to a different vendor at any time.
▸ Briefing 07 · Deployment runbook
Live in 14 days.
- T-01 · Day 0
Kickoff
Review scope, confirm entity details, set access protocols for your hosting account.
- T-02 · Day 1–3
Intake
Proof-point inventory (capabilities, certifications, past-performance) + solicitation-language intake (NAICS, PSCs, agency targets, customer terminology).
- T-03 · Day 4–9
Build
Static site built: home, about, core competencies, past performance, contact. Machine-readable proof map wired. Capability statement PDF commissioned.
- T-04 · Day 10–11
Review
Draft review with you. Factual corrections and content sign-off. No design revisions at this stage; small edits under 30 minutes fold into the quarterly refresh.
- T-05 · Day 12
Pre-launch crawl audit
Automated delivery check: CAGE, UEI, NAICS, certifications, contact routes, past performance findable in HTML text — not only visible on screen.
- T-06 · Day 14
Live
Deploy to your hosting account. DNS pointed. Uptime monitoring active. Handoff complete.
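The T-05 crawl audit can be sketched with stdlib Python: reduce each page to the text a non-OCR extractor can see, then confirm every required identifier pattern appears. A minimal sketch; the patterns are illustrative, and a production audit would also parse JSON-LD blocks and the capability PDF:

```python
import re
import urllib.request
from html.parser import HTMLParser

class TextOnly(HTMLParser):
    """Reduce a page to the text a non-OCR extractor can see."""
    def __init__(self):
        super().__init__()
        self.parts = []
    def handle_data(self, data):
        self.parts.append(data)

# Identifier patterns the audit requires in plain HTML text (illustrative).
REQUIRED = {
    "CAGE":  re.compile(r"\bCAGE[:\s]+[0-9A-Z]{5}\b"),
    "UEI":   re.compile(r"\bUEI[:\s]+[0-9A-Z]{12}\b"),
    "NAICS": re.compile(r"\bNAICS[:\s]+\d{6}\b"),
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
}

def audit_html(html: str) -> dict:
    """Return a HIT/MISS verdict per required identifier for one page."""
    p = TextOnly()
    p.feed(html)
    text = " ".join(p.parts)
    return {name: ("HIT" if rx.search(text) else "MISS")
            for name, rx in REQUIRED.items()}

def audit_url(url: str) -> dict:
    """Fetch and audit a live page (no retries or robots handling here)."""
    with urllib.request.urlopen(url) as resp:
        return audit_html(resp.read().decode("utf-8", "replace"))
```

Run against a page where the identifiers live only in images, every verdict comes back MISS, which is exactly the failure the calibration table documents.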
▸ Briefing 08 · FAQ
Common questions.
- Do we choose the hosting provider?
- Yes. We build to AWS (S3 + CloudFront) or Vercel. You create the account; we deploy to it. US-based by default. If you have a specific requirement (GovCloud, FedRAMP-authorized hosting), let us know at kickoff and we'll scope accordingly.
- What does the $50/mo retainer actually cover?
- Uptime monitoring, dependency and security upkeep, quarterly proof refresh (updating past-performance entries, cert expirations, NAICS adjustments), and small factual edits of 30 minutes or less per month. New pages, new positioning, new case studies, and proposal-specific rewrites are scoped and priced separately.
- Do you build the capability statement PDF?
- Yes. The capability statement is part of scope: one-page PDF, text-selectable (not a scanned image), the same content mirrored in HTML. We run a pre-launch check to confirm AI extraction tools can pull it without OCR fallback.
- We already have a website. Can you fix it instead of rebuilding?
- Possibly. Start with the free audit — score your site and see where the gaps are. If the structure is recoverable, we can scope a remediation. If the gaps are architectural (image-heavy, no semantic HTML, no schema), a rebuild is usually faster and cleaner than a retrofit.
- Higher-volume needs — multiple sites, large teams, full capture retainers?
- This scope is for small businesses under $1M federal per year. If you're running a larger program or need ongoing capture support, email web@upwindgrowth.com and we'll scope something appropriate. Signal OS (at upwindgrowth.com) covers the opportunity-intelligence side of the house.
- Can we cancel the $50/mo retainer?
- Yes, monthly. You own the site and the hosting account, so cancellation means we stop the maintenance work — the site stays live on your infrastructure. We don't hold anything hostage.
- How is this different from a regular web agency?
- Most web agencies build for visual impression. We build for extraction: can a federal AI tool or procurement evaluator pull your identity, certifications, past performance, and contact info in under 60 seconds? That's the bar we design to. The result looks like a professional site — it's also a structured proof document.
- Will this guarantee we win contracts?
- No. The website is adjacent proof infrastructure — it helps you pass the first check, not win the award. The award depends on capture strategy, price, technical approach, and past performance. What we eliminate is the avoidable loss: the one where the evaluator can't verify who you are or what you've done.