Most marketing teams have gotten the memo about AI search. They know buyers are running ChatGPT queries before they hit your website. They’re auditing their editorial coverage, testing what Perplexity surfaces when someone asks about their category.

They’re optimizing for the first filter.

The second filter is the one eating deals.

Forrester’s 2026 Buyers’ Journey Survey found that 48% of B2B buyers now use AI to analyze RFP responses: not during discovery, but during evaluation. After you’ve already been shortlisted. After you’ve put weeks into a proposal. Nearly half of your buyers are running your response through an AI tool before a human evaluator reads it. The same survey found 55% use AI to make product comparisons during evaluation, and 47% use AI to build the internal business case for whichever vendor they’re leaning toward.

This is not the same problem as AI visibility. And the playbook for fixing it is different.

The first filter — the one most teams are now aware of — happens at the top of the funnel. A prospect asks ChatGPT or Perplexity who the leading vendors are in your category. If your brand doesn’t appear, you don’t make the consideration set. We’ve covered how that works in detail.

The second filter happens after you’ve cleared the first one. You’re on the shortlist. An RFP has gone out. Your proposal is submitted. And then a buying team member runs it through Microsoft Copilot, or their company’s private GPT-4 instance, or Claude via their enterprise subscription. They paste in your response alongside two competitors’ responses. They ask the AI to compare, summarize, and flag weaknesses.

Forrester’s January 2026 report on business buying makes the private AI layer explicit: more than 61% of B2B buyers use private AI tools provided by their organizations — tools behind their firewall, not the public-facing AI engines most marketers think about. Microsoft Copilot alone reaches 68% of enterprise B2B buyers in the survey. More than half of those users run it in a private, corporate-controlled environment.

The AI evaluating your RFP response is not GPT-3.5. It’s a corporate AI instance that pulls from internal documents, previous vendor relationships, and the company’s own decision criteria. And it’s reading your proposal with the same logic it uses to read anything: Who is this vendor? What do authoritative external sources say about them? Are there consistent, credible third-party references I can use to validate this proposal’s claims?

What corporate AI actually looks for in your materials

When a buying team member asks Copilot to compare your RFP response to a competitor’s, the AI does not simply read the text you wrote. It attempts to resolve your brand against external information it has access to — analyst coverage, published research, editorial mentions in publications the model treats as credible, and anything publicly indexed that corroborates your claims.

Brands with consistent earned authority in trade press and analyst references resolve clearly. The AI can add external corroboration to the evaluation: “Vendor A’s claims about deployment speed are consistent with coverage in [trade outlet] and an analyst report from Forrester.” Brands without that external layer exist only in the text they submitted. The AI cannot corroborate them, so it falls back on what’s in the proposal — which is, by definition, self-reported.

Forrester’s research on the 2026 buying process puts the stakes directly: buying groups now average 13 internal stakeholders and 9 external influencers. Procurement professionals are decision-makers in 53% of buying cycles, engaging from the start. Every one of those stakeholders can run their own AI evaluation pass. If your brand resolves clearly across those passes, every stakeholder sees the same credible story. If it doesn’t, you look thinner than the competitor who has the editorial coverage you don’t.

The three places your proposal loses before a human scores it

1. Category resolution failure

When an AI tool encounters your company name and category in a proposal, it tries to resolve you. If you’re well-covered in trade press and analyst reports, this goes smoothly. If you’re not, the AI produces a low-confidence resolution — or worse, associates you loosely with a competitor’s coverage rather than your own.

The fix isn’t writing a better proposal. It’s building the editorial foundation that makes you unambiguous. Named placements in the publications that cover your category, over time, in a consistent voice — so that when Copilot or a private GPT instance encounters your brand name during an RFP evaluation pass, it has clear, independent corroboration to work with.

2. Claim verification gaps

Proposals make claims. “Fastest implementation in the category.” “Highest NPS in our vertical.” An AI tool evaluating your proposal has no reason to trust those claims if it can’t find independent verification. If Forrester hasn’t mentioned your implementation speed, if no trade press has covered your NPS benchmark, if the only source for your differentiated claims is your own materials, the AI evaluator flags the claims as unverifiable. A human might give you the benefit of the doubt. The AI doesn’t.

Muck Rack’s “What is AI Reading?” analysis found that 82% of the content AI engines actually cite is earned media: not owned content, not press releases, not brand-controlled material. The same dynamic operates when corporate AI tools evaluate vendor proposals: the claims that carry weight are the ones with independent editorial corroboration. Your best differentiators need to live somewhere external before they’ll survive an AI evaluation pass.

3. Business case assembly

Forrester found 47% of B2B buyers use AI to build the internal business case for a vendor decision. That means the business case your champion brings to the approval meeting is often partly AI-assembled — pulling justifications from sources the AI can find and cite, not just from your proposal.

If there’s no Forrester Wave mention, no analyst quote about your category leadership, no independent case study the AI can reference, the business case gets thin. Your champion ends up arguing from your marketing materials alone. That’s a weaker position than a competitor whose analyst coverage the AI surfaced and built into the document.

Machine Relations’ research on how B2B buyers research vendors in AI engines maps how this plays out at each evaluation stage; it’s worth pulling up before your next proposal review.

What to audit this week

Run three checks before anything else.

Check your external corroboration layer. Open your industry’s trade press. Search for your brand name. Count the editorial mentions in the last twelve months — articles where a journalist or editor named your brand in a substantive context, not press releases or contributed content. How many appear when Copilot or Perplexity looks for independent validation of your claims?
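If you already track coverage in a spreadsheet, this count takes a few lines to automate. Here’s a minimal sketch in Python, assuming a CSV export of your coverage tracker; the file name and column names are hypothetical placeholders to adapt to your own data:

```python
# mention_count.py -- count substantive editorial mentions in the last 12 months.
# Assumes a CSV export of your coverage tracker with columns:
#   date (YYYY-MM-DD), outlet, type ("editorial", "press_release", "contributed").
# File name and column names are hypothetical; match them to your own tracker.
import csv
from datetime import date, timedelta

cutoff = date.today() - timedelta(days=365)

with open("coverage.csv", newline="") as f:
    rows = list(csv.DictReader(f))

# Keep only genuine editorial mentions inside the twelve-month window.
editorial = [
    r for r in rows
    if r["type"] == "editorial" and date.fromisoformat(r["date"]) >= cutoff
]

print(f"{len(editorial)} editorial mentions in the last twelve months.")
for r in editorial:
    print(f"  {r['date']}  {r['outlet']}")
```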

Run an RFP simulation. Take your most recent proposal. Paste the executive summary into ChatGPT or Perplexity. Ask: “Who is this vendor? What do outside sources say about their claims?” See what comes back. If the AI can’t find corroborating coverage, your buyer’s AI can’t either.
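To make this check repeatable across proposals, you can script it. Here’s a minimal sketch, assuming Python and the official OpenAI client; the model name, file path, and prompt wording are placeholders, and any LLM API your buyers plausibly use would work the same way:

```python
# rfp_simulation.py -- run the "who is this vendor?" check over a proposal summary.
# Assumes `pip install openai` and an OPENAI_API_KEY in the environment.
# The model name and file path are placeholders.
from openai import OpenAI

client = OpenAI()

# Load the executive summary of your most recent proposal.
executive_summary = open("executive_summary.txt").read()

prompt = (
    "You are evaluating a vendor proposal. Based on the summary below:\n"
    "1. Who is this vendor?\n"
    "2. What do outside sources say about their claims?\n"
    "3. Which claims can you NOT corroborate from external coverage?\n\n"
    f"Proposal summary:\n{executive_summary}"
)

response = client.chat.completions.create(
    model="gpt-4o",  # placeholder; use whichever model your buyers likely use
    messages=[{"role": "user", "content": prompt}],
)

# If this answer comes back thin, your buyer's AI pass will come back thin too.
print(response.choices[0].message.content)
```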

Map your evaluation-stage materials against editorial coverage. For every major claim in your standard proposal, such as implementation speed, NPS, or customer outcomes, identify whether the claim is independently supported in a credible external source. If it lives only in your materials, it’s unverifiable by AI. That’s a gap.
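One way to keep that map honest is to treat it as data rather than a document. Here’s a minimal sketch; the claims and sources below are hypothetical examples to replace with your own:

```python
# claims_audit.py -- flag proposal claims with no independent corroboration.
# The claims and sources below are hypothetical examples.

claims = [
    {"claim": "Fastest implementation in the category",
     "external_sources": ["TradeOutlet, 'Deployment benchmarks 2026'"]},
    {"claim": "Highest NPS in our vertical",
     "external_sources": []},  # only in our own materials: a gap
    {"claim": "40% reduction in onboarding time",
     "external_sources": []},
]

gaps = [c["claim"] for c in claims if not c["external_sources"]]

print(f"{len(claims) - len(gaps)} of {len(claims)} claims have external corroboration.")
for claim in gaps:
    print(f"GAP: '{claim}' exists only in brand-controlled materials.")
```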

Why this compounds back to the first filter

The second filter problem and the first filter problem have the same root cause and the same fix: earned media placements in publications that AI systems treat as authoritative.

The editorial layer that makes you visible in ChatGPT when a buyer runs a discovery query is the same layer that makes you credible when Copilot validates your RFP claims. The Forrester reference that gets you named in a “who leads this category” AI summary is the same Forrester reference a procurement AI builds into the business case your champion brings to the meeting.

You can’t solve the evaluation-stage problem independently of the discovery-stage problem. They’re the same infrastructure. Earned media placements in the right publications, placed through real editorial relationships, over time — that’s what Machine Relations defines as the new layer of earned authority. The mechanism hasn’t changed: a placement in a respected publication earned through a real relationship with an editor is still the most trusted signal that exists. What changed is that two AI filters now run before any human makes a decision. The publications that shaped human brand perception for decades are the same publications AI tools cite when they evaluate your proposals, build your buyer’s business case, and resolve your brand during procurement research.

The brands building that layer now will clear both filters. The ones that don’t will keep submitting strong proposals to rooms where the AI pre-evaluation already ran — and already found nothing.

Start with the audit. See where you actually stand.

Ready to see how you show up across AI engines? Run the visibility audit — it maps your brand’s editorial coverage against the sources AI tools actually pull from.


Sources:

  1. Forrester, “The State Of Business Buying, 2026” — January 21, 2026 — investor.forrester.com
  2. Forrester, “Zero-Click Is Only Half The AI Story” — February 12, 2026 — forrester.com
  3. Forrester, “B2B Buyers Make Zero-Click Buying Number One” — January 22, 2026 — forrester.com
  4. Muck Rack / Generative Pulse, “Earned Media Still Drives Generative AI Citations” — December 2025 — globenewswire.com
  5. Ahrefs, “ChatGPT’s Most Cited Pages” — ahrefs.com
  6. machinerelations.ai, “B2B Buyers Now Research Vendors in AI Engines Before Visiting Any Website” — machinerelations.ai/research/b2b-ai-vendor-research-2026