Something is off in a lot of B2B marketing analytics right now, and most teams are missing it because they’re looking at the wrong number.
Gushwork, a startup helping companies capture AI-driven leads, told TechCrunch that across their 300-plus customer base, roughly 20% of website traffic now comes from AI-driven search and chat platforms — but those sources account for around 40% of inbound leads. The traffic share is small. The lead share is double.
Microsoft’s own platform data confirms the direction: Copilot-assisted customer journeys are 33% shorter on average than traditional search, and high-intent conversion rates are 76% higher for AI-powered experiences compared to traditional search surfaces. Separate analysis from Microsoft Clarity, tracking AI referrals across 1,200 publisher and news sites, found users converting at up to three times the rate of traditional channels.
The problem isn’t that these numbers are unknown. The problem is that almost no B2B team has attribution in place to see them — so they keep optimizing for the channel producing worse outcomes.
This week, Informa TechTarget — a $4 billion B2B media company with 220 digital properties and 50 million permissioned audience members — launched an AI Visibility Audit and GEO Topic Planner specifically for the zero-click B2B buyer journey. Their stated reason: B2B buyers are adopting AI-powered search at three times the rate of consumers, and up to 60% of searches now end without a click. When a company TechTarget’s size moves on this, the market has already moved. The question is whether your attribution setup has too.
Here’s the four-step playbook to fix that.
Most GA4 configurations treat referrals from ChatGPT, Perplexity, Claude, and Gemini as direct traffic or lump them into a miscellaneous referral bucket, which means the conversion advantage of these channels is invisible.
The fix takes about 30 minutes. In GA4, create a custom channel group that explicitly segments AI platforms: ChatGPT (chatgpt.com, chat.openai.com), Perplexity (perplexity.ai), Claude (claude.ai), Gemini (gemini.google.com), and Copilot (copilot.microsoft.com).
Tag each as a separate acquisition channel, not lumped into “referral.” Set conversion events for your high-intent actions — demo requests, consultation bookings, trial signups — and let these channels report against those events separately.
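In GA4 itself this is configured through the Admin UI, but the grouping logic is worth sketching in code so the team agrees on what counts as an AI referrer. A minimal Python sketch, assuming an illustrative (not exhaustive) list of AI platform hostnames:

```python
# Sketch of the AI channel grouping logic. The hostname list is an
# assumption -- verify it against your own referral reports before
# committing the channel group in GA4.
import re

# Referrer hostnames commonly seen from AI platforms (illustrative list)
AI_REFERRER_PATTERN = re.compile(
    r"(chat\.openai\.com|chatgpt\.com|perplexity\.ai|claude\.ai"
    r"|gemini\.google\.com|copilot\.microsoft\.com)",
    re.IGNORECASE,
)

def classify_channel(referrer: str) -> str:
    """Return 'AI Search' for known AI platform referrers, else 'Referral'.

    An empty referrer maps to 'Direct', mirroring default analytics behavior.
    """
    if not referrer:
        return "Direct"
    if AI_REFERRER_PATTERN.search(referrer):
        return "AI Search"
    return "Referral"

print(classify_channel("https://chatgpt.com/"))        # AI Search
print(classify_channel("https://example.com/blog"))    # Referral
```

The same pattern can be pasted into GA4's "source matches regex" condition when defining the custom channel, so the spreadsheet logic and the analytics config stay in sync.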
Add “How did you hear about us?” to any demo or contact form with an explicit AI search option. This is your ground-truth check on whether the tracking is capturing the right traffic. Microsoft internal research shows 22% more unique chat turns per session in Copilot compared to traditional search — each turn clarifying intent and moving the buyer closer to a decision. The buyer who fills out your form has been through a more complete research process before they click.
Once you have 30 days of clean data, you’ll probably find AI search is your smallest traffic source and your best-converting one. That’s the realization that changes budget allocation.
Before you optimize for AI citation, you need to know your current position. Most brands don’t.
Manually run the decision-stage queries your buyers would use in ChatGPT and Perplexity this week: which vendors lead your category, how your product compares with the incumbent, and which alternatives exist for your buyers' specific use cases.
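The query list itself can be generated from a handful of templates so the audit is repeatable month over month. A sketch with hypothetical placeholder names (substitute your own category, product, and competitors):

```python
# Hypothetical templates for the manual AI visibility audit. The template
# wording and the example names below are placeholders, not a canonical list.
AUDIT_QUERY_TEMPLATES = [
    "best {category} for {use_case}",
    "{product} vs {competitor}",
    "alternatives to {competitor} for {industry} companies",
    "who are the leading {category} vendors",
]

def build_audit_queries(category: str, product: str, competitor: str,
                        use_case: str, industry: str) -> list[str]:
    """Expand each template into a concrete query to paste into ChatGPT/Perplexity."""
    return [
        t.format(category=category, product=product, competitor=competitor,
                 use_case=use_case, industry=industry)
        for t in AUDIT_QUERY_TEMPLATES
    ]

# Example expansion with placeholder names
for q in build_audit_queries(
    category="customer success software",
    product="AcmeCS",
    competitor="IncumbentCo",
    use_case="mid-market SaaS",
    industry="fintech",
):
    print(q)
```

Paste each expanded query into ChatGPT and Perplexity and log the results in a shared sheet, so the next audit measures movement against the same baseline.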
Record what you find. Are you mentioned? Are you described accurately? Are competitors getting recommendations you should be getting? Is there a capability gap the AI is citing that you actually address but that hasn’t surfaced in your content?
This audit tells you two things: where you already have presence (meaning existing content is being cited and should be maintained), and where you have gaps (meaning the content AI needs to recommend you either doesn’t exist yet or isn’t indexed properly).
The TechTarget data is worth noting here. Their AI-driven audience membership quadrupled in 2025 — not from algorithm optimization, but because they had 220 properties with structured, consistently updated B2B content that AI engines could reliably extract from. The goal is to become that kind of reliable source in your category, even at smaller scale.
This is where most teams get the GEO strategy wrong. They focus on awareness content — “what is [category]” pieces — which builds AI visibility but not AI-referred pipeline. When a buyer asks ChatGPT “what is customer success software,” the AI answers the question completely and the buyer never clicks through to your site.
The queries that produce demos are comparison and decision-stage queries: “best [category] for [specific use case],” “[your product] vs [competitor],” “alternatives to [incumbent] for [industry] companies.”
For each of these query types, build a dedicated page that does four things:
Opens with a direct answer in the first 50 words. AI engines pull opening passages when formulating recommendations. A long intro that buries the thesis doesn’t get extracted.
Includes specific comparisons with actual criteria. Pricing tiers, integration support, implementation time, support model. Vague claims (“we’re more user-friendly”) don’t get cited. Concrete ones (“our implementation takes two weeks versus the category average of six to eight weeks”) do.
Uses question-based H2 headings that match buyer query phrasing. This isn’t keyword stuffing — it’s structural alignment with how AI search parses and surfaces content.
Implements FAQ schema on every decision-stage page, and cites concrete statistics throughout. Adding statistics improves AI citation rates by 30–40%, per the Princeton/Georgia Tech research on Generative Engine Optimization (the foundational study in this space). Schema gives AI engines clean structured data to pull from, and sites that implement it show up more reliably in category recommendation queries.
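FAQ schema is standard schema.org JSON-LD embedded in the page. A minimal Python sketch that generates a FAQPage block from question/answer pairs (the example question and answer are placeholders):

```python
# Minimal sketch of emitting schema.org FAQPage JSON-LD for a
# decision-stage page. The Q/A content below is a placeholder.
import json

def faq_jsonld(pairs: list[tuple[str, str]]) -> dict:
    """Build a schema.org FAQPage object from (question, answer) pairs."""
    return {
        "@context": "https://schema.org",
        "@type": "FAQPage",
        "mainEntity": [
            {
                "@type": "Question",
                "name": question,
                "acceptedAnswer": {"@type": "Answer", "text": answer},
            }
            for question, answer in pairs
        ],
    }

snippet = json.dumps(
    faq_jsonld([
        ("How long does implementation take?",
         "Implementation takes two weeks on average."),
    ]),
    indent=2,
)
# Paste the output inside a <script type="application/ld+json"> tag
print(snippet)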
One operational rule: content updated within the last 30 days gets cited significantly more often than older content. The Ahrefs analysis of ChatGPT’s most-cited pages found 76.4% had been updated within the last 30 days. Your comparison and alternative pages need to be living documents. Refresh them when a competitor changes pricing, drops a feature, or adds an integration.
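The 30-day rule is easy to automate. A sketch of a freshness check, assuming last-updated dates are available from your CMS or sitemap `<lastmod>` values (hard-coded here for illustration):

```python
# Sketch of a 30-day freshness check for decision-stage pages. In practice
# the dates would come from your CMS or sitemap; they are hard-coded here
# purely for illustration.
from datetime import date, timedelta

FRESHNESS_WINDOW = timedelta(days=30)

def stale_pages(pages: dict[str, date], today: date) -> list[str]:
    """Return URLs whose last update falls outside the 30-day window."""
    return [url for url, updated in pages.items()
            if today - updated > FRESHNESS_WINDOW]

pages = {
    "/vs-incumbent": date(2025, 1, 2),   # stale: last touched months ago
    "/alternatives": date(2025, 3, 1),   # fresh: updated this month
}
print(stale_pages(pages, today=date(2025, 3, 10)))  # ['/vs-incumbent']
```

Run it weekly and route the stale list into whoever owns the comparison pages; the refresh triggers the article names (competitor pricing change, dropped feature, new integration) are the editorial side of the same loop.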
Your website is a weak citation source. Industry research shows AI engines are 6.5x more likely to cite a brand through third-party sources than through its own domain. That gap traces back to how AI engines are trained and how they retrieve sources: 82% of all links cited by AI engines are earned media, per Muck Rack's Generative Pulse analysis, and 95% come from non-paid sources. Press release volume has grown 5x, yet press releases still account for only 1% of citations. Paid distribution doesn't produce citation authority; earned editorial placements do.
The Bain study found 80% of consumers now rely on AI summaries at least 40% of the time on traditional search engines — and those summaries pull from third-party editorial sources. The implication for B2B is that when your buyer asks an AI which vendors to consider, the answer is built from what’s been written about you elsewhere, not what you’ve written about yourself.
The platforms that matter most, in order of AI citation influence:
Tier 1 — G2, Capterra, TrustRadius: These are the most-cited software review platforms across ChatGPT, Perplexity, and Google AI Overviews. An up-to-date G2 profile with recent reviews produces citations in vendor comparison queries. An outdated one with stale reviews doesn’t.
Tier 2 — Editorial placements in trusted publications: TechCrunch, Forbes, Business Insider, and vertical-specific publications that AI engines index as authoritative. The Moz analysis of 40,000 queries found 88% of AI Mode citations were not in the organic top 10 — meaning traditional SEO rank doesn’t predict AI citation. What predicts it is publication authority and editorial context.
Tier 3 — Reddit and community platforms: Authentic community presence — answering questions, contributing to discussions — builds the UGC citation layer that AI treats as social proof. Brands that participate genuinely in relevant communities show up in AI recommendations. Brands that don’t are absent from that layer.
The compound effect is what matters. G2 reviews, editorial coverage, and community mentions across multiple trusted sources, all pointing to the same brand entity, are how AI engines build recommendation confidence. Each layer reinforces the others. This is also the mechanism that explains why declining website traffic doesn't mean your brand is losing ground: if your third-party citation network is growing, your AI recommendation frequency may be growing even as Google clicks fall.
This is where Machine Relations operates as the infrastructure, not just the strategy. The mechanism: earned placements in publications that AI engines already trust become the citation source when your buyer asks ChatGPT who leads your category. The same third-party editorial credibility that built brand reputation with human readers now builds AI visibility with machine readers. Earned authority is what gets cited — brand-owned content is what gets deprioritized. Your citation architecture is the asset; your website is just where they land after the decision has already formed.
The short version: Track AI referrals separately. Audit your current position in AI answers. Build comparison content optimized for decision-stage queries. Invest in third-party citation density over owned content volume.
The conversion gap between AI-referred leads and traditional search traffic isn’t closing — it’s widening as buyers use AI for more of their research process. The teams with attribution in place to see it will also be the ones who know where to direct their next dollar.
Run your AI visibility audit to see where your brand currently stands in AI answers.