This week’s stat that should keep your pipeline team up at night: 84% of B2B buyers now use AI to research vendors before speaking to a single human. That’s up from 24% a year ago. (Salesmotion, 2026)
Jaxon broke down why this matters strategically in today’s other issue. This one covers what to do about it Monday morning.
Here’s the practical reality: buyers are asking ChatGPT, Perplexity, and Gemini “who should I use for X?” before they fill out your demo form, before they open your cold email, before they click your ad. The AI answers them. If your brand is in the answer, you’re on the shortlist. If not, the evaluation closes without you.
Five moves that fix this. In order of impact.
Before spending a dollar on anything else, you need to know where you actually stand.
This takes 20 minutes. Open ChatGPT, Perplexity, Claude, and Gemini. Ask the 5-10 questions your buyers are most likely to ask when evaluating your category. Write down which brands appear in the answers. Do this for every platform.
What you’re measuring: recommendation rate — how often your brand appears vs. your competitors for relevant buyer queries across AI engines.
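If you want to keep score across audits, the metric is simple enough to compute from a hand-logged spreadsheet. A minimal sketch, assuming you record which brands each engine names per query; the engine names, queries, and brands below are illustrative, not real audit data:

```python
def recommendation_rate(answers, brand):
    """Share of AI answers in which a brand appears.

    `answers` maps each (engine, query) pair to the list of brands
    named in that engine's answer, recorded by hand from the 5-10
    buyer queries you ran per platform.
    """
    total = len(answers)
    hits = sum(1 for brands in answers.values() if brand in brands)
    return hits / total if total else 0.0

# Illustrative log from a manual audit (all names are made up).
log = {
    ("chatgpt", "best CRM for startups"): ["Acme", "Rivalco"],
    ("perplexity", "best CRM for startups"): ["Rivalco"],
    ("gemini", "best CRM for startups"): ["Acme", "Rivalco", "Thirdco"],
    ("claude", "best CRM for startups"): ["Rivalco"],
}

print(recommendation_rate(log, "Acme"))     # 0.5
print(recommendation_rate(log, "Rivalco"))  # 1.0
```

Re-run the same query set monthly with the same log format and the trend line is your delta.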
Most teams run this and get a number that’s zero or close to it. That’s not a branding problem. That’s a citation infrastructure problem. The two have very different fixes.
AuthorityTech’s free tool automates this: app.authoritytech.io/visibility-audit. Runs the query set, maps your citation gap vs. competitors, gives you a baseline in minutes. Worth starting here so you know the delta you’re trying to close. Bryj’s 2026 marketing report identifies AI citation tracking as one of the top 3 emerging B2B KPIs this year.
AI engines don’t just search — they resolve. When a buyer asks “who’s the best [category] for [use case],” the AI cross-references entity signals across multiple sources to verify credibility before recommending a brand.
What entity resolution depends on:

- A consistent brand name, description, and category across owned and earned media
- Machine-readable entity records: Wikipedia, Wikidata, schema markup on your domain
- Authority directories (LinkedIn, Crunchbase, and similar) that agree with each other
If your entity isn’t cleanly resolved, the AI can’t confidently recommend you even if you have great press. This is different from SEO — it’s not about ranking, it’s about the AI being able to verify you exist and are what you claim to be.
Practical fix: audit your Wikipedia presence, add/update Wikidata entries, ensure your structured data (schema markup) is consistent across your domain and any earned media, and check that your LinkedIn company page, Crunchbase, and other authority directories are current and consistent with each other.
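The "structured data" piece of that fix is JSON-LD schema markup embedded in your pages. A minimal sketch of an Organization record; every value here is a placeholder, and the point is that name, url, and sameAs links should match what Wikipedia, Wikidata, LinkedIn, and Crunchbase say about you:

```python
import json

# Hypothetical Organization record -- all values are placeholders.
org = {
    "@context": "https://schema.org",
    "@type": "Organization",
    "name": "ExampleCo",
    "url": "https://www.example.com",
    "sameAs": [
        "https://www.wikidata.org/wiki/Q0000000",
        "https://www.linkedin.com/company/exampleco",
        "https://www.crunchbase.com/organization/exampleco",
    ],
}

# Embed the output in a <script type="application/ld+json"> tag site-wide.
print(json.dumps(org, indent=2))
```

The sameAs array is what lets an AI engine tie your domain, your Wikidata entry, and your directory profiles into one resolvable entity instead of four ambiguous mentions.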
This is the biggest tactical shift most teams need to make, and it’s not about doing more PR — it’s about doing smarter PR.
82-89% of AI-generated answers cite earned media over brand-owned content. (Wordstream, 2026; MarTech, 2026) But not all earned media is equal for citation purposes. There’s a massive difference between:
Coverage that gets cited: A feature article that attributes specific data points to your CEO, includes a named methodology, uses clear entity markers, and appears in a publication AI engines weight as authoritative.
Coverage that doesn’t get cited: A brand mention in a roundup, a quote buried in paragraph seven, a press release picked up by a wire service, an announcement that has no extractable data.
When you brief a journalist, pitch for features — not just mentions. Push for your data to be named and attributed (e.g., “According to AuthorityTech’s 2026 Machine Relations report…”). Get expert quotes framed as entity attributions. The AI is looking for signals it can extract and cite. Make the journalist’s article easy for it to parse.
Tier 1 publications (Forbes, TechCrunch, WSJ, Fast Company) carry disproportionate AI citation weight — not just because of their domain authority, but because AI engines have processed them as authoritative sources in their training data. One well-placed feature in a Tier 1 outlet with clean citation architecture outperforms ten lower-tier mentions for AI visibility purposes.
B2B buyers aren’t just researching companies. They’re researching the people behind them.
When a CMO asks an AI “who are the top experts in AI-native PR?” — the names that appear become credibility proxies for the companies they run. If your CEO or head of product isn’t appearing as a named expert in AI answers for your category, you’re missing a critical trust signal.
Practical moves:

- Pitch your CEO and subject-matter leads as named experts for earned features, not just company spokespeople
- Get data and quotes attributed to the individual by name and title, with clear entity markers
- Keep executive bios, LinkedIn profiles, and author pages consistent with the brand entity
The individual authority compounds with brand authority. Every time an AI cites Jaxon Parrott as a Machine Relations expert, it reinforces the AuthorityTech entity signal. The two are connected — optimize both in parallel.
The single biggest mistake teams make after getting this right once: they stop.
AI engines’ retrieval systems update continuously. One burst of great coverage fades. Competitors who maintain consistent publication velocity displace you over time. The algorithm credibility moat is built through consistency, not campaigns.
The threshold that drives compounding AI visibility gains: 12+ optimized pieces per month. Teams publishing at that rate see up to 200x faster visibility gains than sporadic publishers. (Autobound, 2026; Wesley Clover, 2026)
What counts:

- Earned features that attribute named data, quotes, and methodology to your brand or executives
- Coverage in publications AI engines treat as authoritative, with clean entity markers
What doesn’t count (for citation purposes):

- Unattributed brand mentions in roundups
- Wire-service press release pickups
- Announcements with no extractable data
Build a velocity calendar. Lock in your publication targets for the quarter. Treat citation architecture as an always-on program, not a launch sprint.
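A velocity calendar only works if you check pace mid-month, not after it. A minimal planning sketch, assuming the 12-piece threshold above and a roughly 30-day month; `on_pace` is a hypothetical helper, not a product feature:

```python
from datetime import date

MONTHLY_TARGET = 12  # the 12+ optimized-pieces threshold cited above

def on_pace(published_this_month: int, today: date) -> bool:
    """Rough pace check: is publication tracking toward the monthly target?

    Assumes a ~30-day month; purely an illustrative planning helper.
    """
    expected = MONTHLY_TARGET * today.day / 30
    return published_this_month >= expected

print(on_pace(5, date(2026, 3, 15)))  # 5 vs ~6 expected -> False
print(on_pace(7, date(2026, 3, 15)))  # 7 vs ~6 expected -> True
```

The point of the check is behavioral: a miss on day 15 is recoverable, a miss discovered on day 30 is a lost month of citation signal.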
Before end of next week:

- Run the visibility audit and record your baseline recommendation rate
- Fix your entity records: Wikipedia, Wikidata, schema markup, authority directories
- Brief your next earned-media pitch for citation, not just a mention
- Put your executives’ names into that pitch as attributed experts
- Lock in a quarterly velocity calendar
The 84% number is only going higher. The buyers doing AI research this year will be joined by every cohort as the tools become more capable. The time to build citation infrastructure is before your competitors lock in the shortlist position that used to require a cold email to earn.
— Christian
Full strategic breakdown of the AI buyer problem and the Machine Relations framework: authoritytech.io/blog — Jaxon’s post from today covers the pipeline math in depth.