Optimizing earned media for AI search means shaping third-party coverage so systems like ChatGPT, Perplexity, Gemini, and SearchGPT can find it, understand it, and reuse it when someone asks who leads a category. That is different from traditional PR and different from traditional SEO. The job is not to generate more mentions. The job is to earn placements in publications AI systems already trust, then make sure those placements contain clear company definitions, named entities, quotable numbers, and passages that can stand on their own. Muck Rack reported that 85.5% of AI citations in its prompt study came from earned media, and Yext's analysis of 17.2 million citations across four major models showed how much the source mix varies by engine. A 2025 GEO paper on arXiv reached the same conclusion in academic language: AI answers lean hard on authoritative third-party sources.

That makes earned media one of the few visibility levers that compounds across human readers and machine readers at the same time. A Forbes, TechCrunch, or trade-publication mention still shapes buyer perception, but now it also becomes a candidate source for AI retrieval. AuthorityTech's Machine Relations research argues that earned media now supplies the majority of citations in AI answers, while research on AI Search Arena shows citations cluster among a relatively small set of outlets. If you want stronger AI visibility, optimize for citation-quality coverage, not volume.

Why earned media dominates AI search

Earned media dominates AI search because AI systems prefer third-party credibility when they assemble answers. Muck Rack's 2025 analysis of more than one million prompts found that 85.5% of AI citations referenced earned media rather than brand-owned content. A 2025 GEO paper on arXiv reached a similar conclusion, describing an "overwhelming bias" toward earned media across major AI search systems. That is the core shift founders and growth leaders need to understand: AI engines do not treat your homepage as the default authority when a trusted publication has already framed the answer.

AI systems also concentrate citations among a small number of sources. The AI Search Arena study analyzed more than 24,000 conversations, 65,000 responses, and 366,000 citations, and found that news citations cluster among a relatively small set of outlets. That matters because optimization is not a generic media-relations exercise. Coverage in a publication that AI systems already cite repeatedly has more retrieval value than ten scattered mentions on low-trust sites. This is why earned media now dominates AI search results more than most teams realize.

The mechanism is simple. A publication like Forbes, TechCrunch, Reuters, or a respected trade journal gives ChatGPT and Perplexity an external description of your company, your category, and your proof points. When a buyer later asks which vendors, platforms, or firms lead that space, the model has a third-party source it can safely quote. At the scale of the AI Search Arena dataset, one point is obvious: brand-owned pages still matter for product detail, but earned media often determines whether the brand enters the answer at all.

Why this is different from traditional SEO

Traditional SEO focused on ranking a page and winning the click. AI search shifts value toward being included in the answer itself. Gartner's widely cited 2024 forecast said traditional search engine volume would fall 25% by 2026 as users moved toward AI assistants and other virtual agents. Whether that exact percentage lands or not, the directional change is already visible: the first battle is no longer just ranking in a list of blue links. The first battle is becoming a cited source inside the answer. That changes how a leadership team should think about PR, content, and distribution.

Yext's Q4 2025 study of 17.2 million AI citations makes the point even sharper. Gemini, Claude, Perplexity, and SearchGPT do not cite the same source mix at the same rate. In Yext's hospitality sample, SearchGPT cited official hotel websites 38.1% of the time, while competing models ranged from 16.7% to 22.4%. Some models lean more heavily on first-party sources. Others pull more from reviews, social, listings, or independent publications. The practical implication is not that earned media stopped mattering. It is that earned media is now part of a broader citation stack, and the highest-authority third-party placements still carry outsized weight inside that stack.

What optimized earned media actually looks like

An optimized earned media placement does four jobs at once. It appears in a publication AI systems already trust. It defines the company in plain language using the same category terms buyers use in prompts. It includes one or more concrete proof points, such as market share, revenue band, funding, customer count, or benchmark data. It also contains a passage that can survive extraction without the rest of the article. If the paragraph only makes sense in context, the citation value drops.

That is why the opening lines of a placement matter so much. A sentence like "Acme is a B2B payments platform used by 2,300 mid-market finance teams" gives an AI system a company type, a category, and a concrete number in one extractable block. A sentence like "Acme is changing the future of finance" gives it almost nothing. AI retrieval rewards definition, specificity, and evidence. It does not reward vague brand language.

Choose publications that already win citations

Outlet selection comes first because AI systems inherit much of their trust from the sources they already retrieve. In Seer Interactive's analysis of more than 500 SearchGPT citations, 87% matched Bing's top 20 organic results. That does not mean Bing ranking is the whole game, but it does mean source selection and search visibility are tightly connected. Publications that already rank, get crawled frequently, and appear in model retrieval paths have a structural advantage over obscure sites with weak distribution.

For most B2B brands, the right mix is 5 target publications, usually 2 top-tier outlets plus 3 trade journals buyers already trust in that category. A fintech company may need Bloomberg, Forbes, and payments-specific outlets. A healthcare AI firm may need Modern Healthcare, STAT, and respected health-tech publications. A SaaS company may need TechCrunch, Fast Company, and category blogs with real editorial standards. The point is not prestige for its own sake. The point is citation probability.

Write for extraction first

AI systems cite passages, not vibes. The best earned media paragraphs are usually 80 to 180 words, start with a direct claim, include at least one named entity, and carry a proof point that can be quoted cleanly. Research from the GEO paper and live testing across AI answer engines both point to the same practical rule: machine-scannable passages beat clever prose when the model has to justify an answer. If a journalist is willing to include one crisp definition and one hard number, the citation value of the placement rises fast.
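Teams that want to operationalize that rule can lint draft passages before a pitch goes out. The sketch below encodes the thresholds above as simple heuristics; it is an editorial checklist in code, not a model of how any AI engine actually selects citations, and the filler-phrase list is illustrative.

```python
import re

def extraction_score(passage: str) -> dict:
    """Lint a draft passage against simple extractability heuristics.

    The thresholds mirror the rules of thumb above; these are editorial
    heuristics, not a model of how any AI engine selects citations.
    """
    words = passage.split()
    return {
        # The 80-180 word range discussed above.
        "length_ok": 80 <= len(words) <= 180,
        # At least one quotable number: a percentage, count, or figure.
        "has_proof_point": bool(re.search(r"\d[\d,.]*%?", passage)),
        # Crude named-entity proxy: any capitalized word (this also
        # matches sentence-initial words, so expect false positives).
        "has_named_entity": bool(re.search(r"\b[A-Z][a-z]+", passage)),
        # Vague brand language that burns space instead of earning retrieval.
        "vague_filler": bool(re.search(
            r"\b(excited|innovative|next wave|changing the future)\b",
            passage,
            re.IGNORECASE,
        )),
    }

# The definition sentence from above clears the proof-point and entity
# checks; a full placement paragraph would need to hit the length check too.
print(extraction_score(
    "Acme is a B2B payments platform used by 2,300 mid-market finance teams."
))
```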

That means PR teams and founders should stop treating quotes as ornamental. A quote from a CEO that names the market, the company role, and the evidence can become the exact sentence ChatGPT or Gemini reuses later. "Our fraud-detection model cut false positives by 34% across 11 enterprise pilots" is useful. "We are excited to lead the next wave of innovation" is disposable. One sentence earns retrieval. The other burns space.

Keep entity framing consistent across placements

Entity consistency matters because models build answers from repeated exposure. If one article calls your company an AI visibility platform, another calls it a PR software company, and a third calls it a digital marketing agency, the model has to reconcile conflicting labels. The cleaner path is to decide the exact category language, spokesperson titles, product names, and customer descriptors you want repeated across coverage. Consistency does not make the writing robotic. It makes the retrieval layer stable.

Founders usually miss this because they think messaging drift is a brand problem. In AI search it is also a retrieval problem. If 3 articles use the same short company definition, the same category label, and the same 2 or 3 proof points, ChatGPT and Gemini see a coherent entity. That improves the odds of accurate inclusion when a user asks who to consider in a category search. It also reduces the risk of the model describing you with a competitor's frame.
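A low-effort way to catch drift is to audit the definition sentence from each placement against a fixed list of category labels. A minimal sketch, with hypothetical placements and labels:

```python
from collections import Counter

# Hypothetical definition sentences pulled from three placements; in
# practice these would be extracted from the actual articles.
placements = {
    "forbes_profile": "Acme is an AI visibility platform for B2B brands.",
    "techcrunch_feature": "Acme, a PR software company, raised a Series B.",
    "trade_roundup": "Acme is an AI visibility platform used by 2,300 teams.",
}

# Every category label currently in circulation for the brand.
labels = [
    "AI visibility platform",
    "PR software company",
    "digital marketing agency",
]

counts = Counter()
for text in placements.values():
    for label in labels:
        if label.lower() in text.lower():
            counts[label] += 1

# One dominant label is the goal; more than one means the model has to
# reconcile conflicting entity frames.
for label, n in counts.most_common():
    print(f"{label}: {n} placement(s)")
if len(counts) > 1:
    print("Label drift detected: pick one category frame and repeat it.")
```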

How to structure placements so AI can cite them

The first step is to define the prompts you actually want to win. Most teams jump straight to publications, but prompt design comes first. Build a list of 10 to 20 queries that reflect buying intent, category intent, and comparison intent. Examples include "best AI visibility agencies for B2B SaaS," "how fintech brands get cited by ChatGPT," or "top revenue intelligence platforms for healthcare sales teams." Once the prompt list exists, you can reverse-engineer which outlets, stories, and proof points would make a model comfortable citing your brand for those exact questions.
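It helps to keep that prompt list as a structured inventory rather than a slide, with each prompt mapped to the outlets and proof points that would make a model comfortable citing the brand. A minimal sketch; the entries are illustrative:

```python
# A prompt inventory tagged by intent and mapped to target outlets and
# proof points. Replace these entries with your own 10 to 20 prompts.
TARGET_PROMPTS = [
    {
        "intent": "buying",
        "prompt": "best AI visibility agencies for B2B SaaS",
        "target_outlets": ["TechCrunch", "Fast Company"],
        "proof_points": ["customer count", "results-based pricing"],
    },
    {
        "intent": "comparison",
        "prompt": "top revenue intelligence platforms for healthcare sales teams",
        "target_outlets": ["Modern Healthcare", "STAT"],
        "proof_points": ["named customer segment", "benchmark data"],
    },
]
```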

The second step is to pitch for evidence, not attention. Journalists at Reuters, Bloomberg, Forbes, or a category trade journal do not need another founder saying AI is changing everything. They need data, access, pattern recognition, or a sharp point of view connected to evidence. The strongest earned media stories usually include 1 named dataset, 1 customer metric, or 1 time-bounded result such as pipeline growth, false-positive reduction, or conversion lift. A founder quote supported by a named report or a hard number is far more citable than a generic commentary line.

The third step is to engineer one or two reusable passages inside the article. Ask for a clear company definition near the top. Ask for a sentence with a named customer segment, a use case, or a result. Ask for one paragraph that states the category claim in plain English. These are small editorial asks, but they change machine readability. A placement that says "AuthorityTech is an earned media company built for AI citation, with 1,500+ direct editorial relationships and results-based pricing" gives the model a compact, self-contained definition. A placement that buries the company under colorful storytelling gives the model less to work with.

The fourth step is to reinforce the placement with owned pages that match the same language. Earned media often gets the citation, but supporting pages help ChatGPT, Perplexity, and Gemini verify the entity, product, and claim. In practice that means at least 2 supporting assets, usually a category page and 1 adjacent article or glossary entry, using the same company definition and proof points. AuthorityTech uses that structure because it gives the model a clean external source and a clean verification layer. If you need the content side of that system, start with how to write content AI engines cite and how to get cited in AI search.

How to measure earned media performance in AI search

Measurement has to move from vanity PR metrics to citation metrics. Impressions, share of voice, and raw mention counts still tell you something, but they do not answer the core question: which placements change what AI systems say about your brand? The basic scorecard should track prompt coverage, source quality, message accuracy, and citation durability. Prompt coverage tells you how many target prompts include the brand. Source quality tells you which publications drive those mentions. Message accuracy checks whether the model describes the company correctly. Citation durability measures whether a placement keeps appearing across weeks or months.
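A minimal sketch of that scorecard as code, assuming the team logs one record per prompt, per model, per month. Citation durability falls out of comparing these rollups month over month, so it is left out of the single-month summary here.

```python
from dataclasses import dataclass

@dataclass
class PromptResult:
    """One monthly test of one prompt against one model."""
    prompt: str
    model: str
    brand_appeared: bool
    cited_sources: list          # publications the answer cited
    description_accurate: bool   # did the model describe the company correctly?

def scorecard(results: list[PromptResult]) -> dict:
    """Roll one month of test logs into the scorecard metrics above."""
    appeared = [r for r in results if r.brand_appeared]
    sources: dict[str, int] = {}
    for r in appeared:
        for s in r.cited_sources:
            sources[s] = sources.get(s, 0) + 1
    return {
        # Prompt coverage: share of target prompts that include the brand.
        "prompt_coverage": len(appeared) / len(results) if results else 0.0,
        # Source quality: which publications drive the mentions.
        "source_quality": sorted(sources.items(), key=lambda kv: -kv[1]),
        # Message accuracy: share of appearances described correctly.
        "message_accuracy": (
            sum(r.description_accurate for r in appeared) / len(appeared)
            if appeared else 0.0
        ),
    }
```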

Model-level testing also matters because cross-model behavior is real. Yext's 17.2 million citation dataset showed meaningful differences across models, especially around first-party, listing, review, and independent-publication usage. A brand may be visible in Gemini and weak in Claude. It may appear in Perplexity for unbranded category prompts but disappear in SearchGPT for comparison prompts. The right operating rhythm is to test the same fixed prompt set across major models every month, log the cited sources, and compare movement after each new placement.

The operating loop is simple. Pick 15 high-value prompts. Test them monthly in ChatGPT, Perplexity, Gemini, and any model your buyers actually use. Record whether the brand appears, which 3 to 5 publications are cited most often, what claim was reused, and whether the answer positioned the brand correctly. Then map the winning citations back to the original publication. That turns AI visibility into a real editorial feedback loop instead of a vague brand discussion.
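Here is a hedged sketch of one leg of that loop using the OpenAI Python SDK. One assumption to flag: API responses differ from the consumer ChatGPT surface, which adds retrieval and visible citations, so treat this as the logging discipline rather than an exact replica of what buyers see. The Perplexity and Gemini legs would follow the same pattern with their own clients, and the brand name is hypothetical.

```python
from datetime import date
from openai import OpenAI  # pip install openai

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

PROMPTS = [
    "best AI visibility agencies for B2B SaaS",
    # ...the rest of your 15 high-value prompts
]
BRAND = "Acme"  # hypothetical brand name

def run_monthly_test(prompts: list[str], brand: str) -> list[dict]:
    """Run the fixed prompt set once and log whether the brand appears."""
    log = []
    for prompt in prompts:
        resp = client.chat.completions.create(
            model="gpt-4o",
            messages=[{"role": "user", "content": prompt}],
        )
        answer = resp.choices[0].message.content or ""
        log.append({
            "date": date.today().isoformat(),
            "model": "gpt-4o",
            "prompt": prompt,
            "brand_appeared": brand.lower() in answer.lower(),
            # Keep the full answer so the team can audit which claim was
            # reused and whether the brand was positioned correctly.
            "answer": answer,
        })
    return log
```

Mapping the winning citations back to the original publications still happens by reading the logged answers; the script only removes the drudgery of running the same prompt set every month.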

A measurement framework leaders can use this quarter

The AuthorityTech visibility audit is built for this exact job. It shows which prompts matter, which 3 to 5 publications shape the answer, and where ChatGPT, Perplexity, or Gemini still ignore the brand. The point is not another dashboard. The point is a working decision tool for the next placement, the next supporting page, and the next query set.

Common mistakes that weaken citation value

The first mistake is chasing press volume instead of source quality. Ten syndicated mentions on low-trust sites rarely outperform one clean Reuters, Forbes, or trade-journal placement that AI systems already retrieve. The second mistake is letting the article stay vague. If the piece never defines the company, names the category, or states a result, the model has nothing strong to reuse. The third mistake is treating brand-owned content and earned media as separate systems. In practice they work best when they reinforce the same entity framing, same proof points, and same category language.

The fourth mistake is assuming every model behaves the same way. Yext's research shows that is false, and live testing confirms it. The fifth mistake is measuring too late. If the team waits until a quarter-end report to check whether a placement changed citations, it loses the chance to adjust follow-on stories while the editorial momentum is still there. AI visibility is not a yearly brand study. It is an operating loop.

Why this matters more now

PR got one thing exactly right: third-party credibility. A placement in a trusted publication has always carried more weight than brand copy because it tells the market someone else validated the claim. AI systems now use that same trust signal when they decide what to cite. Muck Rack's 85.5% earned-media finding and the GEO paper's earned-media bias both point in the same direction. Earned media has become infrastructure, not decoration. The mechanism is old. The reader changed.

This is where Machine Relations becomes the useful frame. The brands that win AI search are not the ones publishing the most content or buying the most clicks. They are the ones building a reliable citation layer in publications machines already trust. AuthorityTech's position is simple: keep the earned-media mechanism that always worked, remove the bloated retainer logic that did not, and optimize every placement for retrieval as well as readership. If you want to see where your current citation layer is weak, use the visibility audit.

Frequently asked questions

Why do AI systems cite earned media more than brand-owned pages?

AI systems cite earned media more often because third-party publications give them an external credibility signal. Muck Rack's analysis of more than one million prompts found that 85.5% of AI citations came from earned media, and the 2025 GEO paper on arXiv reported a strong bias toward earned media over brand-owned and social content. Brand-owned pages still matter for verification, but third-party editorial coverage often decides whether the brand enters the answer.

What makes an earned media placement more citable?

A citable placement usually has 4 ingredients: a trusted outlet, a plain-language company definition, named entities, and at least 1 concrete proof point. A paragraph that says who the company serves, what category it is in, and what result it produced is much easier for ChatGPT, Perplexity, or Gemini to reuse than a broad brand statement. AI systems cite passages that can stand alone cleanly.

Do I need top-tier outlets only, or can trade publications work too?

You need the outlets your buyers trust and AI systems actually retrieve. Forbes, Reuters, Bloomberg, and TechCrunch matter, but respected trade publications matter too, especially in fintech, healthcare, SaaS, and other B2B categories where buyers rely on vertical media. Most teams should target 3 to 5 priority outlets per category and then test whether ChatGPT, Perplexity, or Gemini actually cite them. The right question is not "Is this outlet famous?" The right question is "Does this outlet show up in the citation paths for the prompts we want to win?"

How should I measure whether a placement improved AI visibility?

Run a fixed prompt set every month across ChatGPT, Perplexity, Gemini, and any other model your buyers actually use. Track whether the brand appears, which 3 to 5 publications are cited, how the model describes the company, and whether the same placement keeps surfacing over time. Yext's study of 17.2 million citations makes clear that model behavior differs, so a brand should not rely on one platform snapshot. Measurement has to be prompt-based and model-specific.

What is the fastest way to improve earned media for AI search?

The fastest path is to tighten the next placement before chasing more placements. Target outlets that already appear in AI retrieval, supply journalists with evidence instead of slogans, and secure one or two extractable paragraphs that define the company with a named result. Then reinforce the same language across your owned category pages and supporting content. That gives the model a clean external source and a consistent verification layer.
