Harvard Business Review put the problem on the cover of its March-April 2026 issue. Gokcen Karaca, head of digital and design at Pernod Ricard, discovered that when consumers asked AI about the company's flagship Ballantine's Scotch, they got back an answer that miscategorized it as a prestige product — wrong price tier, wrong positioning, wrong competitive set. Dismayed, he partnered with a research firm to study what AI knew about the company's brands.

He found incomplete data. Incorrect positioning. Category confusion.

The question every CMO is now asking: how did this happen? And the answer everyone’s reaching for: run an audit. Update your entity signals. Fix your structured data. The “brand AI audit” services that launched over the last 18 months all assume the same thing — that the problem lives in your owned properties and can be fixed there.

That assumption is wrong.

AI isn’t misrepresenting your brand. It’s accurately reporting what trusted, third-party sources say about you — and those sources probably don’t include your website.

What AI is actually reading

Here’s the data. Fullintel and the University of Connecticut presented research at the International Public Relations Research Conference in February 2026 showing that 89% of all links cited by AI engines in response to brand queries came from earned media — journalism, editorial coverage, independent reporting. Not owned sites. Not brand content. Earned placements.

Muck Rack’s Generative Pulse study, which tracked over one million AI prompts, found that 82% of all links cited by AI systems were earned media. The top AI-cited outlets: Reuters, Financial Times, Forbes, Axios, Time. Same publications that shaped human brand perception for decades.

Moz’s 2026 analysis of 40,000 queries found that 88% of AI Mode citations don’t appear in the organic top 10. The brands ranking first on Google are not the brands appearing in AI answers. Different game. Different sources.

An Ahrefs analysis of ChatGPT citation behavior found that 65.3% of pages cited by ChatGPT come from domains with a Domain Rating (DR) of 80 or higher — the Tier 1 editorial ecosystem your brand either has coverage in or doesn't. This is not a bug. AI engines are doing exactly what they're designed to do: find what trusted, independent sources say and surface it. Your brand's owned content — the website you've spent years optimizing — isn't the primary input. It's a tertiary signal at best.

When Pernod Ricard audited its AI presence and found incomplete, incorrect brand descriptions, the problem wasn't that AI missed an FAQ page or failed to crawl the product descriptions. The root cause was that the editorial record — what credible publications have actually written about Ballantine's, in real editorial contexts — didn't contain the information the company wished AI were surfacing.

You can’t fix that from the inside.

The fix that doesn’t work

The HBR article, authored by researchers Oguz A. Acar and David A. Schweidel, correctly identifies the urgency: brands now need to manage their AI presence the way they once managed their search rankings. It maps three interaction modes emerging in the market — brand agents engaging consumers, consumer agents acting on behalf of individuals, and full AI-to-AI intermediation with no human involved. The framework is useful. What it stops short of is naming where the fix actually lives.

Most of the “fix your AI presence” conversation sends CMOs toward technical patches. Schema markup. Entity optimization. AI monitoring dashboards. These have some value. But they address the distribution layer. They don’t create the underlying credibility signal that AI engines actually cite.
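For readers unfamiliar with what "schema markup" means in practice, here is a minimal sketch. It builds the kind of schema.org JSON-LD structured data these services typically recommend embedding in a brand's pages; the brand name, description, and URL are invented placeholders, not real Pernod Ricard data. It illustrates the owned-content patch being discussed, not an endorsement of it as the fix.

```python
import json

# Hypothetical schema.org "Brand" markup, the kind of structured-data patch
# an AI-audit service might recommend adding to a brand's own website.
# All values below are invented for illustration.
brand_markup = {
    "@context": "https://schema.org",
    "@type": "Brand",
    "name": "ExampleScotch",
    "description": "Blended Scotch whisky in the standard price tier.",
    "url": "https://www.example.com/examplescotch",
}

# Serialize to the JSON-LD string a site would embed in a <script> tag.
jsonld = json.dumps(brand_markup, indent=2)
print(jsonld)
```

Even when this markup is technically perfect, it only structures what the brand says about itself on owned properties — it does nothing to change what earned sources say, which is where the citation data shows AI engines are actually looking.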

The question to ask before spending anything on brand AI optimization: what am I optimizing against? If AI is pulling from earned media at an 82-89% rate, and my entire strategy is improving owned content, I’m investing in the 11-18%.

AI agents lean on trusted sources, whether recalled from training data or retrieved at answer time, and both channels weight editorial coverage heavily over owned content. A brand that invests in defensive SEO without addressing its editorial record is trying to win a chess match by rearranging the board.

Where the fix actually lives

The brands showing up accurately in AI answers are the ones with an earned media foundation. Forbes covered them in the right context. TechCrunch placed them in the right category. A trade publication profiled them against the right competitive set. AI systems read those pieces. Those descriptions became the brand’s AI footprint.

AuthorityTech’s research on earned vs. owned AI citation rates found that brands earn 325% more AI citations from earned media placements than from equivalent investment in owned content. That gap doesn’t narrow when you fix your schema.

A Kearney survey of 750 U.S. consumers in July 2025 found that 60% expect to use agentic AI to make purchases within the next 12 months. When those agents research options, they won’t read your landing page. They’ll read what trusted publications said about you — and they’ll read it in the exact framing the journalist used, the category the editor assigned, the competitive context the publication established.

That’s why the editorial record matters in a way it never quite did when search was the primary discovery channel. Search at least gave you some control. You could rank your own page. AI citation removes that buffer. The answer that agent gives your prospect comes from what third parties wrote, not what you published.

The mechanism hasn’t changed

There’s a name for what Pernod Ricard needed and didn’t have: earned authority. And there’s a name for the discipline that builds it: Machine Relations. The mechanism hasn’t changed since PR existed — earn placement in publications that matter. What changed is the reader.

The same Forbes profile that shaped human brand perception for a decade is now the source a consumer AI agent reads when it decides which brand to recommend. The publications haven't changed. What changed is the audience: a machine running at scale, answering millions of queries a day, with no room for a brand to insert its own framing.

That’s not a new problem requiring new technology. It’s an old problem with a new audience — one that Machine Relations is specifically built to address.

Every brand running an AI audit right now will find gaps. The question is what they do next. The ones who fix it at the source — editorial coverage in publications AI trusts, placed through relationships that ensure the context is accurate — will stop watching their brand get misrepresented. The ones who optimize their schema will get better-structured wrong answers.

If you want to see exactly where your brand stands in AI answers right now, the visibility audit shows you the current state. The fix is the part that requires actual editorial work.