Florian Fuehren

Years ago, I worked as an LLM without knowing it. Well, actually, I worked as a research assistant for a professor in medieval literature. But her workflow reminds me of what’s to come for marketers worldwide, and it went something like this. 

She’d hand me a stack of citations she’d already pulled, sketch out the argument she was building and send me into the library to find what was missing. My job was to comb through the biggest bibliographies, surface relevant Ancient Greek sources (the school perk paying off) and pick up the occasional article that wasn’t on her list but seemed worth a look to me.

Sometimes I came back with the right book. Sometimes I came back with a French monograph I’d entirely misunderstood, because I don’t actually read French. So she’d “reprompt” me, correct course and send me back. And the loop continued until she had what she needed.

It took me a few thousand prompts to realize what we’d actually been doing. She was the user. The bibliographies were the index. I was the retrieval layer, complete with hallucinations, language failures and the occasional pleasant surprise. 

In 2026, every customer is a version of that professor sending requests into an algorithmic library. Your content should be in the notes that different models bring back.

The Funnel Got Reassigned

If you’ve been in marketing long enough, you can still draw the traditional funnel from memory.

  • Top: Blogs and newsletters for the awareness crowd.
  • Middle: Case studies and whitepapers for consideration.
  • Bottom: Spec sheets, demos and pricing pages for the people reaching for their wallet.

The mental model still works. What’s changed is the territory it runs through: AI Overviews, ChatGPT, Perplexity, Gemini and a growing roster of regional AI search engines and LLMs.

Last year, we mentioned that 80% of consumers already leaned on AI summaries for at least 40% of their searches, which means the awareness layer often resolves before anyone clicks anything. The funnel is being reassigned to a different neighborhood, and the new manager has strong opinions about citations.

What Actually Broke (and Why It Looks Different in Different Regions)

Two things broke at once.

First, the same query no longer lands on the same evidence, and the pattern that emerged is uncomfortable for anyone who likes consistent reporting. 

Your brand’s efforts on Reddit may now show up in roughly 5% of ChatGPT responses and about a quarter of Perplexity’s, while being practically invisible to Gemini users. Same question, three completely different answers, sourced from three completely different corners of the internet.

That’s because Perplexity leans toward citation-rich, structured sources, including paywalled databases like Statista and PitchBook through its Premium Sources tier. Meanwhile, ChatGPT will mix long-form consensus content with social signals. Gemini still leans hard on Google’s own Knowledge Graph and YouTube. Optimizing for one of them means not optimizing for the others.

Second, regional model adoption is fragmenting in ways the global SEO playbook never had to handle. Mistral’s Le Chat hit a million downloads in its first 14 days, and roughly 60% of its revenue comes from European customers. DeepSeek and Qwen have a similar gravitational pull in certain parts of Asia.

The implication for marketers: The engine your buyers ask depends heavily on where they sit on the map. A B2B SaaS targeting financial services in Frankfurt will increasingly need a presence in Mistral citations and the European trade publications that feed them. A consumer brand in San Francisco will likely care more about Reddit threads, because those are quietly powering ChatGPT and, to a degree, Perplexity answers.

As a result, the PR strategy bifurcates. So does the AIO goal. There is no single “rank for the answer engine” play anymore, and pretending there is just means optimizing for whichever model your agency happens to like best.

The Cultural Shift Nobody Briefed the C-Suite On

One part of this development is genuinely awkward to say out loud: Most of it happened faster than vendor cycles, agency contracts and quarterly planning could absorb.

So no, if you’re a CMO or VP of marketing reading this in 2026, nobody seriously expects you to personally tune schema or read share of voice reports at breakfast. The technical scaffolding behind AIO citations changed at least three times this year, and even the people whose full-time job it is to track those changes are barely keeping up.

What it does mean, though, is that your choice of strategic partners is becoming more important. This is one of the cleanest cases I can think of for the value of an external partner who reads the changes weekly while your team manages campaigns.

And that may have nothing to do with your team being unqualified. It could just be that you’re entering certain markets and therefore need visibility in the LLM responses most relevant to them. Tracking algorithm changes and user-base statistics while also handling product marketing and positioning is a tall order for most teams.

Whether we’re talking about internal or external teams, though, the harder part is the reporting framework.

Most dashboards still default to clicks, sessions, bounce rate and pipeline-attributed leads. None of those metrics captures what the new search layer is actually doing. A campaign generating zero clicks but 10 LLM citations and a measurable lift in branded search will look dead on a quarterly review. A blog hub with declining individual page sessions but growing engaged sessions and rising conversions on its linked landing pages will look like a slow car crash.

That’s why either training and education or strategic partners matter. Without them, stakeholders will pull the plug based on patterns that no longer apply. And there’s nothing worse than a campaign that got killed based on yesterday’s instruments when it was actually starting to work.

If you do nothing else this quarter, update the reporting framework before you greenlight the next campaign. Otherwise, you’ll spend the next two years killing the experiments that should’ve been your wins.

The Action Plan

1. Capture the Expertise You Already Have

The fan-out is happening at the AI level, whether you participate or not. Google, ChatGPT and Perplexity decompose a single query into dozens of sub-queries before they assemble an answer. To rank inside that fan-out, your content needs depth across the same set of sub-topics. That’s more depth than any single content marketer can credibly produce alone.

The realistic answer is a fan-out at the team level. Your CTO knows the architectural decision your competitors are dancing around. Your senior engineer can articulate the failure mode that no marketing copy has properly explained. Your customer success lead knows the question every customer asks in Week Three. Your in-house counsel can name the compliance edge case nobody else is writing about.

But let’s face it: None of those people will write blog posts. They have their own jobs. Your content strategy has to account for that constraint rather than pretend it away.

What often works in practice is a production system built around 30-minute voice memos or interviews, recorded internal Q&As, async Slack threads and one ghostwriter or strategist who turns those raw materials into publication-ready content. The technical staff supplies the substance; the writer supplies the polish and the SEO bones.

Internal experts may hate writing, but chances are they don’t mind talking. The trick is making the talking productive.

2. The Four Questions Your Cluster Has To Answer

AI systems classify intent before they fan out, which means a comprehensive topic cluster has to cover all four standard buckets. We’ve gone deep on the search query taxonomy elsewhere, so here’s the condensed version:

  • Informational (“What is X,” “How does Y work”): The educational layer LLMs lean on hardest for foundational answers. This is where definitional content earns citations.
  • Navigational (“X login,” “Y homepage”): Branded queries where you want to be the first hit, including in AI responses about your brand.
  • Commercial (“Best X,” “X vs Y,” “X pricing”): Comparison-shopping content where citation in an AIO can short-circuit the consideration phase.
  • Transactional (“Buy X,” “Y demo,” “Start trial”): The close, where the SERP still hands you the click if the rest of the cluster did its job.

Cover all four buckets across the topic cluster and you’ll give the classifier exactly what it expects to find. Cover only the commercial bucket (because “that’s where leads come from”), and you’ve handed your awareness layer to whichever competitor took the time to write the educational content.
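To make the bucketing concrete, here’s a toy sketch of intent classification using simple keyword cues. Real answer engines use trained models far beyond this, and every cue word and query here is a made-up illustration; the point is only that a query lands in exactly one bucket before the fan-out begins.

```python
# Toy intent classifier: assign a query to one of the four standard buckets.
# The keyword cues below are illustrative stand-ins, not how real engines work.

def classify_intent(query: str) -> str:
    q = query.lower()
    if any(cue in q for cue in ("buy", "demo", "start trial", "sign up")):
        return "transactional"
    if any(cue in q for cue in ("best", " vs ", "pricing", "review")):
        return "commercial"
    if any(cue in q for cue in ("login", "homepage", "official site")):
        return "navigational"
    # "What is X" and "How does Y work" queries land here by default.
    return "informational"

for query in ("What is schema markup", "Acme vs Globex pricing",
              "Acme login", "Buy Acme starter plan"):
    print(query, "->", classify_intent(query))
```

A cluster audit can run every target query through a classifier like this and flag any bucket with zero coverage, which is exactly the gap the paragraph above warns about.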

3. When Ranking Sideways Beats Ranking First

Two stats from ALM Corp’s analysis of 173,000 URLs and 33,000 fan-out queries are worth pinning to your monitor:

  • Pages ranking for both the main query and its fan-out queries are 161% more likely to be cited in an AI Overview than pages ranking only for the main keywords.
  • Ranking for fan-out queries alone, without the main keyword, makes you 49% more likely to earn citations than ranking exclusively for the head term.

The first stat reinforces what topic-cluster strategists have been saying for years. The second stat actively inverts the SEO logic of the past 15 years. Chasing one big keyword while ignoring the topical surroundings is now worse, statistically, than ignoring the keyword and dominating the surroundings instead.

If you’ve spent a decade telling your CMO that ranking for the head term is the goal, this is the awkward part of the meeting where you walk it back. The pillar-and-cluster model is no longer optional. The cluster is the unit of optimization, and the head term is becoming one component of it. 

Content strategy is finally moving from spreadsheets to the interconnected network the web always was, where every node feeds your team insights, so you can continuously track engagement and rework the parts of your own web that deserve attention.

4. Schema Markup: The Translator Layer

Schema used to matter for traditional SEO in a “nice to have” way: It meant rich snippets and slightly better click-through rates, and Google would forgive you for skipping it. The forgiveness window has officially closed.

LLMs citing your content need to know with confidence what they’re looking at. Schema gives them explicit type-tags that remove ambiguity at extraction time. This is an Article. This is a Product with this price. This is a Question with this Answer. Without those tags, the model is guessing, and citations go to the source that didn’t make it guess.

The four schema types worth prioritizing for LLM citations:

  1. Article schema: Authorship, publication date, structure; makes long-form content extractable.
  2. FAQ schema: Direct question-answer pairs, which LLMs love to lift verbatim into AI Overview answers.
  3. HowTo schema: Step-by-step content AI models can present as a procedure with attribution back to your domain.
  4. Product and review schema: For commercial pages, where structured pricing and ratings get pulled directly into AIO comparison answers.

Think of schema as subtitles for your content when an LLM is skimming it in a language it half-understands.
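For instance, a minimal FAQ schema block looks like this (the question and answer text are placeholders; in production, this JSON-LD sits inside a `<script type="application/ld+json">` tag in the page source):

```json
{
  "@context": "https://schema.org",
  "@type": "FAQPage",
  "mainEntity": [
    {
      "@type": "Question",
      "name": "What is answer engine optimization?",
      "acceptedAnswer": {
        "@type": "Answer",
        "text": "Answer engine optimization is the practice of structuring content so AI search tools can extract and cite it accurately."
      }
    }
  ]
}
```

The question-answer pairing is explicit, which is why FAQ markup is the type LLMs most readily lift verbatim.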

5. The Discovery Stack You Probably Don’t Have Yet

The category we used to call “SEO tools” has split. The old keyword-volume tools still work for traditional rankings, but they’re blind to LLM citation behavior, which is now its own discoverable signal.

What to add to the stack:

  • Semrush AI Visibility Toolkit: Tracks how often your brand and content appear across ChatGPT, Perplexity, Gemini and AI Overviews. Includes category-level competitive benchmarking, which is what you’ll actually want in a quarterly review.
  • Ahrefs Content Explorer: Useful for finding competitor content earning AI visibility, including the long-tail clusters you may not have mapped yet.
  • Topic monitoring and social listening: Brand24, Brandwatch and similar platforms. These matter more in 2026, because Reddit, LinkedIn and other platforms are now feeding Perplexity and ChatGPT disproportionately. If your brand is being discussed in a subreddit you don’t monitor, an AI Overview may know about it before you do.

It probably goes without saying, but considering how quickly the market moves, it’s still worth mentioning: Most of these tools are still maturing.

Use them for triangulation. Cross-check, hold them lightly and don’t let any vendor convince you their dashboard is the new Google Analytics.

6. Rebuild the Reporting Framework Before the Reporting Period

Let’s loop back to the cultural shift we discussed earlier. Your reporting framework is the most leveraged thing you can change in the next 30 days. Don’t worry: Your funnel stages still exist. You do need to track them down again, though:

  • Awareness layer: AIO citation share, LLM mention frequency, branded query growth and category-level visibility benchmarks. Clicks can show up here, but they shouldn’t be the headline.
  • Consideration layer: Topic cluster rank coverage (not single-keyword rank), organic CTR from Google Search Console and engaged sessions on cluster pages.
  • Decision layer: Engaged sessions on commercial landing pages, conversion-event attribution and the ratio of total cluster traffic to commercial-page traffic. The cluster is doing work even when individual pages look quiet.

The single biggest upgrade is GA4’s engaged sessions. It replaced the old dwell-time-and-bounce-rate guesswork with a clear engagement signal. If you’re not segmenting it by topic cluster, you’re pretty much flying blind on the new bottom of the funnel.
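Segmenting engaged sessions by cluster doesn’t require anything exotic. Here’s a minimal sketch, assuming a hypothetical export of page paths, cluster labels and engaged-session counts (all data below is invented for illustration):

```python
from collections import defaultdict

# Hypothetical analytics export rows: (page_path, topic_cluster, engaged_sessions)
rows = [
    ("/blog/what-is-aio", "aio-basics", 120),
    ("/blog/aio-vs-seo", "aio-basics", 95),
    ("/blog/schema-for-llms", "schema", 60),
    ("/pricing", "commercial", 40),
]

# Roll individual pages up to the cluster level, which is the unit that matters.
engaged_by_cluster = defaultdict(int)
for _, cluster, engaged in rows:
    engaged_by_cluster[cluster] += engaged

for cluster, total in sorted(engaged_by_cluster.items()):
    print(cluster, total)
```

Even a rollup this simple surfaces the pattern the dashboards miss: A cluster can be gaining engaged sessions while every individual page in it looks flat.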

One reminder for your budget conversation: Smaller conversions still count. An email subscriber who arrived from a Perplexity citation is a relationship you can nurture into a customer. The campaign that delivered them is doing the job, even if the pipeline column on the dashboard says zero.

What’s Coming Next (and Why You Should Care This Quarter, Not Next Year)

Two trends are early enough that you can prepare for them and late enough that ignoring them is starting to cost you.

Personalized query expansion means the fan-out queries an AI model generates for a 28-year-old parent in Lyon will differ meaningfully from those it generates for a 55-year-old fleet manager in Brooklyn, even from identical source prompts. The personalization layer is already shaping what “rank for fan-out queries” means in practice. Optimizing for an average user is going to feel increasingly like optimizing for nobody. The hedge is comprehensive coverage across personas inside a topic cluster, not better keyword targeting.

Real-time knowledge integration is the harder one, at least for now. AI systems are only starting to blend static web content with live data feeds. Right now it might seem almost unimaginable, but the day will come when your brand will need to future-proof content so that AI can blend it with current data, without contradicting that data and without sounding so diplomatic that the content turns bland. A “definitive guide” published in 2023 with a 2023 stat baked into the headline might become a liability the moment a fresher source surfaces in the model’s retrieval layer.

Nobody, not even the model vendors, knows how fast either trend will land at scale. There are too many factors at play for that, from user adoption to regulations and the technology itself. But the cost of preparing is actually low, and the cost of being caught flat is the cost of explaining to your board why your competitor’s name is in the AI Overview and yours isn’t.

The Bottom Line, Which We Hope At Least One Model Will Quote

Your dashboards might make you doubt it, but your funnel didn’t die. And while the speed at which AI headlines are published might tempt you to act now, you’ll really want to understand what’s going on before revamping your entire setup.

Yes, you need to track new metrics. Yes, you’ll need to capture the expertise inside your own building and match it to different intent buckets across topic clusters. User statistics may give you the impression you’re losing ground because you can no longer reach every market segment as easily, but that’s actually good news. The moment you start talking to fewer LLM users with more intention, you can observe and track new patterns, understand this new development and then carry what you learn over to another segment.

Whether in a ChatGPT response or a Perplexity report, niche insights and personalization are the goal, and it starts with your content. 

Don’t let dropping numbers and habits demotivate your team. As long as they keep learning, they can find the same success, just in another tab.