
Google AI Overviews: What We've Learned After a Year of Optimization

By Ilyas Mrani | March 6, 2026 | 10 min read

Google AI Overviews had a rough launch. Incorrect answers. Moments of comedy (glue on pizza). A lot of content marketers writing it off as temporary.

A year later, it's not temporary. AI Overviews now appear on a large and growing share of B2B queries, and for many commercial categories they sit above the traditional results. If you're not showing up in them, Position 1 means less than it used to.

We've been tracking client AI Overview performance closely since they became meaningful. Here's what's actually changed, what works, and where the traps are.

What AI Overviews actually do

When AI Overviews appear, Google generates a synthesized answer from multiple sources, with inline citations. Users can expand the answer, click source links, or scroll past to traditional results.

Critically: the sources cited in the AI Overview aren't always the top-ranking pages. Google is explicitly picking which sources to synthesize from, often prioritizing pages with specific formatting characteristics (direct answers, structured content) over pages that simply rank highest organically.

This matters because it means a new game is being played on top of traditional SEO. You can rank well and still not get cited in the AI Overview. You can rank less well and still get featured. The rules are adjacent to SEO, but not identical.

What we've seen work

Across our client work and testing over the past year, a few patterns keep repeating:

Content that directly answers the query in the first paragraph gets extracted most often. This is the biggest single tactical change. If your article spends 300 words on setup, Google's AI often skips you for a more direct competitor.

Specific numerical data gets pulled heavily. "Our benchmark shows X" performs better than "in our experience, this is fast." AI Overviews love quantifiable claims they can cite.

Recent content outperforms older content, all else equal. Google appears to weight freshness more in AI Overview selection than in traditional rankings. Update your high-value pages.

Schema markup helps, especially FAQ and HowTo schema. We've seen meaningful lift from implementing comprehensive schema on pages that were already ranking.

Content that cites multiple third-party sources gets favored. Pages that function as research syntheses (linking out to other authoritative sources within the content) often outperform pure first-party content. Counterintuitive, but real.

What doesn't work (but feels like it should)

Some things we've tested that produced surprisingly little lift:

Aggressive content expansion. Writing a 6,000-word mega-guide for every topic. Google does not appear to systematically favor length for AI Overview selection. In some cases, concise pages win.

Pure keyword targeting. Writing explicitly for "AI Overview inclusion keywords" (yes, people sell these) doesn't consistently work. Google's selection process weights utility more than keyword match.

E-E-A-T signals in isolation. Author bios and expertise credentials help, but don't move the needle alone. They're part of a broader trust picture — necessary, not sufficient.

Schema without content quality. Putting FAQ schema on thin content doesn't rescue it. Schema amplifies good content; it doesn't create it.

The traffic question

A real tension: AI Overview inclusion is prestigious, but does it actually drive traffic? The honest answer: less than traditional Position 1, but more than expected.

What we've observed: when clients get cited in AI Overviews for commercial queries, traffic from those queries typically drops 15–30% vs. what Position 1 would have gotten pre-AI-Overview. Some users get their answer from the overview and don't click. Some do — especially for complex or commercial queries where they want to verify.

But — and this matters — cited brands in AI Overviews appear to get stronger branded search lift. The implicit endorsement of being the AI-cited source seems to feed into how buyers remember the brand.

So the real calculation isn't "AI Overview citation vs. Position 1 clicks." It's "AI Overview citation plus reduced-but-still-meaningful clicks plus brand lift" versus "Position 1 clicks but no AI citation." The former usually wins for B2B, especially for considered purchases.
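The trade-off above can be put into rough numbers. This is a back-of-envelope sketch, not a benchmark: the query volume, CTRs, drop rates, and brand-lift figure below are all hypothetical assumptions chosen for illustration (the 25% cited-drop is the midpoint of the 15–30% range we observed; the uncited-drop and brand lift are assumed).

```python
# Back-of-envelope sketch of the citation trade-off.
# All numbers are hypothetical assumptions, not measured benchmarks.
monthly_searches = 1000
pre_overview_p1_ctr = 0.28   # assumed Position 1 CTR before AI Overviews
cited_drop = 0.25            # midpoint of the 15-30% drop noted above
uncited_drop = 0.45          # assumed steeper drop when the Overview
                             # answers the query without citing you
branded_lift = 40            # assumed extra branded-search visits/month

baseline = monthly_searches * pre_overview_p1_ctr

# Scenario A: Position 1 organically, but not cited in the AI Overview.
uncited = baseline * (1 - uncited_drop)

# Scenario B: cited in the AI Overview -- fewer direct clicks,
# plus the branded-search lift.
cited = baseline * (1 - cited_drop) + branded_lift

print(f"Uncited Position 1:   {uncited:.0f} visits/mo")
print(f"AI Overview citation: {cited:.0f} visits/mo")
```

Under these assumptions the cited scenario comes out ahead; the point of the sketch is that the comparison only tips toward "Position 1 clicks" if the uncited listing somehow escapes the Overview's click erosion, which in our experience it doesn't.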

Practical optimization workflow

If you're trying to improve AI Overview performance on specific queries, here's the approach that's working for us:

Identify queries where AI Overview appears. Use Google Search Console's AI Overview filter (available in 2025 and improved in 2026) to see which of your pages are being shown in AI Overview results.

For each query, inspect the AI Overview. See what's being cited. See how your page compares.

Rewrite to lead with the direct answer. If your current page buries the answer, move it to the first paragraph. This alone often produces movement.

Add specific data where possible. Benchmarks. Percentages. Version numbers. Concrete specifics that an AI can quote.

Update freshness. Change your page's updated date. Refresh stats. Add recent context.

Add or improve FAQ schema for the key questions around the topic.

Then wait 2–6 weeks. Google's AI Overview selection updates on its own cadence. Don't expect overnight changes.
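The FAQ schema step above is the most mechanical one, so here's a minimal sketch of generating valid schema.org FAQPage JSON-LD. The questions and answers are placeholders — swap in the real questions your page answers. The output goes in a `<script type="application/ld+json">` tag in the page head.

```python
import json

# Placeholder Q&A pairs for illustration -- use your page's real content.
faqs = [
    {
        "question": "What are Google AI Overviews?",
        "answer": "AI-generated answer summaries that appear above "
                  "traditional organic results for many queries.",
    },
    {
        "question": "Does FAQ schema guarantee AI Overview citation?",
        "answer": "No. Schema amplifies content that already answers the "
                  "question directly; it does not rescue thin pages.",
    },
]

def build_faq_schema(faqs):
    """Build a schema.org FAQPage JSON-LD object from Q&A pairs."""
    return {
        "@context": "https://schema.org",
        "@type": "FAQPage",
        "mainEntity": [
            {
                "@type": "Question",
                "name": faq["question"],
                "acceptedAnswer": {"@type": "Answer", "text": faq["answer"]},
            }
            for faq in faqs
        ],
    }

# Emit the JSON-LD body for the page's <head>.
print(json.dumps(build_faq_schema(faqs), indent=2))
```

Validate the output with Google's Rich Results Test before shipping — malformed schema is silently ignored, so you get no lift and no error.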

Where this is headed

AI Overviews aren't going to stay as they are. Google has been shipping rapid changes — expanded query coverage, new content types (video, product comparisons), richer citation formats. By late 2026 the feature will look meaningfully different from today.

But the underlying direction is stable: Google is making AI-synthesized answers more prominent, and traditional organic listings less prominent, for more query types. The brands preparing for that world — optimizing for citation and entity authority, not just ranking — will be dramatically better positioned than brands running a pure 2023 SEO playbook.

Most of our client work in this area pays for itself through the brand lift and still-meaningful referral traffic. Almost nobody regrets starting early. Many companies waiting to see if AI Overviews "stick" are now playing catch-up on the same fundamentals we've been implementing for a year.

If you're on the fence about investing in AI Overview optimization: the fundamentals are the same fundamentals that help ChatGPT and Perplexity citation. Any work you do here compounds across AEO broadly. You're not optimizing for one feature — you're building capability that applies across every AI answer surface, which is where search is going.

If your brand's citation presence in AI engines is weaker than it should be — or you're not sure where you stand — we run a free audit that tests your top buyer queries live across ChatGPT, Perplexity, and Google AI.

Book a Free Audit →