A client asked us last month: "how do I know AEO is actually driving pipeline versus just showing up in reports because it sounds good?"
That's the right question. And the honest answer is that most AI citation attribution right now is guesswork — some of it reasonable, some of it wishful thinking. If you're going to invest in AEO, you need to know the difference.
Here's the framework we've been using to measure it honestly, including what it does measure, what it doesn't, and where you'll have to accept some fuzzy data.
The attribution problem is real
AI-sourced traffic is harder to attribute than almost any other channel. A buyer can see your brand name in ChatGPT, remember it, and Google you three days later. Your analytics will say "organic search." Your CRM will say "unknown source." The AI gets no credit.
This is worse than social, worse than podcasts — at least those channels sometimes leave a UTM or a referral. AI citations often leave nothing. And they often land at the highest-intent moment in the buyer's research, so the invisible attribution hurts the most.
You can't solve this problem completely. You can get pretty close with the right combination of direct and proxy measurement. That's what this framework is.
What you can measure directly
Start with the data that's actually unambiguous:
Referral traffic. Filter Google Analytics or your analytics tool for referrers from chat.openai.com, perplexity.ai, bing.com (for Copilot), and other AI-adjacent domains. This undercounts, since many AI surfaces pass no referrer at all, but what it captures is real data.
Survey responses. Add "how did you find us?" to your signup or demo request form with AI assistants as an option. People will tell you. You'll be surprised how quickly this number grows.
Direct traffic patterns. If direct traffic (people typing your URL) spikes after a citation campaign, that's often AI doing its work — buyers seeing you in AI and typing your domain later. Correlation isn't proof, but it's signal.
Branded search volume. Same idea. If branded searches for your company increase month-over-month while nothing else changed, AI exposure is often the cause.
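The referral-filtering step above can be scripted rather than clicked through. Here's a minimal sketch in Python — the domain list is illustrative, not exhaustive (add new assistants as they appear), and the session records are made up:

```python
from urllib.parse import urlparse

# Illustrative list of AI assistant referrer domains -- not exhaustive.
AI_REFERRER_DOMAINS = {
    "chat.openai.com",
    "chatgpt.com",
    "perplexity.ai",
    "copilot.microsoft.com",
    "gemini.google.com",
}

def is_ai_referral(referrer_url: str) -> bool:
    """True if the session's referrer is a known AI assistant domain."""
    if not referrer_url:
        return False
    host = urlparse(referrer_url).netloc.lower()
    # Match the domain exactly or any subdomain of it.
    return any(host == d or host.endswith("." + d) for d in AI_REFERRER_DOMAINS)

# Hypothetical session export from your analytics tool.
sessions = [
    {"id": 1, "referrer": "https://chat.openai.com/c/abc"},
    {"id": 2, "referrer": "https://www.google.com/"},
    {"id": 3, "referrer": ""},  # direct traffic: no referrer at all
]
ai_sessions = [s for s in sessions if is_ai_referral(s["referrer"])]
```

The empty-referrer case is the important one: a lot of AI-influenced traffic shows up exactly like session 3, which is why this count is a floor, not a total.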
Citation tracking is your leading indicator
Before you can measure revenue impact, you need to measure whether you're actually getting cited more. This is the input metric. If this isn't moving, nothing downstream will move either.
Pick 30–50 queries that represent high-intent buyer questions in your category. Run them weekly against ChatGPT, Perplexity, Google AI Overviews, Bing Copilot, and Claude. Track:
Whether your brand appears at all. Position if cited (primary recommendation vs. mentioned in a list). Accuracy of how you're described. Share-of-voice versus your top 3 competitors.
This is tedious work, but it's real. Automate where you can, but don't skip it. This is the closest thing AEO has to a clean performance metric.
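Once you're logging which brands each answer mentions, the share-of-voice math is simple. A sketch — the queries and brand names here are hypothetical:

```python
from collections import Counter

# One record per tracked query per weekly run: which brands the answer
# mentioned. Query texts and brand names are hypothetical examples.
runs = [
    {"query": "best crm for startups", "mentioned": ["Us", "CompetitorA"]},
    {"query": "crm with email sync", "mentioned": ["CompetitorA", "CompetitorB"]},
    {"query": "affordable crm tools", "mentioned": ["Us"]},
]

def share_of_voice(runs, brands):
    """Fraction of tracked queries on which each brand was mentioned at all."""
    counts = Counter()
    for run in runs:
        for brand in set(run["mentioned"]) & set(brands):
            counts[brand] += 1
    return {brand: counts[brand] / len(runs) for brand in brands}

sov = share_of_voice(runs, ["Us", "CompetitorA", "CompetitorB"])
```

Computing this per week and plotting the trend gives you the leading indicator: the absolute numbers matter less than whether your line is climbing relative to the competitors'.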
Connecting citations to pipeline
The hard part. Here's what we've found actually works:
Tag AI-suspected leads in the CRM. When a lead comes in via "unknown" source but branded search volume just jumped, or referral traffic from AI platforms spiked the same week, flag those leads for tracking. Over 90 days, you'll have a cohort.
Look at the conversion behavior. Leads influenced by AI research typically convert faster and ask more specific product questions on demo calls. Sales will notice before you do — ask them "are you hearing buyers mention ChatGPT or Perplexity more?" That qualitative data is almost as valuable as the quantitative.
Match timing. If AEO campaigns started in January and your unattributed-but-converting pipeline starts climbing in March, that's the fingerprint of AI influence.
It's not airtight. But combined with citation tracking moving in the right direction, it's enough signal to make budget decisions.
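The lead-tagging step can run as a small script against a CRM export. A sketch — the dates, sources, and spike weeks below are all hypothetical:

```python
from datetime import date, timedelta

# Weeks where branded search volume or AI referral traffic spiked
# (Monday dates; these are hypothetical).
spike_weeks = {date(2025, 3, 3), date(2025, 3, 10)}

def week_start(d: date) -> date:
    """Monday of the week containing d."""
    return d - timedelta(days=d.weekday())

def flag_ai_suspected(lead: dict) -> dict:
    """Tag 'unknown'-source leads that arrived during a spike week."""
    if lead["source"] == "unknown" and week_start(lead["created"]) in spike_weeks:
        lead["tags"] = lead.get("tags", []) + ["ai-suspected"]
    return lead

# Hypothetical CRM export.
leads = [
    {"id": 1, "source": "unknown", "created": date(2025, 3, 5)},
    {"id": 2, "source": "paid", "created": date(2025, 3, 5)},
    {"id": 3, "source": "unknown", "created": date(2025, 2, 5)},
]
cohort = [flag_ai_suspected(lead) for lead in leads]
```

Run it weekly and the tagged leads accumulate into the 90-day cohort whose conversion behavior you compare against everything else.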
What NOT to measure
A few vanity metrics that show up in AEO reporting and shouldn't drive decisions:
"Number of AI platforms we're cited on." Appearing once on Perplexity doesn't mean you're winning AEO. Consistency and share-of-voice matter more than breadth.
"Total AI-sourced impressions." Nobody knows what this number actually is. Any tool claiming to give it to you is guessing.
"AI citation volume." Raw count doesn't mean much without context. Ten citations for a long-tail query nobody asks is worth less than one citation for your category's top buyer query.
If a metric sounds impressive and you can't connect it to a business decision, it's a vanity metric.
When to keep spending, when to stop
The decision framework: if citation frequency and share-of-voice are climbing after 90 days, keep going. If they're flat, something's wrong with the plan — either execution is off or the category isn't ready for AEO yet.
If after 6 months you have rising citations but no pipeline impact showing up anywhere (direct, branded search, sales-reported mentions), you may have an attribution infrastructure problem, or your ICP isn't using AI for research as much as you thought. Both are fixable, but worth diagnosing honestly.
If after 12 months you have citations AND pipeline lift, you're in the compounding zone. This is where AEO economics start looking very good compared to paid.
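The thresholds above reduce to a simple decision rule. Encoded in Python purely to make the framework explicit — the return strings are shorthand, not prescriptions:

```python
def aeo_decision(months: int, citations_rising: bool, pipeline_lift: bool) -> str:
    """Rough encoding of the go/no-go thresholds described above."""
    if months >= 12 and citations_rising and pipeline_lift:
        return "compounding: keep investing"
    if months >= 6 and citations_rising and not pipeline_lift:
        return "diagnose attribution or ICP fit"
    if months >= 3 and not citations_rising:
        return "fix execution or reconsider category"
    return "keep going, too early to judge"
```

The point of writing it down this way is that every branch names a decision, which is exactly the test a metric has to pass to avoid being a vanity metric.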
AEO measurement will get cleaner as tooling matures. Right now it's a mix of hard data, reasonable proxies, and honest qualitative signal from sales. Use all three.
The companies winning at AEO right now aren't the ones with the most sophisticated attribution models. They're the ones taking their best honest read of the data and making decisions from it. Perfect attribution isn't on the menu. Directional honesty is.