A Marketing Leader's Guide to Measuring and Improving AI Visibility
Tom Fry

Your brand's next customer might never visit your website. They might never see your Google ad or read your blog post. Instead, they'll ask ChatGPT, Perplexity, or Claude: "What's the best solution for [your category]?" – and make their decision based on what the AI tells them.
This is the new reality of B2B buying. AI answer engines are becoming the first touchpoint in the buyer journey, and if your brand isn't visible in those responses, you're invisible at the moment that matters most.
The challenge? Traditional marketing metrics don't capture this. Your Google Analytics can't tell you whether ChatGPT mentioned you. Your SEO tools can't show you if Perplexity is citing your competitors instead of you.
That's why measuring and improving your AI visibility requires a fundamentally different approach – one that treats AI answer engines as a distinct channel with its own metrics, analysis, and optimisation strategies.
Here's how to do it.
1. Set a Baseline Metric So You Know Your Starting Point
Simply running the same handful of prompts again and again won't work: every time you run a prompt, you'll get a different answer. The key insight with AI answer engines is to aggregate results across many relevant prompts. This builds a baseline metric that you can compare against over time.
We build an AI Visibility Score using several key metrics:
The Agentcy Visibility Score Breakdown:
- 40% Mention Rate – % of prompts where your brand is mentioned
- 25% Position Score – Average position (CTR-weighted)
- 25% Citation Score – % of responses citing your website
- 10% Share of Voice – Your mentions vs all brand mentions
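To make the weighting concrete, here is a minimal sketch of how those four metrics combine into a single 0–100 score. The class and function names are illustrative (not from any particular tool), and each input is assumed to be pre-normalised to the 0–1 range:

```python
from dataclasses import dataclass

@dataclass
class VisibilityMetrics:
    mention_rate: float    # fraction of prompts mentioning your brand (0-1)
    position_score: float  # CTR-weighted average position, normalised to 0-1
    citation_rate: float   # fraction of responses citing your website (0-1)
    share_of_voice: float  # your mentions / all brand mentions (0-1)

def visibility_score(m: VisibilityMetrics) -> float:
    """Combine the four metrics using the weights in the breakdown above."""
    return 100 * (
        0.40 * m.mention_rate
        + 0.25 * m.position_score
        + 0.25 * m.citation_rate
        + 0.10 * m.share_of_voice
    )

# Example: mentioned in 60% of prompts, mid-pack position,
# cited in 30% of responses, 20% share of voice.
score = visibility_score(VisibilityMetrics(0.60, 0.50, 0.30, 0.20))
```

The exact weights are less important than keeping them fixed between runs, so that a change in the score reflects a change in visibility rather than a change in the formula.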
The prompts need to be relevant and run in the context of your prospective buyer – mapped to their journey stage (awareness, consideration, decision), their role, and their industry.
2. Run the Prompts on Your Target Models – With Web Search Enabled
Once you have the prompts and the context, you need to systematically run them and analyse the results. Within the results, you're looking for these key metrics:
- Are you mentioned? If so, what position is your mention? Are your competitor brands mentioned?
- Is your website cited? Is a competitor website cited? What non-brand websites are influential in the answers?
I'd respect anyone attempting this manually, but this is clearly the kind of job that's built to be automated.
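As a sketch of what that automation looks like, the snippet below scores a single model answer against the checklist above. The `ask_model` stub is a placeholder for whichever provider SDK you use (each has its own way of enabling web search), and the string matching is deliberately naive – a real system would handle brand aliases and fuzzy matches:

```python
def ask_model(model: str, prompt: str) -> str:
    """Placeholder: call the provider's API with web search enabled."""
    raise NotImplementedError("wire up the provider SDK of your choice")

def score_response(text: str, brand: str, competitors: list[str]) -> dict:
    """Extract the key metrics from one answer: is the brand mentioned,
    in what position among all mentioned brands, and which competitors
    appear alongside it."""
    lower = text.lower()
    mentioned = brand.lower() in lower
    # Order every mentioned brand by first occurrence in the answer,
    # then take the 1-based position of our brand in that ordering.
    brands = [b for b in [brand] + competitors if b.lower() in lower]
    brands.sort(key=lambda b: lower.index(b.lower()))
    position = brands.index(brand) + 1 if mentioned else None
    return {
        "mentioned": mentioned,
        "position": position,
        "competitors_mentioned": [c for c in competitors if c.lower() in lower],
    }
```

Run this over every prompt-model pair in your set and you have the raw data the aggregate score is built from.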
Something else to consider is location and language – not every set of prompts should be run with a global/US location, especially if your buyers sit in a specific geographic market.
3. Analyse the Results
Once your analysis completes, you'll have a wealth of data to explore. Although the individual answers are interesting, and sometimes worth a deep dive to see how different models refer to your brand, the key areas to focus on are:
Performance by Buyer Journey Stage
Break down your visibility across awareness, consideration, and decision-stage queries. You might find you're strong in early-stage educational queries but invisible when buyers are actively comparing solutions – or vice versa.
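If each scored prompt is tagged with its journey stage, the breakdown is a simple aggregation. A minimal sketch, assuming each result is a dict with `stage` and `mentioned` keys as in the earlier scoring step:

```python
from collections import defaultdict

def mention_rate_by_stage(results: list[dict]) -> dict[str, float]:
    """Given per-prompt results like {'stage': 'awareness', 'mentioned': True},
    return the brand mention rate for each journey stage."""
    totals = defaultdict(int)
    hits = defaultdict(int)
    for r in results:
        totals[r["stage"]] += 1
        hits[r["stage"]] += r["mentioned"]  # True counts as 1
    return {stage: hits[stage] / totals[stage] for stage in totals}
```

A gap between stages – say 70% mention rate on awareness prompts but 10% on decision prompts – tells you exactly where to aim your content work.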
Citation Analysis
Understand which publications and websites are being cited alongside (or instead of) your brand. This reveals:
- Which trade publications, analysts, and industry sources have the most influence on AI responses
- Which of your website pages are being cited – and which are being ignored
- What content gaps exist where competitors are getting cited but you're not
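A quick way to surface the influential sources is to count cited domains across all responses. This sketch assumes each response record carries a `citations` list of URLs (most answer-engine APIs expose citations in some form, though the exact field names vary by provider):

```python
from collections import Counter
from urllib.parse import urlparse

def citation_domains(responses: list[dict]) -> Counter:
    """Count how many responses cite each domain.
    responses: [{'citations': ['https://example.com/page', ...]}, ...]"""
    counts = Counter()
    for r in responses:
        # Count each domain once per response, so a single heavily-linked
        # answer doesn't dominate the ranking.
        domains = {urlparse(url).netloc for url in r.get("citations", [])}
        counts.update(domains)
    return counts
```

Sorting the result gives you a ranked list of the publications and pages shaping the answers in your category – the natural target list for PR and content outreach.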
Competitive Intelligence
Track how often competitors appear in the same responses as you, their average position versus yours, and whether you're winning or losing the share of voice battle.
Model-by-Model Breakdown
Different AI models behave differently. You might rank well in ChatGPT but be invisible in Perplexity. Understanding these differences helps you prioritise where to focus your optimisation efforts.
4. Iterate: Make Changes and Rerun the Analysis
The analysis shouldn't just tell you where you stand – it should tell you what to do next. There's a real opportunity in these early days of GEO/AEO to have a major impact through careful content strategy and technical website edits.
The core insights come from thinking about the AI answer engines themselves:
- They already have a huge amount of embedded knowledge, so they don't need to cite what they already know. What they lack is up-to-date evidence to back that knowledge up – this is where citations shine.
- Tokens cost money, so all the model providers use strategies to reduce how many tokens it takes to generate a response. They therefore rely on cheap signals to decide whether your web page is worth retrieving and citing – from the final segment of the URL path (the "slug") to the metadata in your HTML.
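These signals are auditable from your own pages. The sketch below uses only the standard library to pull the `<title>` and meta description from raw HTML, and applies a simple heuristic for whether a slug reads as descriptive words rather than an opaque ID. The heuristic is my assumption about what "descriptive" means, not a documented ranking rule:

```python
import re
from html.parser import HTMLParser

class MetaChecker(HTMLParser):
    """Collect <title> and <meta name="description"> from raw HTML."""
    def __init__(self):
        super().__init__()
        self.title = None
        self.description = None
        self._in_title = False

    def handle_starttag(self, tag, attrs):
        attrs = dict(attrs)
        if tag == "title":
            self._in_title = True
        elif tag == "meta" and attrs.get("name") == "description":
            self.description = attrs.get("content")

    def handle_endtag(self, tag):
        if tag == "title":
            self._in_title = False

    def handle_data(self, data):
        if self._in_title:
            self.title = (self.title or "") + data

def slug_is_descriptive(url_path: str) -> bool:
    """Heuristic: the final path segment is two or more hyphenated
    lowercase words, not an opaque ID like '8f3a2c'."""
    slug = url_path.rstrip("/").rsplit("/", 1)[-1]
    return bool(re.fullmatch(r"[a-z0-9]+(-[a-z0-9]+)+", slug))
```

Running checks like these across your cited and un-cited pages is a fast way to spot which technical fixes to prioritise.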
Building an Iteration Cycle
The key to improving your AI visibility is treating it like any other performance marketing channel:
- Baseline – Establish your starting metrics
- Analyse – Understand the gaps and opportunities
- Act – Implement the recommended changes (content updates, website optimisation, PR outreach)
- Measure – Rerun the analysis to track improvement
- Repeat – Continue the cycle as AI models evolve
Action these insights consistently, and your visibility score will improve.
Conclusion
AI visibility isn't a nice-to-have anymore – it's becoming as fundamental as SEO was a decade ago. The brands that start measuring and optimising now will build a compounding advantage as AI answer engines become an increasingly dominant part of the buyer journey.
The good news? Unlike the early days of SEO, you don't have to figure this out through trial and error. The methodology is clear: establish your baseline, run systematic analysis, identify the gaps, and iterate.
The brands winning in AI aren't necessarily the biggest or the best-known. They're the ones creating citable content, building presence in the publications that AI models trust, and treating AI visibility as a measurable, improvable channel.