Webinar

How to Make Sure AI Includes Your Brand in B2B Buying Decisions

When ChatGPT crossed 900 million weekly users at the end of 2025 — more than doubling in a single year — it confirmed what many B2B marketers had been quietly suspecting: AI answer engines are no longer an experiment. They are the new front door to buyer discovery.

This webinar, jointly hosted by Agentcy and ScaleWise, brought together practitioners who are navigating this shift daily to answer the question that now keeps revenue leaders awake: if a buyer asked an LLM to shortlist vendors in your category this morning, would your brand appear — and would you like how it was described?

The Rise of AI Answer Engines

◷ 08:04

The session opened with a live audience poll that immediately crystallised the state of play. Nearly half of attendees were not yet measuring AI visibility at all. A quarter were evaluating or piloting tools. Just 15% had a dedicated visibility platform in place. On pipeline influence, over half estimated that less than 5% of their pipeline currently comes from AI — but when asked where that number would be in three years, 42% predicted AI would influence more than half of all pipeline. The gap between current adoption and expected impact could hardly be wider.

This tracks with broader market data. Practical guidance and information-seeking now account for nearly half of all ChatGPT interactions. Forrester reports that 94% of B2B buyers are already using AI somewhere in their purchasing process. The old buying journey — search, click, compare, read, shortlist, talk to sales — is being compressed into single AI interactions where the model produces the narrative, the criteria, and often the shortlist in one go.

That compression is what makes this different from the rise of social media or even the early days of Google. Those channels added new touchpoints to the buying journey. AI answer engines are collapsing the journey itself. And crucially, they are doing it behind a conversational interface so intuitive that adoption barriers are essentially zero — if an 80-year-old can use ChatGPT daily, the financial sector certainly will.

The consensus among the panel was unambiguous: this is a paradigm shift, not an incremental channel addition. The signals all point towards a heavy transformation cycle, and the 18-to-24-month window before volume meaningfully crosses from traditional search to LLMs is already narrowing.

Why Marketing Teams Were Caught Off Guard

◷ 16:52

The early narrative around LLMs was dominated by hallucination — AI's tendency to fabricate facts and present them with unearned confidence. For many marketers, this was enough to dismiss the technology as unsuitable for serious research or purchasing decisions. If the models could not be trusted to get basic facts right, how could they possibly influence B2B buying?

That assessment underestimated the pace of improvement. Even Google did not take ChatGPT seriously at first. But the gap between "interesting curiosity" and "genuinely capable research tool" closed far faster than anyone predicted. Hallucination has been dramatically reduced, and the models now ground their answers in retrieved, real-world sources, which means the brands and content they reference matter enormously.

The other accelerant was LLMs moving behind corporate firewalls. Once organisations could deploy AI tools on proprietary data without confidentiality concerns, the use cases exploded. Go-to-market teams began using LLMs not just for content creation but for market research, competitive analysis, and vendor evaluation — exactly the activities that feed into buying decisions. The practical experience at FE fundinfo illustrates the point: when their team first tested prompts about their own business in ChatGPT and Claude, they found responses where nobody was controlling the narrative. That discovery triggered an early investment in getting ahead of the trend.

The Shift from Clicks to Answers

◷ 20:25

For twenty years, B2B marketing has run on the click economy. Rank in Google, earn the click, bring buyers to your site, measure impact through traffic and conversions. Attribution models, marketing automation platforms, and entire career specialisms were built on this feedback loop.

AI answer engines break the loop. Buyers can now reach the point of shortlisting vendors without generating a single click. Research published by Datos (part of Semrush) found that Google searches per user fell 20% year over year — not because people stopped using Google, but because they stopped needing to click through multiple results. They are finding what they need in AI-powered summaries.

This has a direct parallel in PR, where a media mention does not generate a click but absolutely influences sales. That similarity is why PR professionals may be best positioned to own GEO (generative engine optimisation) and AI answer engine optimisation. The challenges are structurally identical: measuring influence that does not manifest as a direct conversion event. Instead of share of voice, the new currency is share of model — what proportion of relevant LLM queries mention your brand, and how does that vary across different AI platforms?

The critical nuance is that AI visibility does not behave like SEO visibility. Three differences stand out:

  • No keyword planner equivalent. The prompts buyers use are conversational, varied, and proprietary to each platform. You cannot see them at scale.
  • Non-deterministic answers. The same question asked twice can produce different results. Even small changes in wording can generate completely different vendor recommendations.
  • Inclusion is a spectrum. Unlike Google's binary page-one-or-not dynamic, in an LLM response your brand might be prominently featured, mentioned in passing, framed narrowly, described inaccurately, or omitted entirely.

The practical implication is sobering: AEO (answer engine optimisation) represents a separate channel with its own measurement requirements. Traditional channels — email, events, organic and inbound SEO — still drive most volume today. But without LLM visibility tools, any dip in inbound leads becomes a blind spot. You might be losing consideration in AI-driven discovery and have no way of knowing it. Your analytics will show nothing at all.

Building an AEO-First Strategy

◷ 28:47

The experience at FE fundinfo provides a compelling proof point for early movers. After implementing a visibility platform and completing a corporate website refresh, their team saw a 30% month-on-month increase in visibility score — achieved not through months of gradual SEO gains but within weeks. The speed of impact is one of AEO's most striking characteristics: where SEO typically takes months to show results, well-executed AEO work can produce measurable changes in as little as two to three weeks.

The approach combined several practical elements: adding FAQ content blocks to approximately 50 pages, removing outdated and duplicate content, implementing interlinking, ensuring concise content with genuine depth, and optimising meta information. One particularly effective tactic was simply asking the LLMs themselves how to optimise for AI visibility — a meta approach that proved surprisingly productive. The models will tell you exactly what they look for in content structure and presentation.

On the measurement side, the key differentiator is running prompts that reflect real buyer behaviour rather than guessing at keywords. This starts with defining the ICP — the ideal customer profile — and building prompts around their genuine challenges, needs, and the contexts in which they would turn to an LLM. The prompts should then be run across multiple models, because visibility varies dramatically between platforms. A brand might be well-represented in ChatGPT but entirely absent from Claude or Perplexity. Without multi-model testing, you are only seeing a fraction of the picture.
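
To make the mechanics concrete, here is a minimal Python sketch of that multi-model, multi-run loop. Everything in it is an illustrative assumption rather than the panel's tooling: the query_model stub stands in for whichever provider SDKs you use, and the prompts, model labels, and brand name are placeholders.

```python
import re
from collections import defaultdict

# Hypothetical stub: in practice this would call each provider's own SDK
# (e.g. the openai or anthropic Python clients) and return the answer text.
# The canned response here just lets the sketch run end to end.
def query_model(model: str, prompt: str) -> str:
    return f"[{model}] Example answer mentioning Acme Analytics among other vendors."

# Illustrative ICP prompts and brand name; replace with your own.
ICP_PROMPTS = [
    "Which vendors should a mid-market fintech shortlist for fund data?",
    "Best platforms for regulatory fund reporting in Europe?",
]
MODELS = ["chatgpt", "gemini", "claude", "perplexity"]  # labels, not API model ids
BRAND = "Acme Analytics"
RUNS_PER_PROMPT = 3  # answers are non-deterministic, so sample each prompt repeatedly

mentions = defaultdict(lambda: [0, 0])  # model -> [runs with a mention, total runs]
for model in MODELS:
    for prompt in ICP_PROMPTS:
        for _ in range(RUNS_PER_PROMPT):
            answer = query_model(model, prompt)
            mentions[model][1] += 1
            if re.search(re.escape(BRAND), answer, re.IGNORECASE):
                mentions[model][0] += 1

# Per-model mention rates expose exactly the cross-platform variation
# the panel warned about.
for model, (hits, total) in mentions.items():
    print(f"{model}: mentioned in {hits}/{total} runs")
```

Sampling each prompt several times per model is the important design choice here: a single run can over- or under-state visibility simply because the answer changed.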

Want to see how your brand shows up in AI?

Agentcy measures your visibility across ChatGPT, Gemini, Claude, Perplexity and more — so you can see exactly where you stand and what to fix.

Apply for a Free Trial

The Five Visibility Indicators

◷ 35:20

The tactical heart of the webinar laid out a measurement framework built around five core indicators that together provide a comprehensive picture of AI visibility (a scoring sketch in code follows the list):

  1. Mentions — How often your brand is named in AI-generated responses. The most basic signal: do you show up at all?
  2. Citations — Whether AI cites your content as a source. This includes both on-site citations (links to your own pages) and off-site citations (references to third-party content that mentions you). Citations signal authority and source trust.
  3. Share of Mentions — Your competitive share within AI responses. How often do you appear relative to competitors in the same answer set?
  4. Share of Model — How your visibility varies across different LLMs. Each model has different training data and retrieval behaviour, so your performance will differ between ChatGPT, Gemini, Claude, and Perplexity.
  5. Context Quality — The qualitative dimension: sentiment, placement within the response, depth of description, and accuracy of framing. A negative or inaccurate mention can be worse than no mention at all.
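
As a worked illustration of the quantitative indicators, the sketch below scores a batch of already-collected answers for mentions, share of mentions, and share of model. The response texts and competitor names are invented; citations would need URL extraction, and context quality still calls for human or model-assisted review.

```python
from collections import Counter

# Invented sample data: answers already collected per model.
responses = {
    "chatgpt":    ["Acme and Globex lead the category...", "Consider Globex or Initech."],
    "perplexity": ["Acme is a strong option [1].", "Globex and Initech are popular."],
}
BRANDS = ["Acme", "Globex", "Initech"]  # your brand plus tracked competitors
YOUR_BRAND = "Acme"

mention_counts = Counter()                      # indicator 1: raw mentions per brand
per_model = {m: Counter() for m in responses}   # indicator 4: share of model

for model, answers in responses.items():
    for answer in answers:
        for brand in BRANDS:
            if brand.lower() in answer.lower():
                mention_counts[brand] += 1
                per_model[model][brand] += 1

# Indicator 3: your share of all brand mentions across the answer set.
total = sum(mention_counts.values())
share_of_mentions = mention_counts[YOUR_BRAND] / total

print(f"Mentions: {mention_counts[YOUR_BRAND]}")
print(f"Share of mentions: {share_of_mentions:.0%}")
for model, counts in per_model.items():
    model_total = sum(counts.values()) or 1
    print(f"Share of model ({model}): {counts[YOUR_BRAND] / model_total:.0%}")
```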

To improve across these indicators, the framework identifies four pillars of action:

Technical Foundations — Make your content machine-readable. Clean page structure, correct metadata, JSON-LD structured data, and accessible architecture help AI systems find, interpret, and reference your content. This is typically the lowest-hanging fruit: often a one-off effort with outsized and immediate impact (see the JSON-LD sketch after these four pillars).

Ground Your Brand — Ensure AI can recognise who you are and what you represent. Consistent brand descriptors, clear value propositions, and trust markers across digital touchpoints teach models to correctly associate your name with your areas of expertise.

Answer the Questions — Create content that directly addresses what buyers are asking. Here, a critical difference from SEO emerged: the long-form "10x pillar page" that worked brilliantly for Google actually works against you with LLMs. Answer engines are optimised to minimise token usage, so they prefer concise, authoritative content that gets to the point quickly. Short, direct answers outperform comprehensive but rambling guides.

Build External Citations — Earn references from trusted third-party sources. AI models give more weight to brands corroborated by reputable outlets, and this is where PR becomes a fundamental component of any GEO strategy.
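
To ground the Technical Foundations pillar, here is a minimal sketch of schema.org FAQPage markup serialised as JSON-LD from Python. The question and answer text are placeholders rather than anything from the webinar; the structure follows the published schema.org vocabulary.

```python
import json

# Placeholder FAQ content; the @type values follow the schema.org vocabulary.
faq_jsonld = {
    "@context": "https://schema.org",
    "@type": "FAQPage",
    "mainEntity": [
        {
            "@type": "Question",
            "name": "What does the platform do?",
            "acceptedAnswer": {
                "@type": "Answer",
                "text": "It measures how often AI answer engines mention your brand.",
            },
        }
    ],
}

# Embed the block in the page head so crawlers and answer engines can parse it.
script_tag = (
    '<script type="application/ld+json">\n'
    + json.dumps(faq_jsonld, indent=2)
    + "\n</script>"
)
print(script_tag)
```

Pairing markup like this with the FAQ content blocks mentioned earlier gives answer engines an unambiguous, machine-readable version of each page's questions and answers.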

Why Niche Publications Beat Tier-One Media

◷ 43:40

One of the session's most counterintuitive insights challenged conventional PR wisdom. When it comes to influencing LLM outputs, niche industry publications consistently outperform big-name outlets like the BBC or the Financial Times. The reason comes back to how AI models combat hallucination: they look for backup from sources that demonstrate deep domain expertise. Industry publications know their subject in granular detail, which is exactly what models need to ground factual answers. A thorough analysis in a specialist trade publication will be cited far more often than a passing mention in a national newspaper.

This has significant strategic implications. Rather than chasing vanity placements in mass-market outlets, brands focused on AI visibility should invest in consistent, high-quality coverage in the publications their buyers — and the AI models those buyers use — actually trust as domain authorities. The same principle extends to analyst reports, Gartner Peer Insights, and community platforms like Reddit, though the specific mix varies considerably between industries and even between sub-sectors.

For teams building an AEO capability from scratch, the practical advice was to start by hiring for adjacent skills. Direct AEO experience is still too rare to recruit for, but content strategists with strong SEO backgrounds can transition effectively when paired with a visibility measurement platform. The key is having a clear owner for the initiative and establishing baselines from day one, even while processes are still being refined.

The 30-60-90 Day Action Plan

◷ 47:04

For teams ready to move from understanding to action, the session closed with a practical roadmap:

Days 1–30: Establish your baseline. Define a minimum of 150 ICP-relevant prompts that mirror real buyer questions — category definitions, vendor comparisons, "best solution for X" queries, implementation concerns. Run them across ChatGPT, Gemini, Claude, and Perplexity. Document where your brand currently stands across all five visibility indicators. Set up weekly tracking to balance meaningful trend data against the cost of running prompts at scale.

Days 30–60: Technical fixes and content optimisation. Implement the low-hanging fruit first: structured data, clean site architecture, FAQ content on key pages, concise and authoritative copy. Break up long-form pillar pages into focused, answer-ready content. Remove duplicate and outdated pages. The evidence shows these changes can produce visible results within two to three weeks — far faster than traditional SEO timelines.

Days 60–90: PR and citation-building. Audit which third-party sources are being cited by LLMs in your category. Build a targeted PR strategy focused on the publications, analyst firms, and community platforms that models actually reference. Create original research and data assets that are inherently citable. This is the longest-term investment but also the most defensible — earned authority compounds over time.
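
A small sketch of that citation audit, assuming you have already collected answer texts that include URLs (as Perplexity-style responses typically do). The sample answers and domains are invented.

```python
import re
from collections import Counter
from urllib.parse import urlparse

# Invented sample answers containing cited URLs.
answers = [
    "Acme is well regarded (https://www.fundstrade.example/reviews/acme).",
    "See https://analystfirm.example/reports/2025 and https://www.fundstrade.example/roundup.",
]

url_pattern = re.compile(r"https?://\S+")

domains = Counter()
for answer in answers:
    for url in url_pattern.findall(answer):
        url = url.rstrip(").,")  # trim trailing punctuation picked up from prose
        domains[urlparse(url).netloc] += 1

# The most-cited domains are where PR and content effort will compound fastest.
for domain, count in domains.most_common():
    print(f"{domain}: cited {count} times")
```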

On the question of which LLMs to prioritise, the practical answer is all of them. ChatGPT and Gemini have the most traffic today, but Claude, Perplexity, and emerging models are growing fast. Visibility platforms are built to run prompts across multiple models simultaneously, so there is no reason to pick favourites — and doing so would leave blind spots.

One of the most encouraging aspects of the discussion was the consensus that AEO does not require teams to abandon their existing strategies. It extends what already works — good content, strong technical foundations, credible reputation — into a measurable AI visibility discipline. The foundations of SEO, content marketing, and PR remain relevant; they just need to be adapted for an ecosystem where machines, not just humans, are reading and recommending your content.

Key Takeaways

  • AI is reshaping B2B buying now, not in the future. With 900 million weekly ChatGPT users and LLMs deployed behind corporate firewalls, buyers are already using AI to research, shortlist, and compare vendors — often without clicking a single link.
  • GEO is not SEO with a new name. There is no keyword planner. Answers are non-deterministic. Visibility operates on a spectrum. It requires its own strategy, metrics, and measurement cadence.
  • Measure across multiple models with ICP-driven prompts. Start with at least 150 prompts that mirror real buyer questions. Track five indicators: mentions, citations, share of mentions, share of model, and context quality. Run weekly.
  • Niche publications often matter more than tier-one media. LLMs prioritise domain-specific sources that demonstrate deep expertise. Invest PR effort in the publications that buyers and their AI tools trust as authorities.
  • AEO results come faster than SEO. Well-executed optimisations can produce measurable impact in weeks, not months. Concise, answer-ready content and clean technical foundations are the fastest path to improved visibility.
  • Follow a 30-60-90 day plan: baseline measurement first, then technical fixes and content optimisation, then PR and citation-building. The investment compounds over time, and early movers have a significant advantage while the playbook is still being written.