How to Check if ChatGPT, Claude and Perplexity Mention Your Brand

Your competitors are showing up in ChatGPT answers. You might not be.

Here is the uncomfortable reality. When a potential customer asks ChatGPT “who are the best [your category] in the UK?”, one of two things happens. Either your brand gets mentioned by name, or a competitor’s does. There is no third option. The AI does not say “and several other great companies.” It picks three to five names, and those are the ones that get the enquiry.

Most business owners have never checked which side of that line they fall on.

This article shows you exactly how to check, in under ten minutes, without any paid tools. If you find out you are invisible to AI search, you will at least know what to fix. If you find out you are already being cited, you will know which platforms are working for you and which ones need attention.

Why this matters now, not in six months

AI Overviews are no longer a fringe experiment.

BrightEdge’s 12-month analysis found that AI Overviews now trigger on 48% of tracked Google queries, up from 31% a year earlier. A separate UK-focused study of 1,000 keywords found AI Overviews appearing on roughly 42% of UK searches, concentrated in informational and advice-based queries. In specific sectors the jump is brutal: B2B tech queries went from 36% to 82% AI Overview exposure in 12 months.

When an AI Overview appears, clicks collapse.

Ahrefs’ December 2025 analysis found organic click-through rates for position-one content drop by 58% when an AI Overview is present. Seer Interactive’s earlier study put the fall steeper still, at 61%. That traffic does not disappear. It goes to the sources AI platforms decide to cite inside the overview.

Ranking on Google does not mean being cited by AI.

This is the finding most SEOs get wrong. Only 12% of URLs cited by ChatGPT, Perplexity and Copilot rank in Google’s top 10 for the original query. 80% of LLM citations do not even rank in Google’s top 100. ChatGPT Search specifically cites lower-ranking pages (position 21+) about 90% of the time. In other words, a site ranking position 3 on Google can be entirely absent from ChatGPT, Perplexity and Claude for the exact same query. I see this every week in client audits.

UK buyers have already shifted.

Over 70% of UK procurement teams now report using AI tools during vendor evaluation, and an estimated 30% of UK consumers have used an AI chatbot to research a product or service before purchasing, up from around 12% in early 2025.

The question is not whether this matters. The question is how far behind you already are.

Get the checklist:

I have turned this into a printable PDF with the exact prompts and a scoring grid. 
 

What you need before you start

Ten minutes, a clean browser window (or a private tab), and access to the following five platforms:
  • ChatGPT (free tier is fine, but if you have Plus, use the “Search the web” option)
  • Claude (claude.ai, free or Pro)
  • Perplexity (perplexity.ai, free tier)
  • Google Gemini (gemini.google.com, free)
  • Google AI Overviews (just run the search on google.co.uk)
Important: use a private browsing window or a fresh account. If you run these prompts logged into your usual account with personalised history, the results will be skewed by your own browsing patterns. We want the view a stranger gets.
 
Have a blank document or spreadsheet open to log the answers. You will forget what you found by platform three if you do not write it down.
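If you would rather generate the logging grid than draw it by hand, a few lines of Python will do it. This is an illustrative sketch, not part of the checklist: the filename and prompt labels below are placeholders, so rename them however you like.

```python
import csv

# Placeholder labels; adapt to your own brand, category and competitors.
platforms = ["ChatGPT", "Claude", "Perplexity", "Gemini", "Google AI Overviews"]
prompts = [
    "Direct brand query",
    "Category query",
    "Problem query",
    "Competitor comparison",
    "Recommendation query",
]

# One row per prompt/platform pair (25 in total), with an empty Score column
# to fill in as you run the audit: Cited, Mentioned, Described but wrong, Absent.
with open("ai_visibility_audit.csv", "w", newline="") as f:
    writer = csv.writer(f)
    writer.writerow(["Prompt type", "Platform", "Score", "Notes"])
    for prompt in prompts:
        for platform in platforms:
            writer.writerow([prompt, platform, "", ""])
```

Open the resulting CSV in any spreadsheet tool and fill in the Score column as you go.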

The five prompts that tell you everything

These are the five prompt types that map to how real buyers search now. Run each one on all five platforms. That is 25 queries total, which sounds like a lot but runs in about ten minutes because you are copying and pasting.
 
Replace the square brackets with your specifics before you run them.

Prompt 1: the direct brand query

“Tell me about [your brand name].”

What you are checking:

does the AI know who you are at all? If the answer is generic, wrong, or “I could not find information on this company,” you have a foundational visibility problem. AI platforms do not know you exist.

What a good result looks like:

the AI describes your business accurately, mentions what you do, names your founder or key people if relevant, and cites at least one source link back to your site.

Prompt 2: the category query

“Who are the top [your category] in [your location]?”
 
Example: “Who are the top commercial property lawyers in Cambridge?” or “What are the best bookkeeping software tools for UK freelancers?”

What you are checking:

are you recommended when someone searches for your category without already knowing your name? This is the most commercially valuable query type. These are new customers actively looking for a provider.

What a good result looks like:

your brand appears in the top three to five recommendations, with a description that aligns with your actual positioning.

Prompt 3: the problem query

 “I am a [your customer type] struggling with [the problem you solve]. What should I do?”
 
Example: “I run a £5m professional services firm and our marketing agency isn’t showing us any AI search strategy. What should I do?” Or, for a product business: “I am a SaaS founder losing organic traffic to AI overviews. What should I do?”

What you are checking:

do AI platforms surface your brand as a solution to the specific pain points you solve? This is how a lot of buyers actually search now. They describe the problem in their own words, not the solution category.

What a good result looks like:

the AI’s recommended next steps or tools include your brand, your category, or at least a pathway that could lead to you.

Prompt 4: the competitor comparison

“How does [your brand] compare to [main competitor]?”

What you are checking:

when a prospect is in active evaluation mode, which version of your story does the AI tell? If the comparison is inaccurate, outdated, or missing key differentiators, you have a narrative problem as well as a visibility problem.

What a good result looks like:

a factually accurate comparison that reflects your current positioning, not an old one from three years ago.

Prompt 5: the recommendation query

“What is the best [your category] for [specific use case]?”
 
Example: “What is the best accounting firm for UK ecommerce businesses?”

What you are checking:

whether you are recommended for a narrow use case. AI platforms love specificity: they give more concrete, confident recommendations when the query narrows down to a use case. This is where niche players often appear even if they do not rank on Google.

What a good result looks like:

your brand is named, and the AI gives a one-line justification that sounds like something you would actually say about yourself.
 

How to score what you find

For each of the 25 combinations of prompt and platform, mark one of four outcomes:

  • Cited: your brand is named and linked. You are visible on this platform for this query.
  • Mentioned: your brand is named but not linked. You are partially visible, but the AI is not passing users to your site.
  • Described but wrong: the AI talks about you but gets it wrong. You have a misinformation problem, which is worse than being invisible.
  • Absent: no mention at all. You are invisible for this query on this platform.

Total up your scores. Here is how to read the result.

20 or more “Cited”:

you are in the top 5% of businesses in your category for AI visibility. Protect the position. The next job is making sure you stay there as the platforms update their training data.

10 to 19 “Cited”:

solid but inconsistent. Usually this means you are strong on one or two platforms (often Perplexity, because it leans on fresh web data) and weak elsewhere. The fix is platform-specific and methodical.

4 to 9 “Cited”:

you have foundational AI visibility but are being beaten by competitors on most queries. This is the most common score for businesses with decent SEO but no AEO strategy. Fixable in 90 days with the right framework.

Fewer than 4 “Cited”:

you are functionally invisible. Every time a prospect asks AI about your category, someone else gets the enquiry. This is fixable, but it needs to start now. Every week you delay is a week competitors consolidate their position.

Any “Described but wrong” scores at all:

prioritise fixing these before anything else. Misinformation in AI outputs compounds over time because other AI models train on each other’s outputs. A wrong description today becomes a canonicalised wrong description in six months.
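If you logged your outcomes in a list or spreadsheet, the banding above can be tallied automatically. A minimal sketch, assuming you record one outcome string per prompt-and-platform pair using the four scoring labels; the function name and messages are mine, not a fixed part of the method:

```python
from collections import Counter

def interpret(outcomes):
    """Tally 25 outcome strings and map the 'Cited' count to the bands above."""
    counts = Counter(outcomes)
    cited = counts["Cited"]
    # Misinformation takes priority regardless of the Cited count.
    if counts["Described but wrong"] > 0:
        priority = "Fix the wrong descriptions first: misinformation compounds."
    else:
        priority = "No misinformation found."
    if cited >= 20:
        band = "Top 5% for AI visibility. Protect the position."
    elif cited >= 10:
        band = "Solid but inconsistent. The fix is platform-specific."
    elif cited >= 4:
        band = "Foundational visibility, beaten on most queries. Fixable in 90 days."
    else:
        band = "Functionally invisible. Start now."
    return cited, band, priority

# Example run with made-up results.
sample = ["Cited"] * 6 + ["Mentioned"] * 8 + ["Absent"] * 10 + ["Described but wrong"]
cited, band, priority = interpret(sample)
print(cited, band, priority, sep="\n")
```

Re-run the same tally in 30, 60 and 90 days to see whether the "Cited" count is moving.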

What the common failure patterns look like

After running this audit for dozens of businesses, the same three patterns come up again and again.

Pattern 1: Google-visible, AI-invisible.

The site ranks in positions 1-5 on Google for its main commercial keywords but gets zero citations in ChatGPT or Claude. Usually caused by content that reads well to humans but has no structured entity signals, no clear topical authority, and no citation-friendly paragraph structure. Fixable with content restructuring and schema markup.

Pattern 2: Cited for the wrong thing.

The brand is mentioned, but for a side service or old positioning. A firm that rebranded from “tax accountants” to “outsourced CFO services” still gets cited as “tax accountants.” This is a training data lag problem. Fix it by publishing aggressive, well-structured new positioning content and building external citations that reinforce the new identity.

Pattern 3: Cited everywhere except Perplexity.

Perplexity weights recent web freshness heavily. If your content is over 18 months old on average, Perplexity will skip you in favour of newer sources. Fix with a republishing and freshness strategy.

If your audit surfaces any of these patterns, you have a clear starting point.

What to do with your results

Three things, in this order.
 
  1. First, screenshot everything. The exact wording of AI answers changes week to week. You want a time-stamped record of where you started so you can measure progress in 30, 60 and 90 days. This is also useful for the conversation you are about to have with whoever runs your marketing.
  2. Second, identify your biggest gap. Is it one platform (you are invisible on Perplexity only)? One prompt type (you show up for direct brand queries but not category queries)? One competitor (the same rival appears in every answer where you do not)? Your biggest gap is your first fix.
  3. Third, decide who owns this. AI visibility is no longer a “nice to have” line item in an SEO report. It needs an owner, a monthly review, and a roadmap. If nobody in your business is doing this right now, your competitors are pulling ahead every week.

What this audit cannot tell you

Be honest about the limits.

This is a sampling audit, not a comprehensive one. Five prompts give you a directional read, not a statistically significant one. For a full picture, you need to run 30 to 50 prompts across multiple variations and track them weekly. That is what a proper AI visibility audit looks like, and it is the first thing I do with every new consulting client.

It also does not tell you why you are absent or cited. Knowing you are invisible on ChatGPT for category queries is useful. Knowing that the reason is weak entity signals in your schema markup, thin topical content, or missing citations from trusted third-party sources is the actionable version. That requires deeper diagnostic work.
 
And it does not tell you what your competitors are doing right. To understand that, you need to reverse-engineer the sites AI platforms are consistently citing in your category and identify the common signals. That is the basis of the Layered Ranking System.
 
But as a first check, it is enough. If you have never done this before, running the 10-minute audit will tell you more about your real visibility than any SEO report you have read in the last 12 months.

Your next step

Run the audit this week. Not next month. Not when you have more time. This week.
 
If you want to do it properly, download the 10-minute AI visibility audit checklist. It has the prompts, the scoring grid, and a simple summary page you can share with your team.
 
If the results scare you, and they scare most people the first time, that is actually the useful outcome. It means you now have evidence. Evidence is what gets AI visibility onto the strategic agenda instead of being dismissed as “something the SEO agency will handle.”
 
And if you want someone to run the proper version, the 30-prompt, multi-platform, weekly-tracked, competitor-benchmarked audit, that is what we do at AI Ranking Academy. No 40-page report. A clear picture of where you stand, a ranked list of what to fix, and a 90-day roadmap to close the gap.
 
Most businesses find out they are invisible to AI search by accident, usually when a customer says “I asked ChatGPT and it recommended your competitor.”
 
Find out on purpose instead. Ten minutes, starting now.
