85.7% of companies score 0-20 on AI visibility. The top 5.2% average 93.4. The bottom 85.5% average 1.6. That's a 58x gap. I pulled this from 2,014 completed brand reports in the Loamly database (as of Feb 15, 2026). The distribution is not a bell curve. It's a cliff.
I covered the competitive position breakdown in detail in the 85-5 Rule post. This post goes one layer deeper, into the audit-level findings from that dataset: what the scores mean, what separates leaders from the invisible 85.7%, and what I've found running individual brand audits.
Want to see where your company sits? Run a free check at loamly.ai/check. It takes 3 minutes.
The dataset
Source: Loamly brand_reports table
Total completed reports: 2,014
Date: As of February 15, 2026
Platforms queried per report: ChatGPT (GPT-4o), Claude, Gemini, Perplexity
Prompts per report: 48 queries across all four platforms
One honest limitation: this dataset skews toward companies that cared enough about AI visibility to run a check. Companies with strong existing visibility don't feel urgency, so they're underrepresented. That means the 85.5% invisible figure may overstate the invisible share in the real market.
The score distribution
| Score Tier | Companies | % |
|---|---|---|
| 0-20 (Invisible) | 1,726 | 85.7% |
| 21-40 (Low) | 115 | 5.7% |
| 41-60 (Moderate) | 46 | 2.3% |
| 61-80 (Good) | 41 | 2.0% |
| 81-100 (Excellent) | 86 | 4.3% |
202 companies sit in the 21-80 range. That is 10% of the dataset. The rest are either at the bottom or at the top. There is no middle class.
The ChatGPT citation rate data makes this concrete: across 2,014 companies, the average ChatGPT citation rate is 0.704. But 83.7% of companies sit at exactly the 0.6875 baseline: the level where AI only cites them for branded queries (when someone asks about them by name). They are not getting recommended. AI knows they exist. That is not the same as being in the consideration set.
This isn't surprising given what Ahrefs' AI traffic research shows about AI referral patterns: AI traffic is growing 700% year-over-year but it concentrates heavily on a small number of high-authority brands. The distribution in the Ahrefs data matches what I see in the Loamly dataset.
Being at the 0.6875 baseline means you appear when someone types your company name. It does not mean AI recommends you when a buyer asks "best [category] tool for [use case]."
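For what it's worth, 0.6875 is exactly 11/16, which hints at where the baseline comes from. Here is a minimal sketch of the arithmetic, assuming each platform's rate is cited-queries over 16 queries with 11 of them branded (both constants are my inference from the 11/16 ratio, not documented Loamly internals):

```python
# Sketch of the citation-rate arithmetic. The 16-query / 11-branded
# split is inferred from 0.6875 = 11/16; treat both constants as assumptions.

def citation_rate(cited: list[bool]) -> float:
    """Fraction of queries on one platform where the brand was cited."""
    return sum(cited) / len(cited)

BRANDED, UNBRANDED = 11, 5  # assumed split of 16 queries per platform

# A company cited only when asked about by name lands exactly on the baseline:
branded_only = [True] * BRANDED + [False] * UNBRANDED
print(citation_rate(branded_only))  # 0.6875
```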
What separates the 5.2% from the 85.5%
I looked at every signal I have data on. This is what the numbers show:
| Factor | Leaders (104 companies) | Emerging (1,721 companies) |
|---|---|---|
| Avg AI Visibility Score | 93.4 | 1.6 |
| Avg Brand Authority | 90.9+ | ~15 |
| Wikipedia presence | ~90% | ~2% |
| 50+ Reddit mentions | ~85% | ~10% |
| YouTube presence | ~95% | ~40% |
| Avg GEO Score | ~65 | ~50 |
GEO technical score is not the differentiator. Leaders average ~65, emerging companies ~50. A 30% difference on the technical layer. Brand authority differs by 6x. Wikipedia presence differs by 45x.
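A quick arithmetic check on those ratios, using the values reported in the table:

```python
# Ratios computed directly from the reported table values.
geo_ratio       = 65 / 50    # 1.3x  -> the ~30% difference on the technical layer
authority_ratio = 90.9 / 15  # ~6.1x -> the brand authority gap
wikipedia_ratio = 90 / 2     # 45x   -> the Wikipedia presence gap (~90% vs ~2%)
print(geo_ratio, round(authority_ratio, 1), wikipedia_ratio)  # 1.3 6.1 45.0
```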
The technical SEO instinct (optimize schemas, publish llms.txt, fix crawl issues) matters, but it is table stakes. It gets you into the game. It does not win the game.
The path from 1.6 to 93.4 runs through Wikipedia, Reddit coverage, press mentions, and the kind of brand authority that takes years to accumulate. That is the uncomfortable truth in this dataset.
What the audit layer shows (qualitative)
The score data covers 2,014 companies. The audit data (verbatim responses, citation chains, individual fact checks) comes from the paid audits I've run so far.
Two things appear in nearly every audit:
The decision-stage gap. Companies appear in awareness-stage prompts ("what is [category]") but vanish at consideration and decision ("best [category] tool for startups", "compare [Brand] vs [Competitor]"). AI knows they exist. It does not put them in the shortlist. This is the gap the score doesn't measure well: you can score 12 overall and still be invisible for every prompt a buyer actually uses when choosing a vendor.
Factual errors. In every paid audit I've run, I've found at least one thing AI is saying about the company that is factually wrong: outdated pricing, deprecated features still cited as selling points, competitor claims repeated as fact. The source is almost always a cached article or a competitor comparison page that AI trusts more than the company's own site. Most companies have never seen these errors because they've never read the verbatim AI responses. The Princeton GEO study (KDD 2024) confirmed that the sources AI cites shape what it says, which means inaccurate third-party content about your brand directly causes inaccurate AI outputs.
I don't have a clean percentage on hallucination rates from the free check data; that data doesn't include the fact-checking layer. But qualitatively, I have not yet audited a company where everything AI says is accurate.
Platform breakdown
The three platforms in the Loamly free check have different citation rates:
| Platform | Avg Citation Rate | % at 0.6875 Baseline |
|---|---|---|
| ChatGPT | 0.704 | 83.7% |
| Claude | 0.639 | 79.6% |
| Gemini | 0.576 | 44.9% |
Gemini has a smaller percentage at the baseline; it's more variable, which means more upside for companies that invest in Google-specific signals (Google Knowledge Graph, structured data, Google Business Profile). SE Ranking's AI traffic research shows Gemini referral traffic grew 388% year-over-year in 2025, the fastest of any platform, which makes that variability increasingly important. ChatGPT and Claude are harder to move and more concentrated among a small number of high-authority brands.
Perplexity data shows zero in the current dataset due to a data collection issue I'm still investigating. I'm not publishing Perplexity-specific figures until I can verify them.
What actually moves AI visibility
The data shows what signals predict high scores. What I don't have yet is clean before/after data on interventions at scale: the paid audit product is new and I haven't collected enough follow-up scores to publish reliable improvement rates.
What I can say from the signal data is which interventions make sense given the correlation patterns:
- Build off-site authority first. Wikipedia, Reddit presence, press coverage. The 45x Wikipedia gap between leaders and emerging is the clearest signal in the dataset.
- Publish answer-first content targeting the prompts you're losing. Sub-query data from the audit shows exactly what those prompts are.
- Fix AI-cited inaccuracies at the source. Not on your homepage: at the specific pages AI is pulling from.
- Technical GEO as baseline. llms.txt, FAQPage schema, Organization schema (a sketch follows this list). Necessary but not sufficient.
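To make that baseline layer concrete, here is a minimal sketch of an Organization schema block, built as a Python dict and serialized to JSON-LD. Every field value is a placeholder, not a Loamly recommendation:

```python
# Minimal Organization schema in JSON-LD, generated from a Python dict.
# All values are placeholders for illustration only.
import json

organization = {
    "@context": "https://schema.org",
    "@type": "Organization",
    "name": "Example Co",                # placeholder brand name
    "url": "https://www.example.com",
    "logo": "https://www.example.com/logo.png",
    "sameAs": [                          # off-site profiles AI can cross-check
        "https://en.wikipedia.org/wiki/Example_Co",
        "https://www.youtube.com/@exampleco",
    ],
}

# Emit the <script> tag to embed in the site's <head>.
print(f'<script type="application/ld+json">{json.dumps(organization)}</script>')
```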
I'll publish improvement rate data once I have enough follow-up audits to be statistically honest about it.
Methodology and limitations
Data source: Loamly brand_reports table, 2,014 completed reports as of Feb 15, 2026.
AI visibility score: Derived from citation rates across ChatGPT, Claude, Gemini, and Perplexity on 48 standardized queries per report. Each report runs both branded and unbranded queries.
Competitive position classification: Emerging (0-20), Niche (21-60), Challenger (61-80), Leader (81-100), based on AI visibility score combined with brand authority.
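As a score-only sketch, the bands map like this (the real classification also folds in brand authority, which this ignores):

```python
def tier(score: float) -> str:
    # Score bands as stated above; brand authority is omitted in this sketch.
    if score <= 20:
        return "Emerging"
    if score <= 60:
        return "Niche"
    if score <= 80:
        return "Challenger"
    return "Leader"

print(tier(1.6), tier(93.4))  # Emerging Leader
```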
Limitations:
- Sample bias toward companies already interested in AI visibility
- Perplexity data excluded due to collection issue
- Audit-layer qualitative findings (fact errors, decision-stage gaps) are based on paid audits, not the full 2,014-report dataset
- Before/after improvement data not published; the sample is too small to be reliable
The 85-5 Rule post covers the competitive position breakdown in detail. For platform-level AI traffic data, the State of AI Traffic 2026 benchmark has the full picture.
If you want to see where your company sits in this distribution, run the free check. No marketing spin. Just real data about your AI visibility.
Last updated: February 19, 2026