Short answer: 92.2% of companies (1,652 out of 1,791) cluster at a visibility floor of 0.6875—acknowledged but not recommended by AI. Brand authority (r=0.389) predicts visibility 2.3x more strongly than on-site GEO optimization (r=0.168). Wikipedia presence shows the strongest lift (+14.6 points), but YouTube (+11.1) and Reddit (+10.8) are more accessible paths for most companies. Platform fragmentation is substantial: ChatGPT-Claude correlation is only 0.461, requiring platform-specific strategies. This report analyzes 1,792 companies across ChatGPT, Claude, Gemini, and Perplexity to identify what actually predicts AI recommendations.
Abstract
This study presents the largest comprehensive analysis of AI visibility to date, examining 1,792 companies across four major AI platforms (ChatGPT, Claude, Gemini, and Perplexity) from January 18-25, 2026. Using statistical analysis of citation rates, brand authority signals, and Generative Engine Optimization (GEO) scores, we identify the factors that predict AI recommendation patterns and quantify platform-specific differences in visibility dynamics.
Key findings: (1) 92.2% of companies cluster at a visibility floor of 0.6875, representing acknowledgment without recommendation; (2) Brand authority (r=0.389) predicts visibility 2.3x more strongly than on-site GEO optimization (r=0.168); (3) Platform fragmentation is substantial, with ChatGPT-Claude correlation of only 0.461; (4) Authority signal composition matters: Wikipedia presence (+14.6 point lift), YouTube (+11.1), and Reddit (+10.8) show the strongest effects; (5) The overall visibility distribution is highly skewed, with median visibility at zero and mean at 9.8 points.
These findings have immediate strategic implications for companies seeking to optimize AI visibility. Authority signals from distributed web presence (YouTube, Reddit, news) outperform on-site optimization alone, suggesting that traditional SEO approaches must be supplemented with off-site authority building to achieve AI recommendation status.
Introduction
I spent four months building Loamly because I couldn't find reliable data on AI visibility. Every tool I tried missed 80%+ of AI referrals. Now I'm sharing what I've learned from analyzing 1,792 companies—the largest AI visibility benchmark dataset currently available.
AI visibility is not a soft metric anymore. Citations drive traffic, and traffic compounds into brand and revenue. The Princeton GEO study shows citations directly change model outputs and rankings (KDD 2024). Research from Ahrefs analyzing 75,000 brands found that brand mentions and off-site signals correlate more strongly with AI visibility than traditional SEO metrics (Ahrefs study). The problem is that most companies are flying blind. They don't know what "good" looks like or which levers actually move visibility.
This report is the benchmark I wish existed a year ago. It's based on 1,792 company reports (Jan 18-25, 2026), with comprehensive visibility data across ChatGPT, Claude, Gemini, and Perplexity. Unlike previous studies that relied on smaller samples or single-platform analysis, this dataset provides the statistical power needed to identify robust patterns and quantify relationships between authority signals, optimization efforts, and AI recommendation outcomes. This builds on earlier research including the Princeton GEO study (KDD 2024), Ahrefs' brand visibility analysis (Ahrefs), and Profound's platform citation pattern research (Profound).
This isn't marketing fluff. These are real numbers you can verify, combined with patterns I've observed building AI traffic detection. Every statistic in this report comes from direct analysis of our brand_reports database, which stores the results of systematic queries to AI platforms for each company.
If you care about demand, this matters for a simple reason: recommendations are the choke point. AI traffic only appears after you're recommended. If you're mentioned but not preferred, you get noise, not pipeline. This report is about what separates "mentioned" from "recommended."
If you only read one section, read the Executive Summary. It's the shortest path to the core truths.
Executive Summary
This study analyzes 1,792 companies to identify what predicts AI visibility across ChatGPT, Claude, Gemini, and Perplexity. Five findings stand out:
1. The visibility floor is more extreme than previously documented. 92.2% of companies (1,652 out of 1,791) cluster at exactly 0.6875 visibility—the default "acknowledged but not recommended" state. Only 7.8% break above this floor. This represents a more severe concentration than earlier analyses suggested, indicating that breaking above baseline acknowledgment is the primary competitive challenge.
2. Brand authority predicts visibility 2.3x more strongly than GEO optimization. Brand authority correlation with overall visibility is 0.389, while GEO score correlation is 0.168. This 2.3x difference confirms that off-site authority signals dominate on-site optimization in predicting AI recommendations. However, both matter: the combination of authority and optimization outperforms either alone.
3. Authority signal composition reveals clear hierarchy. Wikipedia presence shows the strongest lift (+14.6 points), followed by YouTube (+11.1) and Reddit (+10.8). Despite Wikipedia's strength, only 17% of companies have it, while 85% have YouTube presence and 81% have Reddit. This suggests YouTube and Reddit represent more accessible paths to visibility improvement for most companies.
4. Platform fragmentation is substantial but predictable. ChatGPT-Claude correlation is 0.461, indicating moderate alignment but insufficient for single-platform strategies. Platform-specific patterns emerge: ChatGPT shows the tightest clustering (92% at 0.6875 baseline), while Gemini shows the highest variance (stdDev 0.319). Claude sits between these extremes.
5. The visibility distribution is highly skewed. Median overall visibility is 0, mean is 9.8, and the 90th percentile is 33. This extreme right-skew indicates that most companies achieve minimal visibility, while a small subset achieves substantial visibility. The distribution suggests winner-take-most dynamics rather than gradual improvement curves.
How to Use This Report
This benchmark is not just for reading. It is a baseline you can use to set targets, prioritize work, and measure progress. If you already track AI traffic, use the visibility distribution to diagnose why that traffic is flat. If you do not track AI traffic yet, use visibility as the leading indicator. Traffic only shows up after recommendations.
The easiest way to apply the data is to answer three questions:
- Are we above the baseline? If not, your goal is to move above 0.6875. That is the first unlock.
- Are we recommended for categories? If your brand visibility is high but category visibility is low, you are stuck in the discovery gap.
- Are we consistent across platforms? A single-platform win is good, but it is not a strategy. You need repeatable visibility across ChatGPT, Claude, and Gemini.
If you can answer those three questions for your brand, you can use the rest of the report as a playbook. The findings are not abstract. They translate directly into actions you can take in the next 90 days.
Methodology
Data Collection and Sample
This study analyzes 1,792 completed brand reports generated between January 18-25, 2026 using Loamly's /check tool. Each report represents a systematic analysis of a company's AI visibility across multiple platforms and query types.
Report generation process:
- Each company receives 48 prompts (16 per platform) across brand, category, and comparison query types
- Platforms queried: ChatGPT, Claude, Gemini, and Perplexity
- Brand authority signals collected via Brave Search API (Wikipedia, Reddit, YouTube, news mentions)
- GEO scores calculated from on-site technical analysis (schema markup, structured data, content quality)
- All data stored in PostgreSQL database with full audit trail
Methodology alignment: This approach aligns with best practices from the Princeton GEO study, which used systematic querying across multiple platforms to measure visibility (Princeton GEO). The 48-prompt structure ensures comprehensive coverage of brand, category, and comparison queries, similar to methodologies used in academic research on AI citation patterns (KDD 2024).
Data completeness:
- Overall visibility scores: 100% (1,792/1,792)
- Brand authority scores: 100% (1,792/1,792)
- GEO scores: 99.94% (1,791/1,792)
- Platform visibility data: 100% coverage across all four platforms
Variables and Metrics
Dependent variable: Overall AI visibility score (0-100), calculated as the average citation rate across platforms, weighted by platform query success rates.
Independent variables:
- Brand authority score (0-100): Composite metric derived from web mentions, Wikipedia presence, Reddit activity, YouTube presence, and news coverage
- GEO score (0-100): On-site optimization metric based on schema markup, structured data, content quality, and technical implementation
- Platform-specific citation rates (0-1): Citation frequency for each platform (ChatGPT, Claude, Gemini, Perplexity)
- Authority signal indicators: Binary and count variables for Wikipedia, Reddit, YouTube, news, infobox presence
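For concreteness, here is a minimal sketch of how a score like the dependent variable above could be computed, assuming it is a weighted average of per-platform citation rates scaled to 0-100. The weighting scheme, function name, and platform keys are illustrative assumptions, not Loamly's production code.

```python
# Minimal sketch of an overall visibility score: a weighted average of
# per-platform citation rates, scaled to 0-100. Names and weighting scheme
# are illustrative assumptions.

PLATFORMS = ["chatgpt", "claude", "gemini", "perplexity"]

def overall_visibility(citation_rates: dict, query_success: dict) -> float:
    """citation_rates: platform -> citation rate in [0, 1]
    query_success:  platform -> fraction of that platform's queries that completed"""
    weights = {p: query_success.get(p, 0.0) for p in PLATFORMS}
    total_weight = sum(weights.values())
    if total_weight == 0:
        return 0.0
    weighted = sum(citation_rates.get(p, 0.0) * weights[p] for p in PLATFORMS)
    return 100.0 * weighted / total_weight

# Example: a company stuck at the 0.6875 baseline on three platforms, with
# Perplexity returning no usable data.
rates = {"chatgpt": 0.6875, "claude": 0.6875, "gemini": 0.6875, "perplexity": 0.0}
success = {"chatgpt": 1.0, "claude": 1.0, "gemini": 1.0, "perplexity": 0.0}
print(overall_visibility(rates, success))  # 68.75
```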
Statistical Analysis
Correlation analysis: Pearson correlation coefficients calculated between brand authority, GEO scores, and overall visibility. Platform-specific correlations computed to assess cross-platform alignment.
Distribution analysis: Percentiles (p10, p25, p50, p75, p90), means, and standard deviations calculated for all continuous variables to characterize visibility distributions.
Signal lift analysis: Mean visibility scores compared between companies with and without specific authority signals (Wikipedia, Reddit, YouTube) to quantify the impact of each signal type.
Baseline analysis: Frequency analysis of companies clustering at the 0.6875 visibility floor to quantify the baseline acknowledgment state.
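Three of these analyses (correlations, distributions, and the baseline share) can be reproduced in a few lines; the signal lift calculation is shown later alongside the lift table. The file and column names below are hypothetical.

```python
# Illustrative reproduction of the correlation, distribution, and baseline
# analyses, assuming one exported row per company. File and column names are
# hypothetical.
import pandas as pd
from scipy.stats import pearsonr

df = pd.read_csv("brand_reports_export.csv")

# Correlation analysis (Pearson r between authority, GEO, and visibility)
r_auth, _ = pearsonr(df["brand_authority"], df["overall_visibility"])
r_geo, _ = pearsonr(df["geo_score"], df["overall_visibility"])

# Distribution analysis (percentiles, mean, standard deviation)
dist = df["overall_visibility"].describe(percentiles=[0.10, 0.25, 0.50, 0.75, 0.90])

# Baseline analysis (share of companies at the 0.6875 floor on ChatGPT)
baseline_share = (df["chatgpt_visibility"] == 0.6875).mean()

print(f"authority r={r_auth:.3f}, geo r={r_geo:.3f}, at baseline={baseline_share:.1%}")
print(dist)
```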
Data Quality and Limitations
Strengths:
- Large sample size (n=1,792) provides statistical power for robust conclusions
- Complete platform coverage enables cross-platform comparisons
- Real-time data collection ensures current relevance
- Systematic query structure enables consistent measurement
Limitations:
- Temporal scope: Data collected over one week (Jan 18-25, 2026) represents a snapshot rather than longitudinal analysis
- Perplexity data issue: All Perplexity visibility scores show zero in this dataset, indicating a data collection or storage issue that requires investigation
- Industry classification: Industry labels are inconsistent and may not reflect true competitive categories
- Causality: Correlational analysis cannot establish causal relationships; observed associations may reflect unmeasured confounds
- Generalizability: Sample may not represent all companies equally; bias toward companies that have used the /check tool
Data verification: All statistics in this report can be verified by querying the brand_reports database. The analysis scripts used to generate these findings are available in the project repository. Readers can run their own reports at loamly.ai/check to see the methodology in action. This transparency aligns with best practices for research reproducibility, as recommended in academic research on AI visibility (KDD 2024).
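As one example of what verification could look like, the sketch below counts companies at the baseline directly from the database. The brand_reports table name comes from this report; the connection string and column names are assumptions about the schema.

```python
# Illustrative verification query against the brand_reports table.
# The DSN and column names are placeholders, not the actual schema.
import psycopg2

conn = psycopg2.connect("postgresql://readonly@localhost/loamly")  # placeholder DSN
with conn, conn.cursor() as cur:
    cur.execute(
        """
        SELECT COUNT(*) FILTER (WHERE chatgpt_visibility = 0.6875) AS at_baseline,
               COUNT(*) AS total
        FROM brand_reports
        WHERE created_at BETWEEN '2026-01-18' AND '2026-01-25'
        """
    )
    at_baseline, total = cur.fetchone()
    print(f"{at_baseline}/{total} companies at the 0.6875 ChatGPT baseline")
```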
Finding #1: The Visibility Floor—92% of Companies Stuck at Baseline
The most striking finding in this dataset is the extreme concentration at a single visibility value: 0.6875. This appears to represent a "tie" state where AI systems acknowledge a brand's existence but do not clearly recommend it. In practice, this manifests as the model recognizing the brand name in response to direct queries but not preferring it in lists, comparisons, or category prompts.
The scale of the problem: 92.2% of companies (1,652 out of 1,791) cluster at exactly 0.6875 visibility. This is even more extreme than earlier analyses suggested. Only 7.8% of companies break above this floor, making baseline escape the primary competitive challenge in AI visibility.
Platform-Specific Baseline Patterns
| Platform | Mean | p25 | Median | p75 | p90 | Std Dev | At 0.6875 |
|---|---|---|---|---|---|---|---|
| ChatGPT | 0.704 | 0.6875 | 0.6875 | 0.6875 | 0.8125 | 0.126 | ~92% |
| Claude | 0.640 | 0.6875 | 0.6875 | 0.6875 | 0.6875 | 0.203 | ~75% |
| Gemini | 0.562 | 0.500 | 0.6875 | 0.6875 | 0.875 | 0.319 | ~60% |
| Perplexity | 0.000 | 0 | 0 | 0 | 0 | 0 | n/a* |
*Perplexity shows all zeros—this is a data collection issue requiring investigation.
Key observations:
- ChatGPT shows the tightest clustering: 92% of companies at 0.6875, with minimal variance (stdDev 0.126)
- Claude shows moderate variance: More companies break above baseline, but still 75% cluster at 0.6875
- Gemini shows highest variance: Only 60% at baseline, with wider distribution (stdDev 0.319) suggesting more differentiation
- Platform differences matter: The same company can be at baseline on ChatGPT while breaking above on Gemini
Overall Visibility Distribution
The overall visibility distribution is highly skewed:
| Metric | Value |
|---|---|
| Mean | 9.8 |
| Median | 0 |
| p25 | 0 |
| p75 | 0 |
| p90 | 33 |
| Standard Deviation | 23.5 |
Interpretation: Most companies achieve zero or minimal visibility. The median is zero, meaning half of all companies have no meaningful AI visibility. The mean (9.8) is pulled upward by a small subset of high-visibility companies. The 90th percentile (33) indicates that even top performers achieve moderate visibility rather than dominance.
This distribution suggests winner-take-most dynamics rather than gradual improvement curves. Companies either break above the baseline and achieve meaningful visibility, or they remain stuck at the floor. There is little middle ground.
Why the Baseline Matters
AI traffic behaves fundamentally differently than search traffic. In traditional search, you can survive on low rankings and still capture long-tail clicks. In AI, you usually get one recommendation. If you are not preferred, you get nothing. This is why the baseline is a floor, not a milestone.
The 0.6875 value likely represents a default "acknowledgment" state where AI systems recognize the brand from training data but lack sufficient signals to confidently recommend it. Breaking above this floor requires signals that make the model confident enough to recommend you by name—typically third-party proof, not on-site tweaks.
What separates the 7.8% who break above it? The next sections show the pattern: authority signals, not on-site tweaks; distributed web presence, not single-channel dominance; and platform-specific coverage, not a single best platform.
How to Read a Visibility Score
The benchmark is continuous, but you can interpret it in bands:
- 0.00 to 0.50: low visibility. The brand is rarely cited, even on brand prompts.
- 0.6875: baseline acknowledgment. You appear, but you are not preferred.
- 0.75+ (above p75): active recommendation tier. You are cited consistently.
- 0.90+ (top decile): category dominance. You are the default in most prompts.
These are not rigid cutoffs. They are the simplest way to explain where you sit in the distribution.
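If you want to apply these bands programmatically, a small helper like the one below is enough. The cutoffs mirror the list above; the function itself is a convenience, not part of the benchmark methodology.

```python
# Convenience helper mapping a per-platform visibility score (0-1) onto the
# bands described above. Not part of the benchmark methodology.

def visibility_band(score: float) -> str:
    if score >= 0.90:
        return "category dominance (top decile)"
    if score >= 0.75:
        return "active recommendation tier"
    if abs(score - 0.6875) < 1e-9:
        return "baseline acknowledgment"
    if score <= 0.50:
        return "low visibility"
    return "between baseline and recommendation"

for s in (0.25, 0.6875, 0.8125, 0.9375):
    print(s, "->", visibility_band(s))
```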
Finding #2: Brand Authority Predicts Visibility 2.3x More Strongly Than GEO
This is the most consistent result across every cut of the data. Brand authority is the strongest predictor of AI visibility. GEO matters, but it is not the primary driver.
These correlations use the full valid dataset (n=1,791 companies with complete data).
| Correlation (n=1,791) | Value | r² (Variance Explained) | Interpretation |
|---|---|---|---|
| Brand authority ↔ Overall visibility | 0.389 | 15.1% | Moderate positive correlation |
| GEO score ↔ Overall visibility | 0.168 | 2.8% | Weak positive correlation |
| Authority advantage | 2.3x | 5.4x | Authority explains 5.4x more variance |
Statistical significance: Both correlations are statistically significant (p < 0.001), but the magnitude difference is substantial. Brand authority explains approximately 15.1% of variance in overall visibility, while GEO explains only 2.8%. This 5.4x difference in variance explained confirms that authority signals are the primary driver of AI visibility.
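The 2.3x and 5.4x figures are plain ratios of the reported correlations and their squares, which you can check in a few lines:

```python
# Worked arithmetic behind the 2.3x and 5.4x figures.
r_authority, r_geo = 0.389, 0.168

print(round(r_authority / r_geo, 1))                  # 2.3  -> correlation advantage
print(round(r_authority**2, 3), round(r_geo**2, 3))   # 0.151, 0.028 -> 15.1% vs 2.8% variance
print(round(r_authority**2 / r_geo**2, 1))            # 5.4  -> variance-explained gap
```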
The practical meaning: authority signals explain visibility far better than on-site optimization. That lines up with the broader ecosystem. Ahrefs' 75k-brand analysis of AI visibility shows that brand mentions and off-site signals correlate more strongly with visibility than classic SEO metrics (Ahrefs study). Semrush's AI visibility research similarly found that backlinks have moderate influence but real gains appear only after reaching certain authority levels (Semrush). The Princeton GEO study demonstrated that improving citation surfaces can boost visibility by up to 40% in model responses (Princeton GEO).
This is not a "SEO is dead" argument. It is a prioritization argument. If you have a fixed budget, the data says you should push authority signals first and treat GEO as the baseline. It is table stakes, not the differentiator.
You can see this in the correlation gap itself. A 0.389 correlation is meaningfully predictive—not perfect, but substantial. A 0.168 correlation is closer to noise. This does not mean GEO is useless. It means GEO without authority rarely wins. The pattern is simple: if a company is never mentioned off-site, the model does not feel safe recommending it, even if the on-site content is flawless.
Authority Signal Lift Analysis
Which authority signals actually move the needle? We compared mean overall visibility scores between companies with and without each signal:
| Signal | With Signal | Without Signal | Lift | Signal Prevalence |
|---|---|---|---|---|
| Wikipedia | 21.9 (n=305) | 7.3 (n=1,486) | +14.6 | 17% |
| YouTube | 11.4 (n=1,531) | 0.4 (n=260) | +11.1 | 85% |
| Reddit | 11.9 (n=1,449) | 1.1 (n=342) | +10.8 | 81% |
| News | Requires analysis | Requires analysis | Requires analysis | 58% |
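The lift figures above are simple differences in group means. A minimal sketch of the calculation, assuming boolean signal columns with hypothetical names, looks like this:

```python
# Sketch of the lift calculation: difference in mean overall visibility
# between companies with and without a signal. Column names are assumptions.
import pandas as pd

def signal_lift(df: pd.DataFrame, signal_col: str) -> dict:
    has_signal = df[signal_col].astype(bool)
    with_mean = df.loc[has_signal, "overall_visibility"].mean()
    without_mean = df.loc[~has_signal, "overall_visibility"].mean()
    return {
        "with": round(with_mean, 1),
        "without": round(without_mean, 1),
        "lift": round(with_mean - without_mean, 1),
        "prevalence": round(has_signal.mean(), 2),
    }

df = pd.read_csv("brand_reports_export.csv")  # hypothetical export
for col in ["has_wikipedia", "has_youtube", "has_reddit"]:
    print(col, signal_lift(df, col))
```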
Key findings:
- Wikipedia shows the strongest lift (+14.6 points) despite only 17% of companies having it. This suggests Wikipedia presence is a strong differentiator, but it's not a requirement for visibility.
- YouTube and Reddit show similar, substantial lifts (+11.1 and +10.8 respectively) and are far more accessible—85% and 81% of companies have these signals. This makes them the most practical optimization targets for most companies.
- Signal prevalence matters: While Wikipedia has the strongest lift, its low prevalence (17%) means most companies cannot pursue it as a primary strategy. YouTube and Reddit offer better accessibility-to-impact ratios.
The counterintuitive insight: Reviews alone do not predict visibility in our dataset. This does not mean reviews are useless; it means they are not a primary AI recommendation signal by themselves. AI models appear to prioritize diverse, independent sources over concentrated review corpora. This aligns with research showing that AI systems weight citation source diversity and quality over simple mention frequency (Princeton GEO). The Tow Center at Columbia Journalism Review found that AI search engines show high error rates in citations, which may explain why diverse source portfolios outperform concentrated review sites (CJR).
Signal composition hypothesis: These are single-signal averages. Real-world performance likely compounds. The companies with the highest visibility probably show up across multiple signals simultaneously: YouTube plus Reddit plus news plus Wikipedia or infobox. The lift table is a directional guide, not a recipe. Future research should examine signal interaction effects to identify optimal authority signal portfolios. Research from Beutler Ink suggests that Wikipedia and Reddit together create a "trust multiplier" where AI systems weight mentions more heavily when they appear in multiple trusted contexts (Beutler Ink).
For a deeper breakdown of authority vs GEO and platform-specific behavior, see Brand Authority vs GEO.
Finding #3: Platform Fragmentation Is Substantial and Predictable
Platform correlations reveal moderate but insufficient alignment for single-platform strategies. The data shows that while platforms share some common evaluation criteria, they differ meaningfully in how they weight signals and generate recommendations.
| Platform Pair | Correlation | Interpretation |
|---|---|---|
| ChatGPT ↔ Claude | 0.461 | Moderate alignment |
| ChatGPT ↔ Gemini | Requires analysis | Platform correlation analysis pending |
| Claude ↔ Gemini | Requires analysis | Platform correlation analysis pending |
Statistical interpretation: A correlation of 0.461 indicates that platforms share approximately 21% of variance (r² = 0.212) in their visibility patterns. This means 79% of variance is platform-specific, confirming that cross-platform optimization is necessary rather than optional. Research from Profound shows that only 11% of domains receive citations across both ChatGPT and Perplexity, highlighting the platform fragmentation challenge (Profound). The Digital Bloom AI citation report similarly found that platform-specific citation patterns differ substantially, with Perplexity showing 46.7% Reddit concentration while ChatGPT and Google AI Overviews show different source mixes (Digital Bloom).
Platform Distribution Characteristics
The platforms show distinct distribution patterns:
ChatGPT: Tightest clustering (stdDev 0.126), with 92% of companies at the 0.6875 baseline. This suggests ChatGPT applies consistent evaluation criteria but provides minimal differentiation for most brands.
Claude: Moderate variance (stdDev 0.203), with 75% at baseline. Shows stronger correlation with brand authority (0.399) than ChatGPT (0.312), suggesting Claude weights authority signals more heavily.
Gemini: Highest variance (stdDev 0.319), with only 60% at baseline. This wider distribution suggests Gemini provides more differentiation and may reward optimization efforts more readily than other platforms.
Perplexity: All zeros in this dataset—this is a data collection issue requiring investigation. Perplexity visibility data is not available for analysis in this report.
Strategic Implications
Platform alignment is not guaranteed. You can achieve high visibility on Gemini while remaining invisible on ChatGPT. This makes platform-specific coverage a real GTM requirement, not a nice-to-have.
Platform-specific optimization strategies:
- ChatGPT: Focus on authoritative sources (Wikipedia, news) and structured content
- Claude: Emphasize brand authority signals (strongest authority correlation)
- Gemini: May reward optimization efforts more readily (highest variance suggests more differentiation)
Measurement requirement: If you want consistent AI traffic, you need to track each platform separately and avoid over-generalizing from a single win. Use platform deltas as a diagnostic: when you improve YouTube presence and see Gemini move first, that's expected. The next job is to translate that into ChatGPT and Claude by adding sources they weight more heavily (editorial coverage, technical documentation, and category comparisons).
Finding #4: The Discovery Gap—Brand Recognition vs Category Recommendation
This is the most actionable insight in the dataset. AI systems know who you are (brand recognition), but they do not recommend you for category queries (category recommendation). This gap represents the difference between being "known" and being "chosen."
The gap magnitude: Based on previous analyses of category visibility data, 97.4% of companies have lower category visibility than brand visibility. This pattern appears consistent across all platforms, suggesting it's a fundamental characteristic of how AI systems evaluate brands. This "discovery gap" has been documented in earlier research, with the GEO benchmark report on 100 YC companies finding that 96% of companies are invisible in category queries (GEO Benchmark). The gap represents the difference between being recognized (brand queries) and being recommended (category queries), which is critical for AI-driven discovery.
Think of this as a funnel: brand prompts are awareness, category prompts are consideration, and comparison prompts are decision. Most brands never make it past the top of the funnel in AI responses. That is why your AI traffic stays flat even when you are mentioned.
Why this matters: Category visibility drives discovery. When someone asks "What are the best [category] tools?", you need to appear in that answer. Brand visibility alone only helps when someone already knows your name. If you're not recommended for category queries, you're invisible to most potential customers. Research from Brandastic shows that brands winning visibility in AI answers (not just rankings) see significantly higher conversion rates (Brandastic). The AIOps 2026 report found that a brand must earn "both a mention and a citation" to achieve 40% higher likelihood of consistent visibility (AIOps).
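Measuring the gap for your own data is a one-liner once brand and category scores sit in a table. The column names below are assumptions:

```python
# Rough sketch of the gap statistic: the share of companies whose category
# visibility sits below their brand visibility. Column names are assumptions.
import pandas as pd

df = pd.read_csv("brand_reports_export.csv")
gap_share = (df["category_visibility"] < df["brand_visibility"]).mean()
print(f"{gap_share:.1%} of companies have lower category than brand visibility")
```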
Closing the Discovery Gap
Closing the gap requires three concrete moves:
- Category presence: Publish content that answers the category question without mentioning your brand first. Models learn category structure from these pages. Create comprehensive category guides, "best of" lists, and comparison content that establishes your expertise in the category.
- Comparisons: Create honest comparison pages and "best of" lists with real criteria, not fluff. AI systems need structured, factual comparisons to understand where you fit in the competitive landscape. Comparison tables, feature matrices, and use-case guides help models position you correctly.
- Third-party proof: Reinforce category authority through external sources, not just your own site. AI systems need evidence from multiple sources to confidently recommend you for categories. This means earning mentions in category discussions on Reddit, YouTube tutorials, news coverage, and industry publications.
These moves feel like classic SEO, but the difference is intent. You are not trying to rank on Google. You are trying to become the default answer in AI prompts. The content structure, citation density, and evidence requirements differ from traditional SEO. Research from Onely shows why traditional keyword research fails in AI search—AI systems understand semantic intent rather than exact keyword matches (Onely). The Princeton GEO study found that structured data, clear semantic relationships, and authoritative citations are more important for AI visibility than keyword density (Princeton GEO).
This also explains why most brands underperform. They invest in brand pages and SEO hygiene, but not in category evidence. AI systems need evidence, not just copy. That evidence is comparison tables, buyer guides, reviews from third parties, and practitioners discussing real outcomes.
For a deeper breakdown and examples, see The Discovery Gap.
Finding #5: Platform Variance Reveals Optimization Opportunities
The platforms show dramatically different variance patterns, revealing where optimization efforts are most likely to pay off:
Gemini shows the highest variance (stdDev 0.319), with only 60% of companies clustering at the baseline. This wider distribution suggests Gemini provides more differentiation and may reward optimization efforts more readily than other platforms. If you want to test whether your footprint is working, Gemini will usually show movement first.
ChatGPT shows the tightest clustering (stdDev 0.126), with 92% at baseline. This suggests ChatGPT applies consistent evaluation criteria but provides minimal differentiation for most brands. Breaking above baseline on ChatGPT requires stronger signals than on other platforms.
Claude sits between these extremes (stdDev 0.203), showing moderate variance and the strongest correlation with brand authority (0.399). This suggests Claude may be the most responsive to authority-building efforts.
Strategic implication: Platform variance differences mean you should prioritize platforms where your optimization efforts are most likely to show results. Gemini's high variance suggests it's a good testing ground for new strategies. ChatGPT's tight clustering means you need stronger signals to break through, but the reward may be more stable once achieved. Research from Brian Balfour's framework on AI distribution suggests that platform-specific strategies are necessary because each AI system has different content preferences and citation patterns (Brian Balfour). The State of LLMs 2025 report documents that platform differences are structural, not temporary, requiring long-term platform-specific optimization (State of LLMs).
Measurement requirement: If you only check one platform, you may be misreading your progress. Use Gemini as the early signal and ChatGPT or Claude as the stability check. If your visibility rises only on Gemini, you have not "won" yet. You have a signal. The next step is to replicate the same coverage on surfaces that ChatGPT and Claude rely on. That usually means a mix of YouTube, Reddit, and credible editorial mentions. Research from Evertune shows that AI platforms require different optimization strategies—ChatGPT favors authoritative knowledge bases while Perplexity emphasizes community discussions (Evertune). Mention Network's multi-model optimization guide provides platform-specific recommendations for each AI system (Mention Network).
Discussion: Strategic Implications and Actionable Framework
Prioritized Action Plan
If you are building demand in an AI-first market, here is the prioritized action plan based on the statistical findings:
1. Break above the baseline (92% are stuck here). If you are stuck at 0.6875, you are being acknowledged, not recommended. Your goal is to earn citations that move you above the floor. That usually requires third-party proof, not on-site tweaks. You can write the best FAQ page in the world and still stay at the baseline if no one else mentions you.
2. Invest in authority signals first, GEO second. Brand authority (r=0.389) predicts visibility 2.3x more strongly than GEO (r=0.168). YouTube (+11.1), Reddit (+10.8), and Wikipedia (+14.6) show the strongest lifts. Use GEO as a foundation, not the growth engine. If you only have time for two initiatives this quarter, pick one video surface (YouTube) and one community surface (Reddit).
3. Close the discovery gap. Brand prompts are not enough. You need category prompts, comparisons, and best-of lists to make AI recommend you. This is the difference between "people know you" and "people choose you." Most companies never make it past brand recognition to category recommendation.
4. Treat platforms separately. Platform correlations (0.461) are moderate but insufficient for single-platform strategies. Build per-platform coverage and track it independently. Use Gemini as an early signal (highest variance), ChatGPT as a stability check (tightest clustering), and Claude for authority-focused optimization (strongest authority correlation).
90-Day Implementation Plan
If you want a concrete 90-day plan based on the data, it looks like this:
- Month 1: Produce two category-focused videos and place them on YouTube with clear, query-driven titles. Track citations in Gemini first (highest variance means you'll see movement fastest).
- Month 2: Join three relevant Reddit threads and publish answer-first posts. Track whether category visibility moves. Reddit shows a +10.8 point lift and is accessible to 81% of companies.
- Month 3: Publish one comparison page and one "best [category]" page. Measure whether you move above the 0.6875 baseline. Category content closes the discovery gap.
This is not a hack. It is the shortest path that actually matches the data.
Priority Ordering Heuristic
If you want a clear priority order, use this heuristic based on the lift analysis:
- First: YouTube + Reddit (fastest lift, most accessible—85% and 81% prevalence)
- Second: Category and comparison content (closes the discovery gap)
- Third: Formal authority signals like Wikipedia or infobox (strongest lift but low accessibility—17% prevalence)
That ordering aligns with the lift table and accessibility data. It is not subjective. It is what the dataset shows.
Measurement Framework
If you want a measurement cadence, run a visibility check monthly. Focus on three indicators:
- Baseline share: Are you still stuck at 0.6875? (92% of companies are)
- Category visibility: Are you closing the discovery gap? (97% have lower category than brand visibility)
- Platform spread: Are your wins consistent across platforms? (Correlation 0.461 means platform differences are real)
If you track those three numbers, you will know whether your AI visibility strategy is working long before traffic catches up.
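A monthly check of those three indicators can be scripted in a few lines. The sketch below assumes you log one row per prompt with the platform, query type, and a 0-1 visibility score; field and file names are illustrative.

```python
# Sketch of a monthly check for the three indicators above. Assumes one logged
# row per prompt with platform, query_type, and a 0-1 visibility score.
import pandas as pd

runs = pd.read_csv("monthly_visibility_runs.csv")

# 1. Baseline share: fraction of scores still stuck at exactly 0.6875
baseline_share = (runs["visibility"] == 0.6875).mean()

# 2. Discovery gap: brand visibility minus category visibility
by_type = runs.groupby("query_type")["visibility"].mean()
discovery_gap = by_type.get("brand", 0.0) - by_type.get("category", 0.0)

# 3. Platform spread: divergence between best and worst platform
by_platform = runs.groupby("platform")["visibility"].mean()
platform_spread = by_platform.max() - by_platform.min()

print(f"baseline share: {baseline_share:.1%}")
print(f"discovery gap (brand - category): {discovery_gap:.3f}")
print(f"platform spread (max - min): {platform_spread:.3f}")
```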
Limitations and Future Research
Data Quality Limitations
Perplexity data issue: All Perplexity visibility scores in this dataset show zero, indicating a data collection or storage issue. This prevents analysis of Perplexity-specific patterns and cross-platform comparisons including Perplexity. Future research should investigate and resolve this data quality issue.
Temporal scope: Data collected over one week (Jan 18-25, 2026) represents a snapshot rather than longitudinal analysis. Visibility patterns may change over time as AI models update and training data refreshes. Future research should examine temporal stability of visibility patterns.
Sample composition: The sample may not represent all companies equally. Companies that use the /check tool may differ systematically from those that don't, potentially biasing results toward companies already interested in AI visibility optimization.
Methodological Limitations
Causality: Correlational analysis cannot establish causal relationships. The observed associations between authority signals and visibility may reflect unmeasured confounds. Experimental or quasi-experimental designs would strengthen causal inference.
Industry classification: Industry labels are inconsistent and may not reflect true competitive categories. Industry-specific analyses are limited by classification quality. Future research should develop more robust industry taxonomies.
Signal interaction effects: This analysis examines single-signal effects. Real-world performance likely involves signal interactions and compounding effects. Future research should examine how signal combinations create synergistic or antagonistic effects.
Future Research Directions
- Longitudinal analysis: Track visibility changes over time to understand stability, decay, and refresh cycles. The AIOps 2026 State of AI Search report documents that visibility shifts continuously as models rebuild answers, with only 30% of brands maintaining visibility across back-to-back answer generations (AIOps). Future research should examine these temporal patterns systematically.
- Causal inference: Experimental designs testing whether authority-building interventions actually increase visibility. Current research is primarily correlational; randomized controlled trials would strengthen causal claims.
- Signal interaction modeling: Examine how combinations of authority signals create compounding effects. The current analysis examines single-signal lifts; future research should model signal interactions and optimal authority portfolios.
- Platform-specific mechanisms: Deep-dive into why platforms differ in their visibility patterns. Research from Yext shows that the same search query produces different results across Google and ChatGPT (Yext). Understanding these mechanisms would enable more effective platform-specific strategies.
- Industry-specific patterns: Develop robust industry taxonomies and examine vertical differences. Rich Sanger's analysis of Google AI Overviews across industries found that health and home and garden trigger AI Overviews over 50% of the time, while real estate and automotive trigger them less than 7% (Rich Sanger). Similar patterns likely exist across all AI platforms.
- Category visibility analysis: Expand analysis of category vs brand visibility gap with larger samples. The current finding that 97.4% of companies have lower category visibility than brand visibility suggests a fundamental discovery challenge that warrants deeper investigation.
FAQ
Why focus on citations instead of traffic?
Citations are the root cause of AI traffic. The Princeton GEO study shows that improving citation surfaces directly increases recommendation probability (KDD 2024). When I built Loamly, I discovered that traffic only appears after you're recommended. If you're mentioned but not preferred, you get noise, not pipeline. This report is about what separates "mentioned" from "recommended."
Are AI citations trustworthy?
Not always. The Tow Center at Columbia Journalism Review found high error rates in AI search citations (CJR study). That is exactly why consistent, verifiable brand signals matter. When I analyze companies, I see that brands with strong off-site authority get cited more accurately than brands that only optimize on-site.
Why does Reddit matter so much?
Independent research shows community sources dominate certain AI citation sets. Profound's Perplexity analysis reports that Reddit accounts for 46.7% of Perplexity's top cited sources (Profound analysis). In my dataset, companies with Reddit presence show a +10.8 point visibility lift. It's not about gaming Reddit—it's about being part of genuine conversations that AI models trust.
Does SEO still matter?
Yes, but it is not the primary driver of AI visibility. Brand authority (r=0.389) predicts visibility 2.3x more strongly than GEO optimization (r=0.168). That lines up with Ahrefs' 75k-brand study on AI visibility (Ahrefs study). Think of SEO as table stakes and authority signals as the differentiator. You need both, but if you have a fixed budget, the data says push authority first.
How often should I check my AI visibility?
Monthly is enough for most companies. The platforms don't change that fast, and you need time for your efforts to compound. I track three indicators: baseline share (are you stuck at 0.6875?), category visibility (are you closing the discovery gap?), and platform spread (are your wins consistent?). If you track those three numbers, you'll know whether your strategy is working long before traffic catches up.
What's the fastest way to break above the baseline?
Based on the data, YouTube (+11.1 points) and Reddit (+10.8 points) show the strongest lifts and are most accessible (85% and 81% prevalence). If you only have time for two initiatives this quarter, pick one video surface and one community surface. But remember: these are single-signal averages. Real-world performance compounds. The companies with the highest visibility usually show up across multiple signals simultaneously.
Why is 92% of companies stuck at 0.6875?
This appears to be the default "acknowledgment" state where AI systems recognize the brand from training data but lack sufficient signals to confidently recommend it. Breaking above requires third-party proof—distributed mentions across YouTube, Reddit, news, or Wikipedia—that makes the model confident enough to recommend you by name. On-site optimization alone is rarely sufficient.
What about Perplexity?
Perplexity visibility data shows all zeros in this dataset, indicating a data collection issue. This prevents analysis of Perplexity-specific patterns. Future research will address this limitation.
Conclusion
This study provides the largest comprehensive analysis of AI visibility to date, examining 1,792 companies across four major AI platforms. The findings reveal a stark reality: 92% of companies cluster at a visibility floor, with only 7.8% achieving meaningful visibility above baseline acknowledgment.
The statistical evidence is clear: brand authority predicts visibility 2.3x more strongly than on-site GEO optimization. Authority signal composition matters, with Wikipedia (+14.6), YouTube (+11.1), and Reddit (+10.8) showing the strongest lifts. Platform fragmentation is substantial (correlation 0.461), requiring platform-specific optimization strategies.
These findings have immediate strategic implications. Companies seeking to optimize AI visibility should prioritize authority-building activities (YouTube, Reddit, news) over on-site optimization alone. Platform-specific strategies are necessary rather than optional. The discovery gap—the difference between brand recognition and category recommendation—represents the primary growth opportunity for most companies.
Future research should examine temporal stability, causal mechanisms, signal interaction effects, and industry-specific patterns. As AI platforms continue to evolve, understanding these visibility dynamics becomes increasingly critical for companies competing in AI-first markets.
Check Your AI Visibility
If you want to see where you sit in this benchmark, run a free report at loamly.ai/check. The analysis uses the same methodology described in this report, giving you real-time visibility data across ChatGPT, Claude, Gemini, and Perplexity.
Data availability: The statistical analysis scripts and findings are available for verification. All numbers in this report can be reproduced by querying the brand_reports database using the methodology described above.
Last updated: January 25, 2026
Related Articles
The AI Traffic Attribution Crisis: Why Your Analytics Are Wrong
60% of AI traffic lands as 'direct' in GA4. Here's why your analytics are systematically lying to you.
What 141 Companies With Low Authority But High AI Visibility Have in Common
Loamly analysis of 141 overperformers shows strong AI visibility without high brand authority. The common signals are YouTube and Reddit, not Wikipedia.