I spent the last week analyzing 100 YC-backed companies to understand how they perform across ChatGPT, Claude, and Gemini.
The results revealed something I didn't expect: a hidden baseline that exposes the difference between companies AI models know about versus companies AI models actually recommend.
Update: What the 1,528-Company Dataset Shows
I expanded the analysis to 1,528 companies (Jan 9-21, 2026). The baseline is not a YC artifact. It shows up across the entire dataset.
Average citation rates:
| Platform | Avg Citation Rate |
|---|---|
| ChatGPT | 0.703 |
| Claude | 0.643 |
| Gemini | 0.498 |
Citation rates still cluster at the 0.6875 baseline. This is the "known but not recommended" zone at scale:
| Platform | Companies at 0.6875 Baseline | % of Dataset |
|---|---|---|
| ChatGPT | 1278 | 83.7% |
| Claude | 1217 | 79.6% |
| Gemini | 686 | 44.9% |
This matters because it means the YC results are not an edge case. The same discovery gap appears across the broader market.
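To make the "at baseline" counts concrete, here is a minimal sketch of the tally, assuming per-company citation rates in [0, 1]. The `rates` in the example are toy values, not the published dataset; the baseline itself (11/16) is derived in the methodology below.

```python
# Minimal sketch: count companies sitting exactly at the 11/16 baseline.
BASELINE = 11 / 16  # 0.6875; derivation in the methodology section below

def share_at_baseline(rates: list[float], tol: float = 1e-9) -> float:
    """Fraction of companies whose citation rate sits exactly at the baseline."""
    hits = sum(1 for r in rates if abs(r - BASELINE) <= tol)
    return hits / len(rates)

# Toy example with five companies (not real data):
print(f"{share_at_baseline([0.6875, 0.6875, 1.0, 0.6875, 0.75]):.1%}")  # 60.0%
```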
The 69% Baseline: What It Reveals
Before diving into the findings, you need to understand the methodology, because it reframes every number that follows.
I run 16 queries per company across three AI platforms:
| Query Type | Count | Purpose |
|---|---|---|
| Category Prompts | 5 | "Best [product category] tools" — No brand mentioned |
| Brand Prompts | 5 | "What is [Company]?" — Direct brand queries |
| Competitive Prompts | 5 | "Alternatives to [Company]" — Comparison queries |
| Description | 1 | Brand overview for sentiment |
The critical insight: Category prompts simulate a user researching without knowing your brand exists. If AI doesn't mention you in these queries, you have zero discovery visibility.
When a company appears in Brand and Competitive queries but NOT in Category queries, it gets cited in 11 out of 16 queries: 5 brand + 5 competitive + 1 description. That's 68.75%, which rounds to 69%.
69% is the baseline. It means AI knows you exist when asked directly, but doesn't recommend you when users search your category.
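Here is that arithmetic as a small sketch. The query mix matches the table above; the field names and the example company are illustrative, not the actual pipeline's schema.

```python
# 16 queries per company; citation rate = fraction of queries that cite it.
QUERY_MIX = {"category": 5, "brand": 5, "competitive": 5, "description": 1}
BASELINE = 11 / 16  # every non-category query succeeds, zero category wins

def citation_rate(wins: dict[str, int]) -> float:
    """wins maps each query type to how many of its queries cited the company."""
    return sum(wins.values()) / sum(QUERY_MIX.values())

def classify(rate: float) -> str:
    if rate > BASELINE:
        return "above baseline: category visibility"
    if rate < BASELINE:
        return "below baseline: brand query failures"
    return "at baseline: known but not recommended"

# Cited in every brand, competitive, and description query, but in zero
# category queries -> lands exactly on the baseline:
wins = {"category": 0, "brand": 5, "competitive": 5, "description": 1}
print(f"{citation_rate(wins):.4f} -> {classify(citation_rate(wins))}")
# 0.6875 -> at baseline: known but not recommended
```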
Key Findings
| Finding | Insight |
|---|---|
| At the 69% Baseline | 95% of companies (ChatGPT) |
| Break Above Baseline | Only 4% on ChatGPT, 17% on Gemini |
| Fall Below Baseline | 7% on Claude (brand query failures) |
| Schema Correlation | -0.01 (essentially zero) |
| GEO Correlation | 0.04 (essentially zero) |
The bottom line: Almost every YC company is invisible in category queries. AI models know they exist, but don't recommend them when users research their product category.
Finding 1: 96% Are Invisible in Category Queries
This is the headline finding.
| Platform | Above 69% | At 69% (Baseline) | Below 69% |
|---|---|---|---|
| ChatGPT | 4 (4%) | 91 (95%) | 1 (1%) |
| Claude | 4 (4%) | 85 (89%) | 7 (7%) |
| Gemini | 16 (17%) | 74 (77%) | 6 (6%) |
On ChatGPT, 95% of YC companies sit at exactly the baseline. They're only visible when you already know to ask about them.
Only 4 companies break through on ChatGPT—meaning AI recommends them in category queries like "best AI sales tools" or "top developer documentation platforms."
The Category Query Winners
These are the companies AI actually recommends when users search their category:
| Company | ChatGPT | Claude | Gemini | What It Means |
|---|---|---|---|---|
| Sully | 100% | 100% | 75% | Recommended in every query |
| Flow.club | 100% | 94% | 94% | Strong category presence |
| Lightly | 100% | 88% | 69% | ChatGPT category winner |
| Weave | 75% | 100% | 75% | Claude category winner |
Sully achieves 100% visibility on both ChatGPT and Claude. When someone asks "best AI healthcare documentation tools," Sully appears every single time.
Lightly is fascinating: 100% on ChatGPT, but only 69% on Gemini. ChatGPT recommends them in category queries; Gemini doesn't. Same product, different AI platforms, completely different visibility.
The Hidden Majority
Meanwhile, companies like Mailmodo (GEO score 88) and Greptile (GEO score 77) sit at exactly 69% across all platforms. They have excellent technical optimization, but AI doesn't recommend them when users search their category.
Finding 2: Gemini is 4x More Generous Than ChatGPT
The platform differences are dramatic:
| Metric | ChatGPT | Claude | Gemini |
|---|---|---|---|
| Above Baseline | 4% | 4% | 17% |
| At Baseline | 95% | 89% | 77% |
| Below Baseline | 1% | 7% | 6% |
Based on 96 YC companies with complete visibility data.
Gemini recommends 4x more companies in category queries than ChatGPT or Claude.
16 companies break through on Gemini versus just 4 on ChatGPT. This likely reflects Gemini's integration with Google Search—it pulls from real-time search results where companies have worked to build SEO visibility.
Gemini's Favorites
| Company | ChatGPT | Claude | Gemini |
|---|---|---|---|
| Paradigm | 69% | 69% | 100% |
| Payflow | 69% | 69% | 100% |
| Vapi | 69% | 69% | 94% |
| Waydev | 69% | 69% | 94% |
| Unsloth | 69% | 69% | 94% |
| Mem0 | 69% | 69% | 94% |
Paradigm and Payflow achieve 100% Gemini visibility while sitting at baseline on ChatGPT and Claude. Whatever they're doing works specifically for Gemini's retrieval mechanism.
Finding 3: Claude is the Most Skeptical
Claude shows the most variance—both below and above baseline:
Claude's Skeptics
| Company | ChatGPT | Claude | Gemini |
|---|---|---|---|
| LeapingAI | 69% | 6% | 63% |
| CloudHumans | 69% | 13% | 69% |
| Noble Stay | 69% | 19% | 69% |
| AgentHub | 69% | 19% | 69% |
LeapingAI gets cited in only 1 out of 16 Claude queries (6%). ChatGPT cites them at the 69% baseline.
This isn't just "Claude doesn't know them." Claude declines to cite them even in brand-specific queries, where ChatGPT and Gemini respond at or near baseline.
Claude's Favorites
Conversely, Claude sometimes breaks above baseline when other platforms don't:
| Company | ChatGPT | Claude | Gemini |
|---|---|---|---|
| Weave | 75% | 100% | 75% |
| Sully | 100% | 100% | 75% |
Weave is Claude's favorite—100% citation rate versus 75% on other platforms.
Finding 4: GEO Score Doesn't Predict Visibility
The correlation between GEO score and citation rate is essentially zero on ChatGPT and Claude, and weak even on Gemini:
| Factor | ChatGPT Correlation | Claude Correlation | Gemini Correlation |
|---|---|---|---|
| GEO Score | 0.04 | 0.10 | 0.26 |
| Schema Markup | -0.01 | 0.06 | 0.16 |
| AI Readability | 0.05 | 0.05 | 0.26 |
Schema markup has a -0.01 correlation with ChatGPT citation. That's not weak—it's nonexistent.
Gemini shows the strongest correlation (0.26 with GEO, 0.16 with schema), but even that's weak by statistical standards. A correlation of 0.26 explains only 7% of the variance.
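For readers who want the math: variance explained is the square of the Pearson coefficient. A minimal sketch, with toy stand-ins for the real 96-company arrays:

```python
# Pearson r and variance explained (r squared); arrays are hypothetical.
from scipy.stats import pearsonr

geo_scores     = [39, 88, 77, 52, 61, 45, 90, 33]
citation_rates = [1.00, 0.69, 0.69, 0.69, 0.75, 0.69, 0.69, 0.94]

r, p = pearsonr(geo_scores, citation_rates)
print(f"r = {r:.2f}, r^2 = {r * r:.1%} of variance explained")

# For the strongest reported figure (Gemini GEO, r = 0.26):
print(f"0.26^2 = {0.26 ** 2:.3f}")  # ~0.068, i.e. ~7% of variance
```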
The Paradox in Action
| Company | GEO Score | Schema | ChatGPT | Claude | Gemini |
|---|---|---|---|---|---|
| Lightly | 39 | 0 | 100% | 88% | 69% |
| Mailmodo | 88 | 100 | 69% | 69% | 69% |
Lightly has a GEO score of 39 with zero schema markup—and achieves 100% ChatGPT visibility.
Mailmodo has a GEO score of 88 with perfect schema markup—and sits at the baseline.
Technical optimization doesn't create visibility. Brand authority does.
Finding 5: Platform Correlation is Low
The three platforms disagree more than they agree:
| Platforms | Correlation |
|---|---|
| ChatGPT vs Claude | 0.39 |
| ChatGPT vs Gemini | 0.19 |
| Claude vs Gemini | 0.17 |
A correlation of 0.17 between Claude and Gemini means the two are nearly independent: a company's visibility on one platform tells you almost nothing about its visibility on the other.
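The pairwise numbers above are just the off-diagonal entries of a correlation matrix over per-company citation rates. A minimal sketch with toy data (not the real dataset):

```python
# Pairwise Pearson correlations between platforms; values are toy data.
import pandas as pd

rates = pd.DataFrame({
    "chatgpt": [0.69, 0.69, 1.00, 0.69, 0.75],
    "claude":  [0.69, 0.06, 1.00, 0.69, 1.00],
    "gemini":  [1.00, 0.63, 0.75, 0.94, 0.75],
})
print(rates.corr(method="pearson"))  # 3x3 matrix; diagonal is 1.0
```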
This fragmentation creates both problems and opportunities:
The problem: No single optimization strategy works across platforms.
The opportunity: Companies that understand platform-specific dynamics can gain disproportionate advantage.
Finding 6: 59% Have Zero Schema Markup
Out of 96 companies, 57 have no structured data on their homepage. No JSON-LD. No Organization schema. Nothing.
And yet, schema's correlation with citation is essentially zero. The companies with perfect schema sit at the same 69% baseline as those with none.
| Schema Tier | Count | ChatGPT | Claude | Gemini |
|---|---|---|---|---|
| High (80+) | 8 | 69.0% | 69.0% | 73.6% |
| Medium (40-79) | 26 | 70.4% | 68.5% | 71.4% |
| Low/None (0-39) | 62 | 69.8% | 67.0% | 71.3% |
The high-schema companies average 69.0% on ChatGPT. The no-schema companies average 69.8%.
Schema markup is table stakes for traditional SEO. It's irrelevant for AI visibility.
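The tier table above is a simple bucket-and-average. A sketch, assuming a schema score and a citation rate per company (values below are illustrative):

```python
# Bucket companies by schema score, then average citation rates per bucket.
import pandas as pd

df = pd.DataFrame({
    "schema_score": [0, 100, 55, 85, 20, 40],
    "chatgpt_rate": [1.00, 0.69, 0.69, 0.69, 0.75, 0.69],
})
tiers = pd.cut(
    df["schema_score"],
    bins=[0, 40, 80, 101],              # Low/None 0-39, Medium 40-79, High 80+
    labels=["low/none", "medium", "high"],
    right=False,                        # left-closed bins: [0,40), [40,80), [80,101)
)
print(df.groupby(tiers, observed=True)["chatgpt_rate"].mean())
```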
Finding 7: What Actually Predicts Visibility
The data combined with external research reveals a stark truth: technical GEO factors don't predict AI visibility. Here's what the correlations actually show:
| Factor | ChatGPT Correlation | Claude Correlation | Gemini Correlation |
|---|---|---|---|
| Schema Markup | -0.01 | 0.06 | 0.16 |
| Technical SEO | -0.13 | 0.01 | -0.06 |
| AI Readability | 0.05 | 0.05 | 0.26 |
| Content Quality | 0.00 | 0.08 | 0.16 |
| GEO Score Overall | 0.04 | 0.10 | 0.26 |
Technical SEO has a negative correlation with ChatGPT visibility: companies with more technical optimization are, if anything, cited slightly less.
So what actually matters?
1. Brand Authority (External Research)
Research from The Digital Bloom analyzing 680 million citations found:
- Brand mentions (all types): 0.664 correlation
- Brand search volume: 0.334 correlation
- Wikipedia presence: 2.8x more likely to be cited
- Multi-platform presence (4+): 2.8x more likely to be cited
These factors dwarf any technical optimization.
2. Platform-Specific Signals
From the data:
- Claude: 0.29 correlation with positive sentiment. Elite companies have 3.6x higher Claude sentiment scores (4.7 vs 1.3).
- Gemini: 0.26 correlation with AI Readability. This is the ONLY platform that responds to technical factors.
- ChatGPT: Near-zero correlation with everything I measured. Training data presence dominates.
3. Entity Coherence
Companies with consistent naming, positioning, and descriptions across the web get cited more. If you're "AI Company Inc." on one platform and "The AI Company" on another, you confuse entity recognition.
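A basic coherence check is straightforward to sketch: normalize the name as it appears in each directory and flag anything that doesn't collapse to the same canonical form. The sources and names below are hypothetical, and a real check would compare positioning and descriptions too.

```python
# Normalize brand names across listings and flag inconsistent variants.
import re

def normalize(name: str) -> str:
    name = name.lower()
    name = re.sub(r"\b(inc|llc|ltd|the)\b\.?", "", name)  # strip legal/filler words
    return re.sub(r"[^a-z0-9]", "", name)                 # strip spaces/punctuation

listings = {
    "crunchbase": "AI Company Inc.",
    "g2":         "The AI Company",
    "linkedin":   "AICompany",
    "directory":  "AI Co.",  # abbreviated variant -> should be flagged
}
canonical = normalize("AI Company Inc.")
for source, name in listings.items():
    status = "ok" if normalize(name) == canonical else "MISMATCH"
    print(f"{source:11s} {name!r:20s} {status}")
```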
4. Content Quality Over Technical Factors
Elite companies (those breaking above baseline on ALL platforms) have:
- 86.7 average Content Quality score (vs 77.5 baseline)
- 67% llms.txt adoption (vs 18% baseline)
- 78.7 AI Readability (vs 70.8 baseline)
Content quality matters. Schema markup doesn't.
What This Means For You
The Bad News
If you're at the 69% baseline, AI doesn't recommend you when users search your category. You're invisible to users who don't already know you exist.
The Good News
The path to breaking through is clear:
1. Invest in earned media. Get mentioned in industry publications, analyst reports, and comparison sites. Third-party mentions have 10x the impact of technical optimization.
2. Build brand search volume. PR, speaking engagements, and community presence create the search demand signals AI models learn from.
3. Optimize for each platform separately.
- For ChatGPT: Build Wikipedia presence and news coverage
- For Claude: Focus on technical documentation and expert credentials
- For Gemini: Develop content on Reddit, Quora, and YouTube
4. Maintain entity coherence. Ensure consistent brand representation across Crunchbase, G2, LinkedIn, and industry directories.
5. Don't over-invest in technical GEO. Schema markup and AI readability are hygiene factors, not differentiators. Spend your time on brand authority instead.
Check Your Current State
Want to see where you stand?
Get your free report at loamly.ai/check
The report shows your citation rate across ChatGPT, Claude, and Gemini—and tells you whether you're at the baseline or breaking through.
Methodology
Sample: 96 YC-backed companies with complete visibility data (of 100 analyzed), primarily B2B SaaS, analyzed January 12-14, 2026
Platforms: ChatGPT (gpt-5-nano + web grounding), Claude (claude-haiku-4-5 + web search), Gemini (gemini-2.5-flash + Google Search)
Query Types:
- 5 Category prompts (unbranded — "Best [category] tools")
- 5 Brand prompts (branded — "What is [Company]?")
- 5 Competitive prompts (comparative — "Alternatives to [Company]")
- 1 Description prompt (sentiment baseline)
Baseline: 69% = 11/16 queries succeeded (5 brand + 5 competitive + 1 description; zero category wins). Above 69% = category query wins. Below 69% = brand query failures.
GEO Scoring: Meta Tags (20%), Schema Markup (25%), Technical SEO (20%), AI Readability (25%), Content Quality (10%)
Correlations: Pearson coefficient. All percentages rounded to nearest integer.
Data Availability: All reports generated using Loamly's free /check tool.
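For concreteness, the composite GEO score is a weighted sum of the five sub-scores (each on a 0-100 scale). A minimal sketch; the example sub-scores are made up:

```python
# Composite GEO score: weighted sum of five 0-100 sub-scores.
WEIGHTS = {
    "meta_tags":       0.20,
    "schema_markup":   0.25,
    "technical_seo":   0.20,
    "ai_readability":  0.25,
    "content_quality": 0.10,
}

def geo_score(subscores: dict[str, float]) -> float:
    assert abs(sum(WEIGHTS.values()) - 1.0) < 1e-9  # weights sum to 100%
    return sum(w * subscores[k] for k, w in WEIGHTS.items())

print(geo_score({
    "meta_tags": 90, "schema_markup": 100, "technical_seo": 85,
    "ai_readability": 80, "content_quality": 75,
}))  # 87.5
```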
What's Next
This is the first in a series of research articles using this dataset:
- Why Claude Recommends Different Companies Than ChatGPT — Deep dive into the 7 companies Claude actively ignores
- The Schema Markup Illusion — Why 0.16 correlation means technical optimization is oversold
- The 4% Club — What separates category query winners from the baseline majority
Check Your AI Visibility
Want to see how ChatGPT, Claude, and Gemini perceive your brand?
Get your free report at loamly.ai/check
No signup required. Real data. No marketing spin.
Last updated: January 21, 2026