I Analyzed 100 YC Companies' AI Visibility. Here's What I Found.

First-ever GEO benchmark report from real data. 96% of companies are invisible in category queries. The 69% baseline reveals who AI models actually recommend.

Marco Di Cesare

January 12, 2026 · 14 min read


I spent the last week analyzing 100 YC-backed companies to understand how they perform across ChatGPT, Claude, and Gemini.

The results revealed something I didn't expect: a hidden baseline that exposes the difference between companies AI models know about versus companies AI models actually recommend.


Update: What the 1,528-Company Dataset Shows

I expanded the analysis to 1,528 companies (Jan 9-21, 2026). The baseline is not a YC artifact. It shows up across the entire dataset.

Average citation rates:

| Platform | Avg Citation Rate |
| --- | --- |
| ChatGPT | 0.703 |
| Claude | 0.643 |
| Gemini | 0.498 |

The baseline still clusters around 0.6875. This is the "known but not recommended" zone at scale:

| Platform | At 0.6875 Baseline | % of Dataset |
| --- | --- | --- |
| ChatGPT | 1,278 | 83.7% |
| Claude | 1,217 | 79.6% |
| Gemini | 686 | 44.9% |

This matters because it means the YC results are not an edge case. The same discovery gap appears across the broader market.


The 69% Baseline: What It Reveals

Before diving into the findings, you need to understand the methodology—because it changes everything.

I run 16 queries per company across three AI platforms:

| Query Type | Count | Purpose |
| --- | --- | --- |
| Category Prompts | 5 | "Best [product category] tools" — no brand mentioned |
| Brand Prompts | 5 | "What is [Company]?" — direct brand queries |
| Competitive Prompts | 5 | "Alternatives to [Company]" — comparison queries |
| Description | 1 | Brand overview for sentiment |

The critical insight: Category prompts simulate a user researching without knowing your brand exists. If AI doesn't mention you in these queries, you have zero discovery visibility.

When a company appears in the brand, competitive, and description queries but NOT in the category queries, it gets cited in 11 out of 16 queries (68.75%), which rounds to 69%.

69% is the baseline. It means AI knows you exist when asked directly, but doesn't recommend you when users search your category.
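To make the arithmetic concrete, here's a minimal sketch of how that number falls out of the 16-query mix (the counts come from the table above; the function is illustrative, not Loamly's actual scoring code):

```python
# Illustrative only: how the 68.75% baseline falls out of the 16-query mix.
CATEGORY, BRAND, COMPETITIVE, DESCRIPTION = 5, 5, 5, 1
TOTAL = CATEGORY + BRAND + COMPETITIVE + DESCRIPTION  # 16 queries per company

def citation_rate(category_hits, brand_hits, competitive_hits, description_hits):
    """Fraction of the 16 queries in which the company was cited."""
    return (category_hits + brand_hits + competitive_hits + description_hits) / TOTAL

# "Known but not recommended": every brand, competitive, and description query
# cites the company, but it never appears in the unbranded category queries.
print(citation_rate(0, BRAND, COMPETITIVE, DESCRIPTION))  # 0.6875 -> reported as 69%
```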


Key Findings

| Finding | Insight |
| --- | --- |
| At the 69% Baseline | 95% of companies (ChatGPT) |
| Break Above Baseline | Only 4% on ChatGPT, 17% on Gemini |
| Fall Below Baseline | 7% on Claude (brand query failures) |
| Schema Correlation | -0.01 (essentially zero) |
| GEO Correlation | 0.04 (essentially zero) |

The bottom line: Almost every YC company is invisible in category queries. AI models know they exist, but don't recommend them when users research their product category.


Finding 1: 96% Are Invisible in Category Queries

This is the headline finding.

| Platform | Above 69% | At 69% (Baseline) | Below 69% |
| --- | --- | --- | --- |
| ChatGPT | 4 (4%) | 91 (95%) | 1 (1%) |
| Claude | 4 (4%) | 85 (89%) | 7 (7%) |
| Gemini | 16 (17%) | 74 (77%) | 6 (6%) |

On ChatGPT, 95% of YC companies sit at exactly the baseline. They're only visible when you already know to ask about them.

Only 4 companies break through on ChatGPT—meaning AI recommends them in category queries like "best AI sales tools" or "top developer documentation platforms."

The Category Query Winners

These are the companies AI actually recommends when users search their category:

| Company | ChatGPT | Claude | Gemini | What It Means |
| --- | --- | --- | --- | --- |
| Sully | 100% | 100% | 75% | Recommended in every query |
| Flow.club | 100% | 94% | 94% | Strong category presence |
| Lightly | 100% | 88% | 69% | ChatGPT category winner |
| Weave | 75% | 100% | 75% | Claude category winner |

Sully achieves 100% visibility on both ChatGPT and Claude. When someone asks "best AI healthcare documentation tools," Sully appears every single time.

Lightly is fascinating: 100% on ChatGPT, but only 69% on Gemini. ChatGPT recommends them in category queries; Gemini doesn't. Same product, different AI platforms, completely different visibility.

The Hidden Majority

Meanwhile, companies like Mailmodo (GEO score 88) and Greptile (GEO score 77) sit at exactly 69% across all platforms. They have excellent technical optimization, but AI doesn't recommend them when users search their category.


Finding 2: Gemini is 4x More Generous Than ChatGPT

The platform differences are dramatic:

| Metric | ChatGPT | Claude | Gemini |
| --- | --- | --- | --- |
| Above Baseline | 4% | 4% | 17% |
| At Baseline | 95% | 89% | 77% |
| Below Baseline | 1% | 7% | 6% |

Based on 96 YC companies with complete visibility data.

Gemini recommends 4x more companies in category queries than ChatGPT or Claude.

16 companies break through on Gemini versus just 4 on ChatGPT. This likely reflects Gemini's integration with Google Search—it pulls from real-time search results where companies have worked to build SEO visibility.

Gemini's Favorites

| Company | ChatGPT | Claude | Gemini |
| --- | --- | --- | --- |
| Paradigm | 69% | 69% | 100% |
| Payflow | 69% | 69% | 100% |
| Vapi | 69% | 69% | 94% |
| Waydev | 69% | 69% | 94% |
| Unsloth | 69% | 69% | 94% |
| Mem0 | 69% | 69% | 94% |

Paradigm and Payflow achieve 100% Gemini visibility while sitting at baseline on ChatGPT and Claude. Whatever they're doing works specifically for Gemini's retrieval mechanism.


Finding 3: Claude is the Most Skeptical

Claude shows the most variance—both below and above baseline:

Claude's Skeptics

| Company | ChatGPT | Claude | Gemini |
| --- | --- | --- | --- |
| LeapingAI | 69% | 6% | 63% |
| CloudHumans | 69% | 13% | 69% |
| Noble Stay | 69% | 19% | 69% |
| AgentHub | 69% | 19% | 69% |

LeapingAI gets cited in only 1 out of 16 Claude queries (6%). ChatGPT cites them at the 69% baseline. Claude actively refuses to recommend them even in brand-specific queries.

This isn't just "Claude doesn't know them." Claude is actively skeptical—it fails to cite them even when asked directly about the company.

Claude's Favorites

Conversely, Claude sometimes breaks above baseline when other platforms don't:

| Company | ChatGPT | Claude | Gemini |
| --- | --- | --- | --- |
| Weave | 75% | 100% | 75% |
| Sully | 100% | 100% | 75% |

Weave is Claude's favorite—100% citation rate versus 75% on other platforms.


Finding 4: GEO Score Doesn't Predict Visibility

The correlation between GEO score and citation rate is essentially zero:

| Factor | ChatGPT Correlation | Claude Correlation | Gemini Correlation |
| --- | --- | --- | --- |
| GEO Score | 0.04 | 0.10 | 0.25 |
| Schema Markup | -0.01 | 0.06 | 0.15 |
| AI Readability | 0.05 | 0.05 | 0.26 |

Schema markup has a -0.01 correlation with ChatGPT citation. That's not weak—it's nonexistent.

Gemini shows the strongest correlation (0.26 with GEO, 0.16 with schema), but even that's weak by statistical standards. A correlation of 0.26 explains only 7% of the variance.
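That variance figure is just the square of the correlation coefficient; as a quick back-of-the-envelope check (not part of the study's pipeline):

```python
r = 0.26  # strongest correlation observed (Gemini vs GEO score)
print(f"Variance explained: {r ** 2:.1%}")  # ~6.8%, i.e. roughly 7%
```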

The Paradox in Action

| Company | GEO Score | Schema | ChatGPT | Claude | Gemini |
| --- | --- | --- | --- | --- | --- |
| Lightly | 39 | 0 | 100% | 88% | 69% |
| Mailmodo | 88 | 100 | 69% | 69% | 69% |

Lightly has a GEO score of 39 with zero schema markup—and achieves 100% ChatGPT visibility.

Mailmodo has a GEO score of 88 with perfect schema markup—and sits at the baseline.

Technical optimization doesn't create visibility. Brand authority does.


Finding 5: Platform Correlation is Low

The three platforms disagree more than they agree:

| Platforms | Correlation |
| --- | --- |
| ChatGPT vs Claude | 0.39 |
| ChatGPT vs Gemini | 0.19 |
| Claude vs Gemini | 0.17 |

A correlation of 0.17 between Claude and Gemini means they're essentially independent. A company successful on one platform has almost no predictive advantage on the other.
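For reference, these cross-platform figures are plain Pearson correlations over per-company citation rates. A minimal sketch of the computation, with hypothetical rates standing in for the real 96-row dataset:

```python
import numpy as np

# Hypothetical per-company citation rates (fraction of 16 queries cited);
# the real analysis uses one row per company across all 96 companies.
claude = np.array([0.69, 0.69, 1.00, 0.13, 0.69, 0.94])
gemini = np.array([0.69, 1.00, 0.75, 0.69, 0.94, 0.69])

# Pearson correlation between the two platforms' citation rates.
r = np.corrcoef(claude, gemini)[0, 1]
print(round(r, 2))
```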

This fragmentation creates both problems and opportunities:

The problem: No single optimization strategy works across platforms.

The opportunity: Companies that understand platform-specific dynamics can gain disproportionate advantage.


Finding 6: 59% Have Zero Schema Markup

Out of 96 companies, 57 have no structured data on their homepage. No JSON-LD. No Organization schema. Nothing.

And yet, schema's correlation with citation is essentially zero. The companies with perfect schema sit at the same 69% baseline as those with none.

| Schema Tier | Count | ChatGPT | Claude | Gemini |
| --- | --- | --- | --- | --- |
| High (80+) | 8 | 69.0% | 69.0% | 73.6% |
| Medium (40-79) | 26 | 70.4% | 68.5% | 71.4% |
| Low/None (0-39) | 62 | 69.8% | 67.0% | 71.3% |

The high-schema companies average 69.0% on ChatGPT. The no-schema companies average 69.8%.

Schema markup is table stakes for traditional SEO. It's irrelevant for AI visibility.
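If you want to replicate the schema check on your own homepage, the detection itself is simple. Here's a minimal sketch of a JSON-LD/Organization check (my own illustration of the kind of test involved, not Loamly's exact implementation):

```python
import json

import requests
from bs4 import BeautifulSoup

def has_organization_schema(url: str) -> bool:
    """Return True if the page embeds JSON-LD declaring an Organization type."""
    html = requests.get(url, timeout=10).text
    soup = BeautifulSoup(html, "html.parser")
    for tag in soup.find_all("script", type="application/ld+json"):
        try:
            data = json.loads(tag.string or "")
        except json.JSONDecodeError:
            continue
        blocks = data if isinstance(data, list) else [data]
        for block in blocks:
            types = block.get("@type") if isinstance(block, dict) else None
            if isinstance(types, str):
                types = [types]
            if types and "Organization" in types:
                return True
    return False

print(has_organization_schema("https://example.com"))
```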


Finding 7: What Actually Predicts Visibility

The data combined with external research reveals a stark truth: technical GEO factors don't predict AI visibility. Here's what the correlations actually show:

| Factor | ChatGPT Correlation | Claude Correlation | Gemini Correlation |
| --- | --- | --- | --- |
| Schema Markup | -0.01 | 0.06 | 0.16 |
| Technical SEO | -0.13 | 0.01 | -0.06 |
| AI Readability | 0.05 | 0.05 | 0.26 |
| Content Quality | 0.00 | 0.08 | 0.16 |
| GEO Score Overall | 0.04 | 0.10 | 0.26 |

Technical SEO has a slightly negative correlation with ChatGPT visibility (-0.13). If anything, companies with more technical optimization were cited marginally less.

So what actually matters?

1. Brand Authority (External Research)

Research from The Digital Bloom analyzing 680 million citations found:

  • Brand mentions (all types): 0.664 correlation
  • Brand search volume: 0.334 correlation
  • Wikipedia presence: 2.8x more likely to be cited
  • Multi-platform presence (4+): 2.8x more likely to be cited

These factors dwarf any technical optimization.

2. Platform-Specific Signals

From the data:

  • Claude: 0.29 correlation with positive sentiment. Elite companies have 3.6x higher Claude sentiment scores (4.7 vs 1.3).
  • Gemini: 0.26 correlation with AI Readability. This is the ONLY platform that responds to technical factors.
  • ChatGPT: Near-zero correlation with everything I measured. Training data presence dominates.

3. Entity Coherence

Companies with consistent naming, positioning, and descriptions across the web get cited more. If you're "AI Company Inc." on one platform and "The AI Company" on another, you confuse entity recognition.
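A rough way to spot-check your own entity coherence (a heuristic sketch, not something from this study) is to compare how your brand name appears across directories:

```python
from difflib import SequenceMatcher

# Hypothetical listings pulled from different directories.
listings = {
    "crunchbase": "AI Company Inc.",
    "g2": "The AI Company",
    "linkedin": "AI Company",
}
canonical = "AI Company"

for source, name in listings.items():
    # Simple string similarity as a proxy for naming consistency.
    score = SequenceMatcher(None, canonical.lower(), name.lower()).ratio()
    flag = "OK" if score >= 0.9 else "inconsistent"
    print(f"{source}: {name!r} -> {score:.2f} ({flag})")
```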

4. Content Quality Over Technical Factors

Elite companies (those breaking above baseline on ALL platforms) have:

  • 86.7 average Content Quality score (vs 77.5 baseline)
  • 67% llms.txt adoption (vs 18% baseline)
  • 78.7 AI Readability (vs 70.8 baseline)

Content quality matters. Schema markup doesn't.
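The llms.txt item is the easiest of these to verify for your own site, since it's just a plain-text file served at the site root. A minimal check (my own sketch):

```python
import requests

def has_llms_txt(domain: str) -> bool:
    """Check whether a site serves a non-empty llms.txt at its root."""
    try:
        resp = requests.get(f"https://{domain}/llms.txt", timeout=10)
    except requests.RequestException:
        return False
    # A soft-404 that returns HTML would slip past this; good enough for a spot check.
    return resp.status_code == 200 and bool(resp.text.strip())

print(has_llms_txt("example.com"))
```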


What This Means For You

The Bad News

If you're at the 69% baseline, AI doesn't recommend you when users search your category. You're invisible to users who don't already know you exist.

The Good News

The path to breaking through is clear:

1. Invest in earned media. Get mentioned in industry publications, analyst reports, and comparison sites. Third-party mentions have 10x the impact of technical optimization.

2. Build brand search volume. PR, speaking engagements, and community presence create the search demand signals AI models learn from.

3. Optimize for each platform separately.

  • For ChatGPT: Build Wikipedia presence and news coverage
  • For Claude: Focus on technical documentation and expert credentials
  • For Gemini: Develop content on Reddit, Quora, and YouTube

4. Maintain entity coherence. Ensure consistent brand representation across Crunchbase, G2, LinkedIn, and industry directories.

5. Don't over-invest in technical GEO. Schema markup and AI readability are hygiene factors, not differentiators. Spend your time on brand authority instead.


Check Your Current State

Want to see where you stand?

Get your free report at loamly.ai/check

The report shows your citation rate across ChatGPT, Claude, and Gemini—and tells you whether you're at the baseline or breaking through.


Methodology

Sample: 100 YC-backed companies, primarily B2B SaaS, analyzed January 12-14, 2026. 96 of the 100 returned complete visibility data and are the basis for the statistics above.

Platforms: ChatGPT (gpt-5-nano + web grounding), Claude (claude-haiku-4-5 + web search), Gemini (gemini-2.5-flash + Google Search)

Query Types:

  • 5 Category prompts (unbranded — "Best [category] tools")
  • 5 Brand prompts (branded — "What is [Company]?")
  • 5 Competitive prompts (comparative — "Alternatives to [Company]")
  • 1 Description prompt (sentiment baseline)

Baseline: 69% = 11/16 queries succeeded (brand, competitive, and description queries only). Above 69% = category query wins. Below 69% = brand query failures.

GEO Scoring: Meta Tags (20%), Schema Markup (25%), Technical SEO (20%), AI Readability (25%), Content Quality (10%)
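For clarity, the GEO score is a weighted sum of those five sub-scores. A sketch of the aggregation (the weights come from the line above; the sub-scores and the 0-100 scale are my assumptions):

```python
# Weights from the methodology; sub-scores (assumed 0-100) are hypothetical.
WEIGHTS = {
    "meta_tags": 0.20,
    "schema_markup": 0.25,
    "technical_seo": 0.20,
    "ai_readability": 0.25,
    "content_quality": 0.10,
}

def geo_score(subscores: dict[str, float]) -> float:
    """Weighted average of the five sub-scores."""
    return sum(WEIGHTS[k] * subscores[k] for k in WEIGHTS)

example = {
    "meta_tags": 80,
    "schema_markup": 0,
    "technical_seo": 70,
    "ai_readability": 75,
    "content_quality": 85,
}
print(geo_score(example))  # 57.25
```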

Correlations: Pearson coefficient. All percentages rounded to nearest integer.

Data Availability: All reports generated using Loamly's free /check tool.


What's Next

This is the first in a series of research articles using this dataset:

  • Why Claude Recommends Different Companies Than ChatGPT — Deep dive into the 7 companies Claude actively ignores
  • The Schema Markup Illusion — Why 0.16 correlation means technical optimization is oversold
  • The 4% Club — What separates category query winners from the baseline majority

Check Your AI Visibility

Want to see how ChatGPT, Claude, and Gemini perceive your brand?

Get your free report at loamly.ai/check

No signup required. Real data. No marketing spin.

Tags: Original Research, GEO, Benchmarks, Data

Last updated: January 21, 2026

Marco Di Cesare

Founder, Loamly
