How to Optimize for Claude and Perplexity (Not Just ChatGPT)
Most GEO advice focuses on ChatGPT. That's a mistake.
ChatGPT dominates AI traffic today (roughly 78% of AI referral share), but Perplexity already accounts for about 15% of that traffic, and Claude users spend 19 minutes per session—nearly 4x the organic search average.
I learned this building Loamly: optimizing for one AI platform doesn't mean you're optimized for all of them. They work differently, cite differently, and value different content characteristics.
What the Loamly Dataset Shows
From 1,528 company reports (Jan 9-21, 2026):
| Platform | Avg Citation Rate |
|---|---|
| ChatGPT | 0.703 |
| Claude | 0.643 |
| Gemini | 0.498 |
Brand authority matters more on Claude than anywhere else:
| Platform | Brand Authority Correlation | GEO Correlation |
|---|---|---|
| ChatGPT | 0.320 | 0.114 |
| Claude | 0.430 | 0.057 |
| Gemini | 0.180 | 0.039 |
This is the core optimization point: Claude is authority-sensitive, Gemini shows weak correlations across the board, and ChatGPT is mostly training-data-driven. Treat them as separate systems.
Why Each Platform Matters
| Platform | Traffic Share | Session Duration | Citation Style |
|---|---|---|---|
| ChatGPT | 77.97% | 10 min | Inconsistent, training-biased |
| Perplexity | 15.10% | Variable | Real-time, heavily cited |
| Claude | 0.17% | 19 min | Depth-focused, context-rich |
| Gemini | 6.40% | Variable | YouTube/multi-modal preference |
Source: SE Ranking AI Traffic Study
The conversion story is more important than volume. AI visitors convert at 11x the rate of organic search. When Claude sends a visitor who stays 19 minutes, that's not a casual browser—that's a qualified lead.
Understanding the Architectural Differences
This is the key insight most marketers miss: these platforms don't work the same way.
ChatGPT: Training Data + Optional Search
ChatGPT primarily generates answers from patterns learned during training on massive datasets. It mentions brands roughly 3x more often than it formally cites them because many responses come from training data without web retrieval.
When ChatGPT does use its built-in web search, it behaves more like Perplexity. But most ChatGPT interactions don't trigger web retrieval.
Implication: Being in ChatGPT's training data matters more than real-time optimization.
Perplexity: Real-Time Search First
Perplexity is built entirely around retrieval-augmented generation (RAG). Every answer goes through live web search first. The platform searches the internet, gathers information, then synthesizes answers with numbered citations.
This means:
- Your content's freshness directly impacts citation likelihood
- Being indexed by Perplexity matters more than training data
- Citations are consistent and traceable
Implication: Real-time optimization works. Update your content regularly.
Claude: Extended Context + Depth
Claude's architectural advantage is its 200,000-token context window—roughly 150,000 words. Claude can process entire research papers, comprehensive guides, and multi-document analyses without chunking.
Claude values:
- Comprehensive, thoroughly researched content
- Sophisticated analysis and expert reasoning
- Content that demonstrates nuance and depth
Implication: Depth wins. Create the most comprehensive resource on your topic.
Platform-Specific Optimization Strategies
Optimizing for Perplexity
Perplexity's real-time web retrieval creates specific optimization opportunities.
1. Content Freshness Is Critical
According to research on Perplexity's citation patterns, content decay begins within 2-3 days of publication. This is dramatically different from traditional SEO, where monthly or quarterly updates suffice.
For Perplexity optimization:
- Audit high-performing pages monthly (a scripted version of this check is sketched after this list)
- Add new data, insights, or sections regularly
- Republish with updated timestamps
- Prioritize pages showing declining visibility
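If you want to script that monthly audit, here's a minimal sketch in Python that flags pages whose sitemap `<lastmod>` date looks stale. The sitemap URL and the 30-day threshold are placeholders, and it assumes your sitemap actually exposes `<lastmod>` values.

```python
# Flag sitemap URLs whose <lastmod> is older than a freshness threshold.
# The sitemap URL and threshold are placeholders; adjust to your audit cadence.
from datetime import datetime, timedelta, timezone
from urllib.request import urlopen
import xml.etree.ElementTree as ET

SITEMAP_URL = "https://example.com/sitemap.xml"  # placeholder
THRESHOLD = timedelta(days=30)
NS = {"sm": "http://www.sitemaps.org/schemas/sitemap/0.9"}

with urlopen(SITEMAP_URL) as resp:
    tree = ET.parse(resp)

now = datetime.now(timezone.utc)
for url in tree.getroot().findall("sm:url", NS):
    loc = url.findtext("sm:loc", namespaces=NS)
    lastmod = url.findtext("sm:lastmod", namespaces=NS)
    if not lastmod:
        print(f"NO LASTMOD        {loc}")
        continue
    modified = datetime.fromisoformat(lastmod.replace("Z", "+00:00"))
    if modified.tzinfo is None:
        modified = modified.replace(tzinfo=timezone.utc)
    age = now - modified
    if age > THRESHOLD:
        print(f"STALE ({age.days:>3} days)  {loc}")
```

Pages the script flags are candidates for the refresh-and-republish pass described above.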
2. The Reddit Phenomenon
This surprised me when I first saw the data: Reddit accounts for 46.7% of Perplexity's citations. That's nearly half.
Why? Perplexity values:
- Real-time community discussions
- Authentic, unfiltered user experiences
- Practical implementation details from actual users
- Peer recommendations over institutional authority
Action items:
- Participate authentically in relevant subreddits
- Monitor Reddit for brand discussions
- Extract Reddit insights into your own content (with attribution)
3. Answer-First Formatting
Pages with FAQ schema appear in AI responses at 3x the rate of equivalent unstructured content.
Structure content with:
- Direct answers in opening paragraphs
- Clear question-answer pairs
- Comparison tables with original data
- Numbered steps for processes
4. Conversational Query Optimization
Perplexity encourages users to ask questions naturally. Optimize for conversational queries, not keyword phrases:
Instead of: "best email marketing software"
Optimize for: "What's the best email marketing software for a small B2B SaaS company?"
Optimizing for Claude
Claude's approach requires different tactics.
1. Go Deeper Than Competitors
Claude's extended context means it can process, and appears to prefer, unusually comprehensive content. Where traditional SEO might reward concise featured-snippet-friendly content, Claude rewards depth and nuance.
Create:
- Guides exceeding typical word counts (3,000-5,000+ words)
- Research with extensive supporting data
- Documentation covering edge cases and variations
- Content demonstrating sophisticated reasoning
2. Author Credentials Matter More
Claude appears to weight author authority heavily. According to research on author bylines, AI systems preferentially cite sources with:
- Detailed author biographies
- Verifiable credentials (LinkedIn, ORCID IDs)
- Professional directory listings
- Transparent editorial review processes
Action: Create dedicated author profile pages with credentials, not just generic "staff writer" bylines.
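To make those credentials machine-readable, Person schema is the usual vehicle. Here's a minimal sketch in Python that emits the JSON-LD for an author profile page; every value below is a placeholder rather than a required field set.

```python
# Emit Person schema JSON-LD for an author profile page.
# All values are placeholders; swap in your author's real details.
import json

author_schema = {
    "@context": "https://schema.org",
    "@type": "Person",
    "name": "Jane Example",                           # placeholder
    "jobTitle": "Head of Growth",
    "url": "https://example.com/authors/jane-example",
    "sameAs": [
        "https://www.linkedin.com/in/jane-example",   # verifiable profile
        "https://orcid.org/0000-0000-0000-0000",      # ORCID iD, if relevant
    ],
    "knowsAbout": ["Generative Engine Optimization", "B2B SaaS marketing"],
}

# Paste the output into a <script type="application/ld+json"> tag on the profile page.
print(json.dumps(author_schema, indent=2))
```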
3. Control Your Crawler Access
Anthropic operates three distinct crawlers with different purposes:
| Bot | Purpose | Respects robots.txt |
|---|---|---|
| ClaudeBot | Training data collection | Yes |
| Claude-User | Real-time user queries | No* |
| Claude-SearchBot | Search result quality | Yes |
*Claude-User ignores robots.txt because users initiated the request.
Strategic choice: if you want to keep your content out of training data while staying visible in live answers, block ClaudeBot and allow Claude-User:
```
User-agent: ClaudeBot
Disallow: /

User-agent: Claude-User
Allow: /
```
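To verify which of these crawlers are actually hitting your site, check your server access logs for their user-agent strings. A rough sketch in Python, assuming the user agent appears somewhere on each log line (the log path is a placeholder):

```python
# Count requests from Anthropic's crawlers in a server access log.
# Assumes each log line includes the user-agent string; the path is a placeholder.
from collections import Counter

LOG_PATH = "/var/log/nginx/access.log"  # placeholder
BOTS = ("ClaudeBot", "Claude-User", "Claude-SearchBot")

counts = Counter()
with open(LOG_PATH, encoding="utf-8", errors="replace") as log:
    for line in log:
        for bot in BOTS:
            if bot in line:
                counts[bot] += 1
                break

for bot in BOTS:
    print(f"{bot}: {counts[bot]} requests")
```

If ClaudeBot keeps appearing after you block it, confirm the robots.txt change actually deployed; if Claude-User never appears, Claude's live answers probably aren't pulling your pages yet.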
Optimizing for Both Simultaneously
The good news: many optimizations benefit multiple platforms.
Schema Markup (Universal)
Implement structured data that helps all AI systems:
```json
{
  "@context": "https://schema.org",
  "@type": "FAQPage",
  "mainEntity": [{
    "@type": "Question",
    "name": "How do I track AI traffic to my website?",
    "acceptedAnswer": {
      "@type": "Answer",
      "text": "Use a tool like Loamly that specifically detects AI referrals, or configure GA4 with custom regex matching for known AI platform domains."
    }
  }]
}
```
Essential schema types (a quick validation sketch follows this list):
- FAQ schema: Question-answer pairs (highest impact)
- Article schema: Publication dates, authors, modification dates
- Organization schema: Brand entity definition
- Person schema: Author credentials and expertise
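A quick way to confirm a page actually exposes these types is to pull its JSON-LD and list the `@type` values. A sketch using the third-party requests and BeautifulSoup packages (the URL is a placeholder):

```python
# List the schema.org @type values a page exposes via JSON-LD.
# Requires: pip install requests beautifulsoup4. The URL is a placeholder.
import json
import requests
from bs4 import BeautifulSoup

URL = "https://example.com/blog/how-to-track-ai-traffic"  # placeholder

html = requests.get(URL, timeout=10).text
soup = BeautifulSoup(html, "html.parser")

types = set()
for tag in soup.find_all("script", type="application/ld+json"):
    try:
        data = json.loads(tag.string or "")
    except json.JSONDecodeError:
        continue
    items = data if isinstance(data, list) else [data]
    for item in items:
        declared = item.get("@type")
        if isinstance(declared, list):
            types.update(declared)
        elif declared:
            types.add(declared)

print("Schema types found:", sorted(types) if types else "none")
```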
Semantic HTML (Universal)
Both Claude and Perplexity benefit from clean document structure:
```html
<article>
  <h1>Main Topic</h1>
  <section>
    <h2>Subtopic</h2>
    <p>Content...</p>
  </section>
</article>
```
Use proper semantic tags:
- `<h1>` through `<h4>` in logical hierarchy
- `<article>`, `<section>`, `<aside>` for structure
- `<ul>` and `<ol>` for lists
- `<table>` for comparison data
Original Data (Universal)
Content with 19 or more data points earns an average of 5.4 citations, versus 2.8 for content with little supporting data.
Create:
- Original research and surveys
- Benchmark reports with primary data
- Case studies with specific metrics
- Comparison tables with tested results
What Actually Differs by Platform
| Factor | Perplexity Priority | Claude Priority |
|---|---|---|
| Freshness | Critical (2-3 day decay) | Moderate |
| Depth | Important | Critical |
| Reddit presence | Very high | Moderate |
| Author credentials | Important | Very high |
| Real-time updates | Critical | Moderate |
| Comprehensive coverage | Important | Critical |
| FAQ structure | High | Moderate |
The Citation Pattern Differences
Understanding how each platform cites helps you structure content appropriately.
Perplexity Citations
Perplexity provides numbered citations linked directly to sources. Users see exactly where information came from and can click through.
This means:
- Citation prominence matters (earlier = more clicks)
- Your source appears in a traceable way
- Getting cited = measurable traffic potential
Claude Citations
Claude's citation behavior is less consistent. It may:
- Reference sources conversationally without explicit links
- Cite from training data without providing URLs
- Provide links when web search is enabled
This means:
- Training data presence matters
- Being a "go-to" authority in your space compounds
- Depth and comprehensiveness build long-term citation likelihood
Measuring Cross-Platform Performance
You can't optimize what you can't measure. Here's how I track performance across platforms:
Manual Prompt Testing
Ask each platform standardized questions about your industry weekly (a scripted version is sketched after these checklists):
- "What's the best [your category] tool?"
- "How do I [problem you solve]?"
- "Compare [your brand] vs [competitor]"
Document:
- Are you cited?
- What position in the citation list?
- What competitors appear?
- What's the sentiment?
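If you'd rather script part of this weekly check, here's a minimal sketch using the OpenAI Python SDK that runs a set of standardized prompts and flags whether your brand is mentioned. The model name, brand, and prompts are placeholders, and the same loop could be pointed at Anthropic's or Google's SDKs for Claude and Gemini.

```python
# Run standardized prompts through the OpenAI API and flag brand mentions.
# Requires: pip install openai, with OPENAI_API_KEY set in the environment.
# Model name, brand, and prompts are placeholders.
from openai import OpenAI

client = OpenAI()
BRAND = "Loamly"  # placeholder: your brand
PROMPTS = [
    "What's the best AI visibility tracking tool?",
    "How do I see which AI platforms send traffic to my website?",
    f"Compare {BRAND} vs its main competitors",
]

for prompt in PROMPTS:
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model name
        messages=[{"role": "user", "content": prompt}],
    )
    answer = response.choices[0].message.content or ""
    status = "MENTIONED" if BRAND.lower() in answer.lower() else "missing"
    print(f"{status:>9}  {prompt}")
```

Treat this as a rough signal only: API responses are generated without the consumer apps' web search and personalization, so they won't exactly match what users see in ChatGPT, Claude, or Perplexity.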
GA4 Referral Tracking
Configure GA4 to track known AI referrers:
perplexity\.ai|claude\.ai|chatgpt\.com|chat\.openai\.com|gemini\.google\.com
Create custom channel groups matching these patterns.
Caveat: This only captures clicks, not citations. You'll miss copy-paste traffic (the majority) and mentions that don't result in visits.
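Before relying on that pattern, it's worth sanity-checking it against sample referrer strings. A small sketch in Python (the sample referrers are made up for illustration):

```python
# Sanity-check the AI-referrer regex against sample referrer strings.
import re

AI_REFERRER = re.compile(
    r"perplexity\.ai|claude\.ai|chatgpt\.com|chat\.openai\.com|gemini\.google\.com"
)

samples = [
    "https://chatgpt.com/",
    "https://www.perplexity.ai/search/some-query",
    "https://claude.ai/chat/abc123",
    "https://gemini.google.com/app",
    "https://www.google.com/",  # should NOT match
]

for referrer in samples:
    verdict = "AI referral" if AI_REFERRER.search(referrer) else "not AI"
    print(f"{verdict:>11}  {referrer}")
```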
AI Visibility Monitoring Tools
Specialized tools track citation frequency across platforms:
- SE Ranking Visible
- BrightEdge Prism
- Semrush AI features
These measure visibility even when users don't click through.
My Honest Take
When I started building Loamly, I thought "AI optimization" was one thing. It's not.
Perplexity rewards freshness and real-time relevance. Claude rewards depth and authority. ChatGPT sits somewhere in between, biased by its training data.
The companies winning at GEO aren't optimizing for "AI" generically—they're building platform-specific strategies while maintaining content quality that works everywhere.
Start with:
- FAQ schema on your most important pages
- Author credentials and bylines on all content
- Update cadence appropriate to your content type
- Original data and research where possible
Then add platform-specific tactics based on where your audience is.
See Your AI Visibility Across Platforms
Not sure where you're appearing—and where you're invisible?
Start with a free Loamly AI visibility check to see how you show up across ChatGPT, Perplexity, Claude, and Gemini.
Different platforms, different opportunities.
Last updated: January 21, 2026