What We Measure — 147 Factors, 12 Categories, One AI Visibility Score
Ansly goes deeper than any SEO tool. We check if AI platforms can find your content, understand your brand, extract citable answers, and actually mention you when users ask.
How Categories Are Weighted
Each category contributes to your overall AI Visibility Score based on its impact on AI citation probability.
| Category | Factors | Weight |
|---|---|---|
| AI Crawler Access | 16 | 12% |
| Entity Identity | 12 | 15% |
| Schema Structured Data | 15 | 10% |
| Content Freshness | 16 | 8% |
| AI Presence Testing | 12 | 20% |
| AI Extractability | 9 | 10% |
| Content Structure | 14 | 8% |
| Semantic Completeness | 11 | 5% |
| Evidence Quality | 10 | 5% |
| Trust & E-E-A-T | 8 | 4% |
| Multimodal | 7 | 2% |
| Technical Health | 7 | 1% |
| Total | 147 | 100% |
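As an illustration of how the weights above combine, here is a minimal sketch (not Ansly's production scoring code) that assumes each category has already been scored 0-100:

```python
# Category weights from the table above, expressed as fractions summing to 1.0.
WEIGHTS = {
    "AI Crawler Access": 0.12,
    "Entity Identity": 0.15,
    "Schema Structured Data": 0.10,
    "Content Freshness": 0.08,
    "AI Presence Testing": 0.20,
    "AI Extractability": 0.10,
    "Content Structure": 0.08,
    "Semantic Completeness": 0.05,
    "Evidence Quality": 0.05,
    "Trust & E-E-A-T": 0.04,
    "Multimodal": 0.02,
    "Technical Health": 0.01,
}

def overall_score(category_scores: dict[str, float]) -> float:
    """Weighted sum of per-category scores (each 0-100)."""
    return round(sum(category_scores[name] * w for name, w in WEIGHTS.items()), 1)
```

Because the weights sum to 1.0, a site scoring 80 in every category gets an overall score of 80.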
12 Categories of AI Visibility
Each category reflects a different dimension of how AI platforms perceive your content.
Can AI Bots Access Your Content?
Before AI can cite your content, it needs to access it. Most AI platforms — ChatGPT, Perplexity, Google AI Overviews — use web crawlers to discover and index content. If your robots.txt blocks these bots, if your content requires JavaScript to render, or if you haven't signalled AI-readiness through standards like llms.txt, AI platforms will never see your content. This category checks whether the front door is open.
Key Factors We Check
- AI Agents Allowed — Are GPTBot, ClaudeBot, PerplexityBot, Diffbot, and Applebot-Extended permitted?
- llms.txt Present — Do you have an llms.txt file telling AI systems what content to read?
- Render Compatibility — Is your content visible in raw HTML without JavaScript?
- Response Time — Can your server respond fast enough for AI crawlers?
- Robots.txt & Canonical — Are your technical foundations correct?
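For example, a robots.txt that explicitly permits the AI crawlers listed above might include entries like these (directives vary by site; treat this as an illustration, not a recommended configuration):

```
User-agent: GPTBot
Allow: /

User-agent: ClaudeBot
Allow: /

User-agent: PerplexityBot
Allow: /

User-agent: Diffbot
Allow: /

User-agent: Applebot-Extended
Allow: /
```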
Does AI Recognise Your Brand?
AI platforms need to know who you are before they can recommend you. Entity identity measures whether your brand exists in the knowledge systems AI relies on — Google's Knowledge Graph, Wikidata, Wikipedia. Research shows brand search volume has the strongest correlation (0.334) with AI mentions — stronger than backlinks or domain authority.
Key Factors We Check
- Wikipedia Presence — Wikipedia accounts for 22% of LLM training data and 7.8% of ChatGPT citations.
- Wikidata Presence — The structured knowledge base behind Wikipedia, directly consumed by AI systems.
- Google Knowledge Graph Confidence — How confidently does Google recognise your brand?
- Brand Search Volume — The #1 predictor of AI citations.
- Cross-Platform Mentions — Are you mentioned on Reddit, Crunchbase, industry sites?
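Wikidata exposes a public search API (the wbsearchentities module) that makes a presence check straightforward. A minimal sketch of building such a query (our own illustration, not Ansly's checker — a non-empty `search` array in the JSON response indicates an entity exists):

```python
from urllib.parse import urlencode

WIKIDATA_API = "https://www.wikidata.org/w/api.php"

def wikidata_search_url(brand: str) -> str:
    """Build a wbsearchentities query URL for checking whether a brand
    has a Wikidata entity."""
    params = {
        "action": "wbsearchentities",
        "search": brand,
        "language": "en",
        "format": "json",
    }
    return f"{WIKIDATA_API}?{urlencode(params)}"
```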
Can AI Pull Citable Answers From Your Content?
AI platforms don't just read your content — they need to extract specific passages to cite in their responses. Research on Google AI Overviews shows the ideal citable passage is 134-167 words, self-contained (understandable without context), and directly answers a question. Pages with extractable summary blocks get cited more.
Key Factors We Check
- Self-Contained Answer Passages — Do your paragraphs make sense on their own?
- Summary & Takeaway Blocks — TL;DR sections and key takeaways are easy to extract.
- Table Data Extractability — Comparison tables get cited more than prose.
- Question-Answer Pair Density — Question headings map to user queries.
- Claim Specificity — Specific numbers get cited; vague claims don't.
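The 134-167-word window mentioned above can be turned into a simple screening check. A hypothetical sketch:

```python
def is_citable_length(passage: str, low: int = 134, high: int = 167) -> bool:
    """True when a passage falls inside the word-count window that
    research on Google AI Overviews associates with citable passages."""
    return low <= len(passage.split()) <= high
```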
Is Your Content Structured for AI Comprehension?
AI systems parse your content structurally — heading hierarchy, paragraph length, section organisation. Well-structured content with clear heading hierarchy, readable sentence lengths, and logical flow helps AI understand your topic and extract relevant information. Poor structure forces AI to guess at relationships between ideas.
Key Factors We Check
- Heading Hierarchy — Proper H2 → H3 → H4 nesting helps AI understand relationships.
- Readability — Short sentences (under 25 words) and digestible paragraphs.
- Content Sections — Clearly defined sections with headings, not walls of text.
- Word Count — Sufficient content depth. Thin pages rarely get cited.
- Internal Linking — Links to related pages signal topical authority.
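A heading-hierarchy check like the one above can be sketched in a few lines. This hypothetical version flags any heading that jumps more than one level deeper than its predecessor (moving back up is always fine):

```python
def hierarchy_ok(levels: list[int]) -> bool:
    """True if headings never skip a level going deeper.
    H2 -> H4 with no H3 in between would fail."""
    if not levels:
        return True
    prev = levels[0]
    for lvl in levels[1:]:
        if lvl > prev + 1:  # jumped deeper by more than one level
            return False
        prev = lvl
    return True
```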
Do You Cover Everything AI Users Ask About?
When someone asks ChatGPT a question, the platform decomposes it into 3-8 sub-queries and looks for sources that answer multiple sub-queries. Pages that cover the complete semantic space of a topic get selected more. Google AI Overviews sources with 15+ recognised entities per 1,000 words have 4.8x higher selection rates.
Key Factors We Check
- Semantic Completeness Score — How thoroughly does your page cover its topic?
- Query Fan-Out Coverage — How many sub-questions does your content answer?
- Comparison Content — 'Best X tools' and 'X vs Y' pages are highly cited.
- Related Entity Coverage — Do you reference expected competitors and concepts?
- Content Uniqueness — Does your content offer unique perspectives?
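The entity-density figure above (15+ recognised entities per 1,000 words) is simple to compute once entities have been counted. A minimal sketch:

```python
def entity_density(entity_count: int, word_count: int) -> float:
    """Recognised entities per 1,000 words. The research cited above
    associates 15+ with 4.8x higher selection rates."""
    return entity_count / word_count * 1000
```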
Are Your Claims Backed by Credible Sources?
AI platforms cross-reference claims before citing them. Content that cites authoritative institutions (.gov, .edu, peer-reviewed journals) is more trusted. Original data, statistical claims, and properly attributed research increase citation probability by 22%. AI systems check whether what you say is supported.
Key Factors We Check
- Tier 1 Source Citations — Links to .gov, .edu, and peer-reviewed journals.
- Citation Recency — Are your statistics and data references current?
- Statistical Claim Density — Content with specific numbers gets cited more.
- Original Data — Pages with original research get cited disproportionately.
- Author Expertise — Author credentials strengthen trust.
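A rough tier-1 check can be done by domain suffix alone (peer-reviewed journals would need a curated domain list on top of this). A hypothetical sketch:

```python
from urllib.parse import urlparse

TIER1_SUFFIXES = (".gov", ".edu")

def is_tier1(url: str) -> bool:
    """Rough tier-1 citation check by hostname suffix."""
    host = urlparse(url).hostname or ""
    return host.endswith(TIER1_SUFFIXES)
```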
Does Your Markup Help AI Understand Your Content?
Schema markup (JSON-LD structured data) is a direct communication channel between your website and AI systems. It explicitly tells AI: this is our organisation name, this is the article's author, these are our FAQ answers. Pages with complete schema markup have significantly higher AI visibility because there's no guesswork.
Key Factors We Check
- JSON-LD Present — Do you have any structured data at all?
- Article Completeness — headline, author, datePublished, dateModified all present?
- FAQPage Schema — FAQ sections marked up for direct extraction.
- WebSite SearchAction — Tells AI systems how to search your site.
- No Schema Conflicts — Multiple conflicting types confuse AI systems.
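A complete Article block in JSON-LD looks like the following (placeholder values for illustration):

```json
{
  "@context": "https://schema.org",
  "@type": "Article",
  "headline": "Example headline",
  "author": { "@type": "Person", "name": "Jane Doe" },
  "datePublished": "2025-01-15",
  "dateModified": "2025-06-01"
}
```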
Can AI Understand Your Images, Videos, and Media?
AI systems are increasingly multimodal — they process text, images, and video together. But they still rely heavily on text descriptions to understand media. Images without alt text are invisible to AI. Videos without transcripts are skipped. Text-based descriptions allow AI to comprehend your full content.
Key Factors We Check
- Image Alt Text Coverage — What percentage of images have descriptive alt text?
- Main Content Landmark — Is content wrapped in a semantic <main> element?
- Media Captions & Transcripts — Do videos have captions and transcripts?
- Transcript Relevance — Are transcripts on-topic and adding value?
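Alt-text coverage is easy to measure from raw HTML. A minimal sketch using Python's standard-library parser (our illustration, not Ansly's scanner):

```python
from html.parser import HTMLParser

class AltCoverage(HTMLParser):
    """Count <img> tags and how many carry a non-empty alt attribute."""
    def __init__(self):
        super().__init__()
        self.images = 0
        self.with_alt = 0

    def handle_starttag(self, tag, attrs):
        if tag == "img":
            self.images += 1
            alt = dict(attrs).get("alt") or ""
            if alt.strip():
                self.with_alt += 1

def alt_coverage(html: str) -> float:
    """Fraction of images with descriptive alt text (1.0 if no images)."""
    parser = AltCoverage()
    parser.feed(html)
    return parser.with_alt / parser.images if parser.images else 1.0
```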
Is Your Content Current?
AI platforms strongly prefer fresh content. Perplexity's content decay rate is 2-3 days for time-sensitive topics. Google AI Overviews prioritise recently updated sources. Stale content with outdated statistics or no visible update dates gets deprioritised. Multiple date signals need to be consistent.
Key Factors We Check
- Visible Date — Is there a clear publication or last-updated date?
- Schema Dates — datePublished and dateModified in Article schema.
- Date Consistency — Do HTTP headers, schema, and visible dates agree?
- Temporal Claim Accuracy — Are time-sensitive claims still accurate?
- Update Cadence — Evidence of regular content maintenance.
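The consistency check among date signals can be sketched as follows (a hypothetical simplification, assuming the HTTP header, schema, and visible dates have already been parsed):

```python
from datetime import date

def dates_consistent(visible: date, schema: date, header: date,
                     tolerance_days: int = 1) -> bool:
    """True when all three date signals agree within a small tolerance."""
    signals = [visible, schema, header]
    return (max(signals) - min(signals)).days <= tolerance_days
```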
Does Your Site Demonstrate Expertise and Authority?
Google's E-E-A-T framework (Experience, Expertise, Authoritativeness, Trustworthiness) directly influences AI citation decisions. Research shows 96% of Google AI Overview citations come from sources with strong E-E-A-T signals. This means identifiable authors with credentials and clear organisational identity.
Key Factors We Check
- Core Trust Pages — About, Contact, Privacy, Terms accessible from every page.
- Organization Schema — Clear organisational identity with consistent branding.
- External Reviews — Third-party review signals (G2, Trustpilot, etc.).
- AI Training Policy — Do you disclose your policy on AI training data usage?
- HTTPS — Basic security as a trust foundation.
Do AI Platforms Actually Mention Your Brand?
This is the ultimate measure. Everything else contributes to one outcome: does AI actually mention and cite your brand? This category uses live testing — we send real queries to ChatGPT and Perplexity and analyse the responses. We check if your brand is mentioned, how prominently, and whether your URL is cited.
Key Factors We Check
- Brand Mention Rate — In what percentage of relevant queries does AI mention you?
- Citation Rate — Does AI cite your website URL, not just mention your name?
- Mention Position — Are you mentioned first, in the top 3, or buried at the end?
- Cross-Platform Consistency — Mentioned on ChatGPT AND Perplexity?
- Share of Voice — How often are you mentioned versus competitors?
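Share of voice reduces to a simple ratio once mentions have been counted across responses. A minimal sketch (illustrative only):

```python
def share_of_voice(mentions: dict[str, int], brand: str) -> float:
    """Brand mentions as a fraction of all brand mentions counted
    across the tested AI responses."""
    total = sum(mentions.values())
    return mentions.get(brand, 0) / total if total else 0.0
```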
Does Your Site Meet Basic Performance Standards?
While AI crawlers care less about performance than human visitors, basic technical health still matters. Extremely slow sites may time out before AI crawlers finish. Poor search visibility means AI training data may not include your content. This category carries the lowest weight but shouldn't be ignored.
Key Factors We Check
- Desktop Performance Score — PageSpeed Insights desktop score.
- Search Visibility — Healthy Google Search Console metrics.
- Internal Link Strength — Strong internal linking signals site authority.
The Feature No Other Tool Has — Live AI Platform Testing
Most AI visibility tools analyse your website and guess whether AI will cite you. Ansly doesn't guess. We send real queries to ChatGPT and Perplexity — the same questions your customers ask — and show you exactly what these platforms say about your brand.
1. We generate 8-12 prompts based on your brand.
2. We send those prompts to ChatGPT and Perplexity.
3. We analyse the responses for mentions and citations.
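The final step, analysing responses for brand mentions, can be sketched as follows (a hypothetical simplification; the real analysis would also extract citations and mention positions):

```python
def analyse_responses(brand: str, responses: list[str]) -> dict:
    """Check each AI response for a brand mention and report the rate.
    Assumes prompt generation and platform calls already produced `responses`."""
    if not responses:
        return {"mention_rate": 0.0}
    mentioned = sum(brand.lower() in r.lower() for r in responses)
    return {"mention_rate": mentioned / len(responses)}
```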
Frequently Asked Questions
What exactly does Ansly measure?
Ansly analyses 147 individual factors grouped into 12 categories. Each factor is scored as pass, partial, or fail, and every result includes a specific recommendation for improvement. The factors range from technical checks (can AI crawlers access your page?) to content analysis (are your paragraphs self-contained and citable?) to live platform testing (does ChatGPT actually mention your brand?).
Which AI platforms do you test against?
Currently we test against ChatGPT (via OpenAI) and Perplexity, the two most widely used AI search platforms. We're adding Claude and Google AI Overviews in upcoming releases. For each platform, we send real queries and analyse the live responses.
How is this different from a traditional SEO audit?
Traditional SEO audits check factors that affect search engine rankings — backlinks, keyword density, page speed, mobile-friendliness. Ansly checks factors that affect whether AI platforms cite your content — entity recognition, passage extractability, knowledge graph presence, semantic completeness, and actual AI platform responses. There is some overlap (structured data, content quality), but roughly 60% of Ansly's factors are unique to AI Visibility.
How often should I scan my site?
We recommend scanning after any significant content update and at minimum monthly. AI platforms continuously update their knowledge, and your competitors are constantly publishing new content. Regular scanning helps you track improvements and catch any regressions in your AI Visibility Score.
Can I scan my competitors' sites?
Yes. You can scan any publicly accessible URL. Many users scan their own site alongside 2-3 competitors to benchmark their AI Visibility and identify gaps. Our comparison report feature shows side-by-side category breakdowns.
See Your Score Across All 12 Categories
Free scan. 147 factors. Results in 90 seconds.