Methodology · How Rankxa works

The way AI search
is actually measured.

Four engines. Three runs each. Six audit dimensions. One ranked action queue. Here's exactly what happens between the moment you sign up and the moment you start outranking competitors in AI answers.

01
Step 1 · Capture the prompts that matter

We mine your industry's buying-intent prompts.

Most analytics tools start with keywords. We start with prompts: the actual sentences your buyers type into ChatGPT, Claude, Gemini, and Google AI Overviews. We pull them from search-volume data, classify them by intent (transactional, commercial, informational), and surface the ones where you have a real shot at being recommended.

What goes in

Your domain, your industry, and a one-line description of what you do. That's it. We bootstrap the rest from public data sources.

  • Google Ads keyword volume + DataForSEO trend data
  • An LLM-classified intent label on every prompt (commercial / transactional / informational / navigational)
  • 500+ prompts per industry by default; you can add more or trim down
  • Re-classified weekly so emerging prompts get on the list
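The intake step above boils down to filtering a large prompt pool by intent and volume. A minimal sketch of that filter, assuming a simple in-memory record (field names, the intent vocabulary, and the volume threshold are illustrative, not Rankxa's actual pipeline):

```python
from dataclasses import dataclass

@dataclass
class Prompt:
    text: str
    intent: str          # "commercial" | "transactional" | "informational" | "navigational"
    monthly_volume: int  # from keyword-volume data

def buying_intent(prompts: list[Prompt], min_volume: int = 100) -> list[Prompt]:
    """Keep high-volume prompts with purchase intent, biggest opportunity first."""
    keep = {"commercial", "transactional"}
    hits = [p for p in prompts if p.intent in keep and p.monthly_volume >= min_volume]
    return sorted(hits, key=lambda p: p.monthly_volume, reverse=True)
```

Weekly re-classification would simply re-run the intent labeler over the pool and re-apply this filter, so emerging prompts surface automatically.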
PROMPT INTAKE · LIVE
+ best digital marketing agency for SaaS · commercial · 2.4k/mo
+ hire growth marketing consultant · transactional · 880/mo
+ SEO vs. AI search agency comparison · commercial · 410/mo
+ what is AI SEO · informational · 6.1k/mo
~ BusySeed reviews · navigational · 60/mo
+ top NYC marketing agencies 2026 · commercial · 1.2k/mo
+ 512 more prompts classified…
02
Step 2 · Run the prompts on every engine

Three runs each, four engines, every prompt.

Every active prompt is fired at ChatGPT, Claude, Gemini, and Google AI Overviews three times. Three runs, not one — because LLM answers vary, and a single sample is noise. We aggregate the runs into a stable share-of-voice and average position per platform.

What we measure on each run

Not just whether you appear, but the position the engine gave you and which sources it cited. Citation domains feed the audit step.

  • Brand position in the answer (1st mentioned, 2nd, etc.)
  • Whether you're mentioned at all (the visibility floor)
  • Every cited source domain — we use these for the citation audit later
  • Sentiment of the mention (positive, neutral, negative)
  • Brand traits the engine attributes to you, mined for patterns over time
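Collapsing three runs into a stable share-of-voice and average position is straightforward arithmetic: the fraction of runs where the brand appears, plus the mean position across the runs where it does. A rough sketch, assuming a simple per-run record (the field names are illustrative; Rankxa's internal format is not public):

```python
from statistics import mean

def aggregate(runs: list[dict]) -> dict:
    """Collapse repeated runs on one engine into stable metrics.

    Each run is {"mentioned": bool, "position": int | None}.
    share_of_voice is the fraction of runs with a mention;
    avg_position is averaged over mentioned runs only.
    """
    mentions = [r for r in runs if r["mentioned"]]
    share_of_voice = len(mentions) / len(runs)
    avg_position = mean(r["position"] for r in mentions) if mentions else None
    return {"share_of_voice": share_of_voice, "avg_position": avg_position}
```

With the ChatGPT runs shown below (positions 2, 3, 2), this yields a share of voice of 1.0 and an average position of about 2.3, which is why one sample is noise but three are a signal.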
PROMPT · "best digital marketing agency for SaaS"
ChatGPT · run 1/3 · pos #2 · cited yourdomain.com
ChatGPT · run 2/3 · pos #3 · cited 4 sources
ChatGPT · run 3/3 · pos #2 · cited yourdomain.com
Claude · run 1/3 · pos #5 · cited webfx.com
Claude · run 2/3 · not mentioned
Gemini · run 1/3 · pos #1 · AIO panel
Google AIO · live · pos #2 · AIO source
03
Step 3 · Audit the AI ranking signals

Six dimensions, weighted by what AI engines actually use.

Every site we track gets scored across the same six dimensions. The weighting comes from a year of correlating audit scores against actual AI rank movement. No vanity metrics. No backlink-count dressing.

15% weight
Technical

llms.txt declarations, robots.txt rules, render parity, response time, status codes the crawlers see.

20% weight
Content

Depth, freshness, internal linking, presence of long-form content, case studies, whitepapers. AI answers prefer canonical sources.

20% weight
E-E-A-T

Experience, expertise, authoritativeness, trust. Author bylines, credentials, "About" depth, reviews, social proof, location signals.

10% weight
Schema

Organization, WebSite, FAQPage, HowTo, Review schema. The structured data that turns ambiguous text into machine-readable entities.

20% weight
Citation

Breadth and quality of third-party domains that mention your brand. AI engines disproportionately reward entities cited by independent sources.

15% weight
Platform

Real, observed AI visibility across ChatGPT, Claude, Gemini, and AIO. Closes the loop between what you do and what shows up.
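The six weights above (15 + 20 + 20 + 10 + 20 + 15) sum to 100%, so the overall audit score is just a weighted average of the per-dimension scores. A minimal sketch of that computation, assuming each dimension is scored 0-100 (the dictionary keys are illustrative shorthand):

```python
# Weights as stated in the methodology; they sum to 1.0.
WEIGHTS = {
    "technical": 0.15,
    "content": 0.20,
    "eeat": 0.20,
    "schema": 0.10,
    "citation": 0.20,
    "platform": 0.15,
}

def composite_score(dimension_scores: dict[str, float]) -> float:
    """Weighted sum of 0-100 dimension scores -> overall 0-100 audit score."""
    return sum(WEIGHTS[d] * dimension_scores[d] for d in WEIGHTS)
```

For example, a site scoring 100 everywhere except a missing schema layer (schema = 0) lands at 90, because schema carries only 10% of the weight.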

04
Step 4 · Turn the gap into actions

Every gap becomes a ranked, deliverable task.

Audit findings + competitor delta + opportunity prompts collapse into a single ranked queue. Each task names the dimension it lifts, the size of the lift, and ships with a generated deliverable so your team can execute, not interpret.

What you actually do next

Pick a task, click "Generate", paste the output. The system tracks completion and re-runs the relevant audit signals so you see the lift land.

  • Schema generator: JSON-LD blocks ready to drop into <head>
  • Brief generator: AI-friendly content briefs with target prompts and competitor mentions
  • Outreach planner: the 5-10 specific publications worth pursuing for citation lift, with angle pitches
  • Trust badge: the live embed that proves your AI rank to your own visitors
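The schema-generator deliverable amounts to emitting a JSON-LD block you paste into a `<script type="application/ld+json">` tag in `<head>`. A minimal sketch of generating a FAQPage block per the schema.org vocabulary (this is a generic illustration, not Rankxa's generator):

```python
import json

def faq_jsonld(pairs: list[tuple[str, str]]) -> str:
    """Emit a schema.org FAQPage JSON-LD block from (question, answer) pairs."""
    data = {
        "@context": "https://schema.org",
        "@type": "FAQPage",
        "mainEntity": [
            {
                "@type": "Question",
                "name": question,
                "acceptedAnswer": {"@type": "Answer", "text": answer},
            }
            for question, answer in pairs
        ],
    }
    return json.dumps(data, indent=2)
```

The output is exactly the kind of structured data the Schema audit dimension checks for: it turns free-text Q&A into machine-readable entities an engine can cite.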
ACTION QUEUE · SORTED BY LIFT
[1] Add FAQPage schema to /pricing · +8 audit pts
[2] Publish "X vs Y" comparison page · +6 audit pts
[3] Get featured on industry digest · +5 audit pts
[4] Refresh case study #3 · +4 audit pts
[5] Add llms.txt · +3 audit pts
+ 14 more ranked tasks…
AI engines · 4 · ChatGPT, Claude, Gemini, AIO
Runs per prompt · 3 · stable share of voice
Audit dimensions · 6 · weighted, not vanity
First scan · <90s · dashboard live before you finish coffee
Free · No card · No call

Run your own scan.

Drop your domain. Watch all four engines answer. See exactly where you fall in the ranking.