This data reflects which brands (yours and competitors’) get mentioned in AI search answers across Conductor’s supported AI search engines. For an LLM, this is the “awareness” layer: it answers “who’s showing up, and how often?” with market share, share of voice, and breakdowns by topic, persona, intent, and search engine. It’s essential for competitive positioning and for identifying where your brand is invisible.
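The share-of-voice idea above can be sketched with a few lines of Python. The brand names and counts here are purely illustrative, not real Conductor output:

```python
from collections import Counter

# Hypothetical mention counts per brand across a set of AI answers.
mentions = Counter({"YourBrand": 120, "CompetitorA": 200, "CompetitorB": 80})

total = sum(mentions.values())

# Share of voice: each brand's mentions as a fraction of all tracked mentions.
share_of_voice = {brand: count / total for brand, count in mentions.items()}
# e.g. YourBrand: 120 / 400 = 0.30
```

The same counts can be sliced by topic, persona, intent, or engine before computing the ratio, which is how the breakdowns described above are produced.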
This data reflects which URLs and domains AI engines cite as sources when answering prompts. Where brand data answers “am I being talked about?”, Citations answers “am I being trusted as a source?” For an LLM, this is the “authority” layer. It’s critical for diagnosing why a brand might be mentioned frequently but not driving traffic, and for URL-level drill-down into which specific pages earn citations for which prompts. This is where content strategy recommendations become actionable.
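The “mentioned but not trusted” diagnosis can be sketched by joining mention and citation data per prompt. The record shape, field names, and domains below are assumptions for illustration, not the actual Conductor schema:

```python
# Hypothetical per-prompt records: which brands an answer mentioned
# and which URLs it cited as sources.
answers = [
    {"prompt": "best retirement plans", "mentioned": ["YourBrand"],
     "cited": ["competitor.com/guide"]},
    {"prompt": "401k rollover steps", "mentioned": ["YourBrand"],
     "cited": ["yourbrand.com/rollover"]},
    {"prompt": "ira vs 401k", "mentioned": ["YourBrand"],
     "cited": ["competitor.com/ira"]},
]

# Prompts where the brand is mentioned but none of its own pages are cited:
# awareness without authority, i.e. visibility that drives no traffic.
mention_without_citation = [
    a["prompt"] for a in answers
    if "YourBrand" in a["mentioned"]
    and not any("yourbrand.com" in url for url in a["cited"])
]
```

Inverting the join (grouping cited URLs by prompt) gives the URL-level drill-down described above.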
This data captures the quality of brand mentions: positive, neutral, negative—plus category-level breakdowns (quality, price, ethics, experience, etc.) and source attribution (which domains are driving which sentiment). For an LLM, this turns raw mention counts into narrative: you can surface actual quotes, identify reputation risks, and explain why a brand’s perception is shifting. It’s indispensable for PR and reputation analysis, not just visibility counting.
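A category-and-source rollup of sentiment-tagged mentions can be sketched as below. The records and field names are illustrative assumptions, not the real data shape:

```python
from collections import Counter, defaultdict

# Hypothetical sentiment-tagged mentions with a category and a source domain.
mentions = [
    {"sentiment": "positive", "category": "quality", "source": "reviewsite.com"},
    {"sentiment": "negative", "category": "price", "source": "forum.example"},
    {"sentiment": "negative", "category": "price", "source": "forum.example"},
    {"sentiment": "neutral", "category": "experience", "source": "blog.example"},
]

# Category-level breakdown: how sentiment splits within each category.
by_category = defaultdict(Counter)
for m in mentions:
    by_category[m["category"]][m["sentiment"]] += 1

# Source attribution for reputation risk: which domains drive negative sentiment.
negative_sources = Counter(
    m["source"] for m in mentions if m["sentiment"] == "negative"
)
```

From here, pulling the underlying quotes for the top negative source is what turns a count into a reputation narrative.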
This is the metadata layer in your data: what topics, prompts, brands, competitors, personas, intents, locales, and search engines you are tracking for a given account. For an LLM, this information grounds queries, preventing hallucinations and letting the model resolve fuzzy user references (“my UK brand,” “the retirement topic”) into the exact identifiers needed for data queries. Without it, every downstream query could be filtering on data that doesn’t exist.
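The grounding step can be sketched as a lookup against the tracked configuration that fails loudly on anything untracked or ambiguous. All names and identifiers here are hypothetical:

```python
# Hypothetical tracked-configuration snapshot for one account.
tracked = {
    "brands": {"YourBrand UK": "brand_17", "YourBrand US": "brand_18"},
    "topics": {"Retirement Planning": "topic_42", "Small Business Loans": "topic_43"},
}

def resolve(kind: str, fuzzy: str) -> str:
    """Resolve a fuzzy user reference to exactly one tracked identifier,
    raising instead of letting a query filter on data that doesn't exist."""
    matches = [ident for name, ident in tracked[kind].items()
               if fuzzy.lower() in name.lower()]
    if len(matches) != 1:
        raise ValueError(f"{fuzzy!r} matched {len(matches)} tracked {kind}")
    return matches[0]

# "the retirement topic" and "my UK brand" become concrete identifiers.
topic_id = resolve("topics", "retirement")
brand_id = resolve("brands", "UK")
```

The key design choice is refusing to guess: an ambiguous or unmatched reference raises, so every downstream query is guaranteed to filter on identifiers the account actually tracks.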