Here’s a question most AEO dashboards can’t answer: when a CFO asks an LLM about solutions in your category, does the model associate your brand with cost efficiency and compliance? When a VP of Engineering asks the same model, does it associate you with API flexibility and developer experience?
If you’re tracking AEO the way most B2B teams do right now (counting mentions, tallying citations, watching a single visibility score), you have no idea. That gap is costing you more deals than you think.
When we studied how LLMs actually construct recommendations across complex B2B buying journeys, we found something the current AEO playbook completely ignores. Where search was a directory, LLMs act more like a personal shopper, tailoring responses to each stakeholder’s constraints, trade-offs, and use cases. The model doesn’t see your brand the same way regardless of who’s asking; it forms different associations depending on what each buyer cares about.
The metric that tells you whether those associations are working in your favour is what we call criteria alignment.
What Criteria Alignment Actually Measures
Think of criteria alignment as an association metric. It measures how often your brand comes up alongside the specific attributes that matter to each stakeholder in your buying group.
A brand mention tells you the model knows you exist. Criteria alignment tells you whether it understands what you’re actually good at, and whether that understanding matches what each buyer persona cares about. The strategic question it answers is one that citation counts never will: “When buyers care about X, does the model associate our brand with X?”

This is where the personal shopper behaviour of LLMs becomes strategically consequential. When a security-focused buyer adds compliance constraints to their prompt, the model doesn’t just filter a static list. It reconstructs its recommendation around those constraints, surfacing brands it associates with compliance and deprioritising those it doesn’t. When a revenue leader asks about the same category but emphasises speed-to-value, the model reshuffles again. Your brand either survives that reshuffling or it doesn’t. Survival depends on whether the model has learned to associate you with the criteria being applied.
The stronger the alignment, the more likely your brand is to feature as the option pool narrows. That’s why criteria alignment needs to be something you actually measure, not just something you assume is happening.
The Three-Step Process for Tracking Criteria Alignment
We developed a concrete, repeatable process for measuring criteria alignment by persona. It requires a deliberate shift from how most teams currently approach AEO measurement.
Step 1: Identify 3 to 5 Attributes Per Buyer Persona
Start with your buying group, not your content library. For each key persona involved in the purchase decision, define the three to five attributes that most influence their evaluation.
This needs to be specific, not generic. A CFO’s attributes might include total cost of ownership, implementation risk, and compliance coverage. A VP of Engineering’s might include API extensibility, developer documentation quality, and how well the product integrates with their existing tech stack.
Our research visualises this as stakeholder mapping: Finance cares about cost and support, the CEO about innovation, Sales about features and UI, Legal about SOC 2 and ISO 27001. Your map will be different, but the structure is the same: personas on one axis, their decision-driving attributes on the other.
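To make the map concrete, it can be captured as a simple data structure that the later tracking steps iterate over. A minimal sketch, with entirely hypothetical personas and attributes; substitute the output of your own buyer research:

```python
# Persona-attribute map: personas on one axis, the attributes that
# drive each persona's evaluation on the other.
# All names below are hypothetical placeholders.
PERSONA_ATTRIBUTES: dict[str, list[str]] = {
    "CFO": [
        "total cost of ownership",
        "implementation risk",
        "compliance coverage",
    ],
    "VP Engineering": [
        "API extensibility",
        "developer documentation quality",
        "tech stack integration",
    ],
    "Legal": [
        "SOC 2",
        "ISO 27001",
    ],
}
```

Each persona-attribute pair in this map becomes one row in your tracking, which is what makes the later competitive comparison possible.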
Step 2: Track How Often Your Brand Appears Alongside Those Attributes
For each persona-attribute pair, track how frequently your brand appears in LLM responses in proximity to those specific attributes, particularly in awareness and consideration-stage prompts.
The emphasis on early-stage prompts is deliberate. That’s where associations are forming and where the model’s understanding is still malleable. A prompt like “What should I consider when evaluating [category] platforms?” reveals which brands the model associates with which evaluation criteria. A prompt like “Which [category] tools are best for enterprise compliance requirements?” shows whether the model links your brand to compliance before any shortlisting has occurred.
One critical warning: don’t build your prompt tracking lists by scraping your own site. Tools and teams often generate tracked prompts based on the questions their existing content answers. The problem is that this creates a self-fulfilling loop. You end up measuring prompts where your brand is already likely to appear, rather than the prompts that actually matter. Build your prompt clusters from buyer research, not content audits.
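The per-pair tracking in this step can be sketched as a sentence-level co-mention count. This is a hedged illustration, not a production pipeline: real tooling would query the LLMs directly, normalise brand aliases, and likely use embeddings or entity recognition rather than exact substring matching. Brand and attribute names below are hypothetical:

```python
import re
from collections import Counter

def comention_counts(responses: list[str], brand: str,
                     attributes: list[str]) -> Counter:
    """For one persona's attribute list, count how often the brand
    appears in the same sentence as each attribute across a set of
    captured LLM responses (a crude proximity proxy)."""
    counts: Counter = Counter()
    for text in responses:
        # Naive sentence split; good enough for a sketch.
        for sentence in re.split(r"(?<=[.!?])\s+", text):
            low = sentence.lower()
            if brand.lower() not in low:
                continue
            for attr in attributes:
                if attr.lower() in low:
                    counts[attr] += 1
    return counts

# Hypothetical captured responses to awareness-stage prompts.
responses = [
    "Acme is often cited for compliance coverage. Pricing varies widely.",
    "For total cost of ownership, many teams shortlist Beta over Acme.",
]
counts = comention_counts(
    responses, "Acme", ["compliance coverage", "total cost of ownership"]
)
```

Dividing each count by the number of relevant responses gives a co-mention rate per persona-attribute pair, which is the number you carry into the competitive comparison in Step 3.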
Step 3: Compare Against Competitors on the Same Attributes
Criteria alignment is inherently competitive. Knowing that your brand appears alongside a given attribute in some portion of relevant responses means very little without context. If your primary competitor appears alongside the same attribute significantly more often, you have a gap that no amount of citation optimisation will close.
This comparative dimension is core to the method, not an optional add-on. For each persona-attribute pair, track your co-mention frequency against the same metric for your top two or three competitors. This gives you a criteria alignment competitive map: a clear diagnostic of where the model perceives you as a strong fit, where it perceives your competitors as stronger, and where neither of you has established association.
Those white spaces, attributes where no brand has strong alignment, are your highest-leverage content opportunities.
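The competitive map and its white spaces can be sketched as follows, assuming you have already normalised co-mention counts into per-brand rates between 0 and 1. The brand names and the 0.2 “strong alignment” threshold are illustrative assumptions, not values from our research:

```python
def alignment_map(rates_by_brand: dict[str, dict[str, float]],
                  threshold: float = 0.2):
    """Return (leaders, white_space): which brand leads each attribute,
    and which attributes no brand clears the threshold on.
    The 0.2 default threshold is an arbitrary illustrative choice."""
    attributes = sorted({a for r in rates_by_brand.values() for a in r})
    leaders: dict[str, str] = {}
    white_space: list[str] = []
    for attr in attributes:
        best = max(rates_by_brand,
                   key=lambda b: rates_by_brand[b].get(attr, 0.0))
        if rates_by_brand[best].get(attr, 0.0) >= threshold:
            leaders[attr] = best
        else:
            white_space.append(attr)  # no brand owns this attribute yet
    return leaders, white_space

# Hypothetical co-mention rates for one persona's attributes.
rates = {
    "us":    {"compliance coverage": 0.40, "implementation risk": 0.10},
    "rival": {"compliance coverage": 0.60, "implementation risk": 0.15},
}
leaders, white_space = alignment_map(rates)
```

Here the sketch would flag “implementation risk” as white space: neither brand has established the association, so content targeting it competes against no incumbent.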
What Changes When You Measure This Way
The shift from “are we visible?” to “are we associated with the right things, for the right buyers, at the right stage?” changes how you allocate content effort.
You start producing content that strengthens specific brand-attribute associations for specific personas, and you measure whether the model is more likely to recommend you when a compliance-focused buyer adds constraints. The result is that you earn the associations upstream that determine whether you make the shortlist at all, rather than chasing citations at the bottom of the funnel.
It also changes how you think about your competitors. The threat isn’t just the brand that gets mentioned more often; it’s the brand that owns stronger associations with the criteria your buyers actually apply when narrowing their options. Criteria alignment makes that competitive landscape visible and actionable.
The Bigger Framework
The process is straightforward: three steps, persona-specific attributes, upstream prompt tracking, and competitive comparison.
But the hard part isn’t tracking this alignment. It’s accepting that the AEO metrics you’ve been tracking aren’t the ones that matter.
The current AEO playbook treats AI visibility as a single leaderboard. Our research shows it’s different for every persona, at every stage. Track how LLMs perceive your fit for each buyer in your buying group. That’s the measurement approach that connects content strategy to how AI-mediated buying actually works. And for most B2B teams, it’s a capability they haven’t built yet.
Criteria alignment is one of three upstream measurement pillars we identified in our full Dark AI research report, alongside inclusion before convergence and information gain. Together they form a measurement framework for complex B2B AEO that starts where influence actually forms, not where it becomes visible.