AEO Lessons From Analysing 50,000 Pieces of B2B Content

Tom Rudnai

Founder and CEO, Demand Genius

B2B teams are producing more content than ever. Faster, shorter, and increasingly assisted by AI. At the same time, content discovery is shifting from traditional search toward AI-mediated systems. Despite this, much of the guidance around “AI optimisation” remains speculative, tactical, or based on limited experiments.

To ground this conversation in evidence, we analysed 50,000 pieces of B2B content to understand how content strategies are actually evolving, and where they are breaking down, in the AI era. We intentionally ring-fenced the study to Fintech: a content-dense, highly regulated industry with complex buying journeys. That constraint allowed us to observe clearer patterns across an entire category, rather than dilute the signal across unrelated industries.

While the dataset is industry-specific, the findings are not.

The patterns that surfaced reflect broader shifts in B2B content production, quality, and maintenance that apply well beyond Fintech. This article summarises the most transferable lessons, focusing on what the data reveals about content optimisation in the AI era, and what teams should do differently as a result.

3 Key Lessons About Content Optimisation in the AI Era

1. B2B brands are accumulating huge Content Debt

Brands are publishing shorter content at much higher velocity, a shift likely enabled by AI-assisted production. Short-form content grew from ~25% of all published content in 2021 to 65% in 2025. Content strategies are clearly adapting to higher volume and faster consumption.

We feared the worst. However, the data suggests this is not simply “AI slop” – the quality markers assessed by our AI agents across the dataset generally improved over time. Shorter content, it appears, does not inherently mean lower-quality or more superficial content.

What has changed materially is the surface area of content most brands now maintain. Content libraries have expanded rapidly, increasing the number of assets that contribute to how a brand is represented and understood.

This introduces a clear risk.

Four content factors are both critical for AI-era optimisation and directly controllable: clarity, confidence, transparency, and currency. As content volume grows, maintaining these signals consistently across an entire library becomes significantly harder. The result is what we refer to as content debt — the gap between the amount of content a brand has published and its capacity to keep that content accurate, aligned, and up to date.

The dataset shows that this gap is widening. As content libraries expand, outdated or decayed content increasingly coexists with newer positioning. In an AI-led environment, that inconsistency carries a cost: older content can still influence LLM perception, and can create contradictions and misaligned narratives that hurt overall AEO performance.

Evergreen Content Is the Exception, Not the Strategy Most Teams Are Running

In 2025, just 7% of newly published content was classified as evergreen. 91% was current, with only 2% already outdated.

At first glance, this appears positive. But viewed through a different lens, it suggests that the vast majority of content is time-bound. If only a small fraction is designed to last, then most published assets will require maintenance or replacement within the next 12–24 months.

One of the most effective ways to limit content debt is to prioritise evergreen content — assets that retain relevance over time and reduce the need for ongoing updates. The data suggests this is not how most teams are currently operating.

Brands Don’t Take Content Quality as Seriously as They’d Have You Believe

Evaluating content maintenance is challenging from a distance – even the tiniest metadata update may register as a modification.

Still, only 11% of content showed evidence of being updated. Given the likelihood that many of these updates were non-material, this figure almost certainly overstates true maintenance activity.

This points to a clear disconnect. Brands appear to invest heavily in quality at the point of publication, but far less in sustaining quality and relevance across their content libraries over time. When viewed at scale, quality and currency are not being prioritised consistently, despite their growing importance in the AI era.

Authority and Clarity Have Improved, But Remain Average

Across the 50,000 pieces of content analysed, authority and clarity signals have meaningfully improved over time, but they remain uneven and, in many cases, shallow.

At a high level, most modern B2B content now demonstrates baseline expertise. In the 2025 sample, 94% of content contained clear industry expertise signals, up from 70% across the full historical dataset. This suggests teams are getting better at sounding knowledgeable and aligned with their category.

However, when we looked beyond surface-level expertise, the picture changed.

Fewer than 20% of all analysed assets scored “high” for authority signals, with the vast majority scoring “moderate”. In other words, while most content communicates familiarity with a topic and seeks to substantiate it through defensible reasoning, our AI found it lacking in true authority.

Clarity followed a similar pattern. Average clarity scores have improved year over year, indicating progress in structure, readability, and intent. But the distribution remains tightly clustered around “average,” with limited lift at the high end. This points to incremental optimisation rather than step-change improvement. In short, jargon continues to prevail in B2B.

The takeaway is not that B2B content lacks quality. It’s that quality has plateaued. Many brands are converging on the same standards of competence, language, and structure, producing content that is acceptable, but rarely distinctive.

For teams thinking about optimisation in the AI era, this gap between sounding credible and demonstrating authority is one of the clearest opportunities surfaced by the dataset.

Transparency Is the Weakest Signal, and Biggest Opportunity, Across 50,000 Assets

Across the dataset, transparency was the lowest-performing signal of any category we assessed. We evaluated transparency as a combination of source attribution, methodological explanation, and contextual completeness. In practice, this means most B2B content explains what to think, but not why those claims should be trusted.

Methodological transparency has improved, but remains limited. In 2025, 83% of content scored “average”, while the proportion rated “low” fell from 27% to 10%. This suggests fewer opaque claims, but little movement toward clearly articulated reasoning. Source attribution shows an even larger gap: while references appear more frequently, only 3% of analysed content scored “excellent” for source transparency.

Contextual completeness is the main area of progress. 66% of content now scores “high”, up from 34% historically, indicating better framing and narrative continuity. However, context often substitutes for evidence rather than reinforcing it.

The takeaway is straightforward: B2B content has become easier to follow, but not easier to verify. For human readers, this limits trust. For AI systems synthesising and comparing information across sources, it reduces confidence and precision. From an optimisation perspective, transparency remains the most underdeveloped lever to improve content quality.

What We’d Do Differently Today Based on These Findings

Build a system for maintaining content currency and quality.

Stop treating content maintenance as an occasional clean-up and start treating it as a continuous system. The data is clear: publishing velocity now outpaces most teams’ ability to manually monitor accuracy, relevance, and alignment across their content libraries.

In practice, this requires ongoing evaluation of content currency, clarity, accuracy, and quality at scale. This is where AI agents can add leverage by continuously scanning large libraries to flag outdated claims, inconsistencies, and declining quality signals, and surfacing where attention is needed before problems compound. Whether achieved through Demand-Genius or another approach, the capability matters more than the tool.

Content governance has to evolve from a periodic “spring clean” into a habitual process. Teams that build this discipline reduce content debt over time, maintain narrative coherence as they scale, and keep their content libraries aligned with how both buyers and AI systems actually learn.

Use Original Research to Anchor Authority Across Entire Content Libraries

Invest in original research as a structural asset, not a campaign. One well-designed study can anchor authority across dozens of downstream content pieces.

Original research creates defensible claims that AI systems can trace, reuse, and attribute. Unlike opinion-led content, research-backed insights give AI a stable source of truth to return to when summarising, comparing, or recommending information across a category.

Practically, this means treating research as infrastructure. Build content pillars that consistently reference the same datasets, findings, and language, so authority compounds across the library rather than being diluted across disconnected assets.

Be Transparent in your Methodology

Make transparency deliberate and explicit. Not as a disclosure exercise, but as a way to strengthen persuasion. That means clearly attributing sources, stating assumptions, and making any original research scientifically rigorous and reproducible.

The reason is simple: AI systems infer less reliably than humans when signals are implicit. They can follow reasoning, but they respond with greater confidence when credibility cues are more explicit. Clear attribution and methodological context reduce ambiguity, making content easier for AI to assess, summarise, and cite accurately.

Teams that make evidence and reasoning explicit give both buyers and AI systems less work to do, increasing the likelihood that their content is trusted, reused, and recommended.

Quick-fire Observations!

  • Short-form content grew from ~25% of total published content in 2021 to 65% in 2025. Longer educational formats declined from ~70% to <30% over the same period.
  • BOFU content is on the rise as a percentage of overall output, increasing 3x between 2020 and 2025, reflecting its growing role in sales enablement and conversion.
  • Content strategies are increasingly designed for complex, multi-stakeholder buying groups. Content targeting technical personas in 2025 is 4× higher than in 2020, and we observed greater audience diversity across the dataset.

Notes around Methodology

This analysis is based on publicly available website content from 50 B2B Fintech SaaS companies, collected in October 2025. We crawled complete sitemaps, excluded low-value pages, and analysed each asset using a consistent AI-driven classification framework to assess content attributes and AI-era optimisation signals at scale. Results were validated through aggregation, manual spot-checks, and human review. All data used was publicly accessible.

Want to be a Demand-Genius?

Our Ambassador Program lets freelancers and creators benefit from free platform access, early and custom access to benchmark data, referral commissions and more!