Stop Buying AI. Start Fixing Data: The AI Readiness Stack for Asset Managers
06.02.2026 · 10 min read

Andrey Ivanov

Most asset managers "do AI." They run pilots in research, distribution, and client reporting. Yet few use AI at scale in critical processes. This article introduces a four-layer AI Readiness Stack designed explicitly for asset managers. It helps technology and risk leaders decide what to prioritize, what to pause, and how to move from scattered experiments to governed, production-grade AI.

1. Most Firms "Use AI." Very Few Are Ready.

AI adoption is real.

DataArt's AI personalization research shows that 67% of asset managers already use AI or ML in some capacity, and 80% of executives view "mass personalization" as a key growth driver. The AI in asset management market is forecast to reach about USD 21.7 billion by 2034.

At the same time:

  • 53% cite data quality and integrity as a key blocker.
  • 45% report a lack of in-house AI / ML talent.
  • Only 5% have a fully developed GenAI strategy.

So, most firms can say, "We have AI," but very few can say, "We are ready for AI at scale."

The uncomfortable truth:

Most AI initiatives in asset management do not fail because the models are weak. They fail because they sit on fragmented data, manual exceptions, and controls that were never designed for always-on, model-driven decisions.

If this does not change in the next couple of years:

  • Cost pressure rises as more tools and teams are added to manage AI silos.
  • Regulators increase scrutiny of opaque models in portfolio construction, surveillance, and distribution.
  • Competitors who treat AI as a data and operating-model problem, not a tools problem, pull ahead on cycle time and client experience.

The winning firms will not be those with the largest AI lab. They will be the ones who are ready.

2. The AI Readiness Stack for Asset Managers

To transition from pilots to scaled impact, leaders require a shared understanding of what constitutes "ready." The AI Readiness Stack has four layers:

  1. Data Foundations
  2. Governance & Risk Controls
  3. Operating Model & Talent
  4. Value & Adoption

It is specific to asset management because it mirrors how firms actually run: from security master and portfolio data through NAV and investor reporting, across front, middle, and back office, ESG, and risk.

Layer 1: Data Foundations

Core domains such as securities, portfolios, clients, transactions, risk factors, and ESG data must be mastered and accessible via governed platforms.

DataArt's Modern Data Platform materials for asset management describe platforms that combine portfolio, market, risk, ESG, operational, and client data into a single, cloud-native environment that feeds analytics, reporting, and AI use cases across the firm.

Signs you are not ready

  • Quants and data scientists build their own pipelines for each AI use case.
  • The same security or client appears with conflicting attributes in different systems.

Metrics to track

  • Percentage of AI use cases fed from governed data stores versus bespoke extracts.
  • Time to provision a new dataset for an AI team, from request to first usable version. In some firms this stretches to three or more months; treat "months" as a scaling blocker.
  • Watch the domains that tend to drive the longest delays:
    • Client data, especially joining CRM/IRM records with client holdings and historical transactions. 
    • Security master for private markets (private credit, private equity), where structures are complex, data is incomplete, and administrators are a hard dependency.

Layer 2: Governance & Risk Controls

AI is now firmly in regulatory focus. Supervisors are sharpening expectations for model inventories, explainability, testing, and ongoing monitoring in capital markets.

In practice, the "why now" differs by region. In much of the current market demand, the US looms large. There, the regulatory picture is not one single, all-encompassing AI law; it is shaped by industry bodies and enforcement. For asset managers, that often means the SEC, CFTC, and FINRA.

At this layer, firms:

  • Maintain a single inventory of AI models and use cases.
  • Classify use cases by risk tier and define standard validation and monitoring patterns.
  • Treat compliance and risk as design partners, not late-stage reviewers.

Metrics to track

  • Percentage of material AI use cases with named owners, documented controls, and monitoring.
  • Time to perform and approve a material model change. Even without mature benchmarks, a reasonable target is days, not weeks, supported by repeatable testing evidence and clear documentation.

Layer 3: Operating Model & Talent

AI that matters is cross-functional. It sits at the intersection of technology, data, risk, and business teams.

Many asset managers know their data well and have development and operations teams that want to learn. What they often lack is deep AI technical capability and the ability to test AI-based solutions with confidence. A second recurring gap is product ownership: capable product owners who can bridge business intent and technical execution. 

At this layer, firms:

  • Move away from isolated labs towards durable squads aligned to domains (research, distribution, operations, reporting).
  • Define product owners who are accountable for outcomes.
  • Invest in MLOps and shared tooling, not just notebooks and ad-hoc scripts.

What a “good” squad can look like
Assuming a working MLOps pipeline and resolved information security constraints:

  • Product owner
  • 1–2 ML engineers
  • QA / validator

Metrics to track

  • Ratio of AI projects using standard pipelines and tooling versus bespoke setups.
  • Percentage of critical AI skills covered by internal staff, not only partners.

Layer 4: Value & Adoption

This is where AI stops being a lab project and becomes part of how the firm operates.

Examples include:

  • AI-assisted narratives embedded in investor reports.
  • Risk insights surfaced in the tools portfolio managers already use.
  • Fewer manual exceptions in operations because models drive classification and routing.

Signs you are not ready

  • AI projects celebrate model accuracy, but no one tracks process or profit and loss (P&L) impact.
  • Users see AI as an extra dashboard, not part of the workflow.

Metrics to track

  • Percentage reduction in manual effort for a target process.
  • Change in revenue, AUM, or risk metrics linked to AI-supported decisions.

3. Proof Points: When Readiness Comes First

DataArt has over 20 years of experience in capital markets and asset management, serving more than 80 clients and completing over 250 projects in this sector. The most durable AI outcomes sit on top of stronger foundations, not isolated AI projects.

Example 1: Reporting Modernization as a Readiness Win

A leading global asset manager across public and private markets, with more than USD 1 trillion AUM, relied on manual, PowerPoint-based investor and marketing reports.

DataArt helped:

  • Modernize investor and marketing reporting.
  • Unify data flows across channels.
  • Embed automated report generation with plain-language explanations on top of governed data.

Reports that once took days are now delivered in seconds, supporting over 1,000 accounts with diverse reporting needs.

In stack terms, this engagement strengthened Layer 1 (Data Foundations) and Layer 4 (Value & Adoption) first. It also created a natural runway for later AI-driven personalization.

Example 2: A Global Credit Platform as an AI Foundation

A leading private equity investment firm, with roughly USD 195 billion AUM, struggled with fragmentation and dependence on third-party administrators for its credit business.

DataArt built a custom global credit data platform that:

  • Unified data sources across funds and instruments.
  • Provided a central view of investment operations and performance.
  • Reduced dependency on rigid third-party tools and associated back-office costs.

This was not branded as an AI project. It was a Layer 1–3 project, focusing on data foundations, clearer ownership, and a scalable operating model. Once a platform like this exists, it becomes realistic to pursue domain-specific AI use cases that were previously too brittle. One common example is handling covenant terms embedded in private credit deals (often buried in documents and inconsistent administrator feeds), where AI can help extract, normalize, and monitor covenants at scale.

4. Failure Modes To Avoid

Even with a clear framework, firms fall into familiar traps. In asset management work, four show up especially often: PoC theater, model-first thinking, shadow AI, and compliance as a blocker.

PoC Theater

Many pilots, little that reaches production. Success is measured by model accuracy, not by process or P&L impact.

  • Hidden cost: Spend, fatigue, and growing skepticism.
  • How the stack helps: Use Layers 1–3 as gates; do not fund more PoCs if data, governance, and operating model are weak.

Model First, Data Second

Teams choose an LLM or ML framework first, then discover that security master data is inconsistent, client data is incomplete, and lineage is unknown.

  • Hidden cost: Poor recommendations, manual workarounds, and higher mis-selling / mis-reporting risk.
  • How the stack helps: Treat data fitness at Layer 1 as a non-negotiable gate for any critical use case. 

Shadow AI and Fragmented Ownership

Business units develop their own AI tools, sometimes outside of central control.

  • Hidden costs: tool sprawl, duplicated spend, and hidden operational and compliance risks.
  • How the stack helps: Layers 2 and 3 require a single AI inventory and shared ownership between IT, data, and model risk.

Compliance as Blocker, Not Designer

Compliance and risk see AI late and react defensively.

  • Hidden cost: Delays, rework, and a widening gap between business ambition and controls.
  • How the stack helps: Involve control functions at Layer 2 by design, with standard logging, explanation, and approval patterns.

5. A 90-Day AI Readiness Playbook

You do not need a multi-year program to start closing the gap. A focused 90-day effort can change direction.

  1. Map your AI and data reality
    Build a single inventory of AI pilots and production systems, linking each to its primary data sources, owners, and consumers.
  2. Assess your stack by domain
    Start with two or three domains that tend to expose real readiness gaps and business value quickly:
    • Client data (CRM/IRM plus holdings and historical transactions)
    • Portfolio data
    • Security master / reference data, especially in private markets
  3. Pick 2–3 critical use cases and pause the rest
    Focus on use cases with clear business value and regulatory visibility. Avoid spreading talent across too many experiments.
  4. Run a four-week readiness sprint per use case
    Fix the worst data and governance gaps, standardize pipelines and environments, and agree on outcome metrics before the next model iteration. Four weeks sells well; in practice, four to six weeks is safer.
  5. Wire AI into workflows and close the loop
    Integrate outputs into existing tools (OMS, CRM, portals) rather than adding parallel dashboards. Track process and P&L impact and feed lessons back into standards and platforms.

6. A Compact AI Readiness Diagnostic

If you want a neutral view before launching the next round of AI initiatives, you can structure a short engagement around the stack:

Asset Management AI Readiness Diagnostic (4–6 weeks)

  • Week 1: Landscape and stack assessment
    Review AI initiatives, data platforms, and controls across front, middle, and back office using the four-layer stack.
  • Weeks 2–3: Deep dive in one or two domains
    Focus on high-value areas such as research, distribution, or investor reporting. Map specific data, governance, and operating-model gaps.
  • Weeks 4–6: Roadmap and quick wins
    Deliver a practical roadmap with three to five high-impact use cases, target data and architecture patterns, and a 90-day backlog of concrete steps.

DataArt supports asset managers worldwide with modern data platforms, reporting systems, and AI-enabled solutions across front, middle, and back office. The goal is not to push a single platform. It is to modernize foundations so AI becomes a safe, repeatable part of how the business operates.
