Enterprise services · AI Implementation

Building the data layer before anyone ships an agent.

Sector
Enterprise services organization
Client
Anonymized by default
Engagement
AI-readiness engagement, in flight
Approach
Governed data layer with agent registry and MCP surface

The situation

An enterprise services organization had just finished a one-day AI accelerator with a major cloud provider. The accelerator produced a strong list of use cases. Teams were excited. Leadership wanted to ship. But the data underneath was at 1.5 out of 5 on a governance maturity scale. The agents the use cases described had nowhere reliable to read from.

What we found

The most expensive misconception in the AI conversation right now is that AI work is mostly model work. It almost never is. The model is the last 10 percent. The first 90 percent is the data layer, the access controls on it, the governance over which agents can read what, and the registry that tracks which agents exist and what they're allowed to do. Skip that and you don't ship agents. You ship liability.

What we built

A data architecture roadmap that named the foundation layer explicitly: a governed data platform with column-level access controls; a Model Context Protocol (MCP) surface over it, so agents query through one consistent interface; and an agent registry that records every deployed agent with its scope, owner, and data permissions.
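The registry piece is simple enough to sketch. A minimal, deny-by-default version looks something like the following; every name here (AgentRecord, can_read, the column strings) is illustrative, not the client's actual schema or tooling:

```python
# Hypothetical sketch of an agent registry with column-level grants.
# All names are illustrative assumptions, not the client's schema.
from dataclasses import dataclass


@dataclass(frozen=True)
class AgentRecord:
    """One entry per deployed agent: who owns it and what it may read."""
    agent_id: str
    owner: str                       # accountable team or person
    scope: str                       # what the agent is for
    allowed_columns: frozenset[str]  # column-level grants, "table.column"


class AgentRegistry:
    def __init__(self) -> None:
        self._agents: dict[str, AgentRecord] = {}

    def register(self, record: AgentRecord) -> None:
        self._agents[record.agent_id] = record

    def can_read(self, agent_id: str, column: str) -> bool:
        """Deny by default: an unregistered agent reads nothing."""
        record = self._agents.get(agent_id)
        return record is not None and column in record.allowed_columns


registry = AgentRegistry()
registry.register(AgentRecord(
    agent_id="invoice-triage-v1",
    owner="finance-ops",
    scope="invoice triage",
    allowed_columns=frozenset({"invoices.amount", "invoices.status"}),
))

assert registry.can_read("invoice-triage-v1", "invoices.amount")
assert not registry.can_read("invoice-triage-v1", "employees.salary")
assert not registry.can_read("unknown-agent", "invoices.amount")
```

The design choice that matters is the default: an agent not in the registry, or a column not in its grant list, gets nothing. The MCP surface sits in front of a check like this, so every agent query passes through the same gate.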

The work doesn't look like AI work. It looks like 1990s data architecture done with 2026 tooling. That's the point. The cloud provider's tooling is good but it doesn't supply the discipline. The discipline is the engagement.

What changed

The accelerator's use cases got scoped, not abandoned. The ones that could ship soon shipped against the data layer. The ones that needed three quarters of data work got rescheduled honestly instead of freezing in the maybe column. The frontline teams who actually do the work were brought into the conversation about which agents got built first, because they were the ones who would be auditing the outputs.

The AI work is mostly data work. Everyone discovers this eventually. We start there. Start an audit →