Platform Engineering emerged as a discipline to solve a specific problem: the cognitive overhead of building and operating software was growing faster than individual teams could absorb it. Rather than every team solving the same infrastructure problems independently, Platform Engineering centralised that expertise and exposed it as internal products — paved paths that made the right way also the easy way.
The shift to AI-assisted development creates a new version of the same problem. Every team is now configuring its own AI agent context — system prompts, local knowledge, tool configurations — independently. The same organisational standards get encoded differently in each team's setup, or not at all. The institutional knowledge that Platform Engineering worked hard to codify in runbooks and architecture docs isn't reaching the AI agents that are now doing a significant share of the coding.
The team best positioned to fix this is the same team that fixed the original problem.
What Platform Engineering already owns
In most mature engineering organisations, Platform Engineering teams already own several things directly relevant to AI agent context:
The architecture reference. The documented understanding of how the organisation's systems fit together — service boundaries, data flows, integration patterns, current deprecations. This is exactly what an AI agent needs to produce architecture-aligned code.
The standards library. Approved languages, frameworks, and libraries. Coding conventions. The rationale behind key decisions. What's recommended, what's permitted, what's prohibited. An AI agent with no access to this will reinvent it imperfectly, or violate it unknowingly.
The incident history. Post-incident reviews, known failure modes, patterns to avoid. The organisation's hard-won experience with what breaks and why. This is some of the most valuable context an AI agent could have — and it's almost universally absent from current AI deployments.
The security baseline. Often in collaboration with the security team — approved authentication patterns, known vulnerabilities in dependencies, data handling constraints. Security knowledge that should be present in every agent session but typically isn't.
The gap AI agents expose
The problem isn't that this knowledge doesn't exist. Platform Engineering teams have invested significant effort in documenting it. The problem is that it lives in formats — Confluence pages, runbooks, architecture decision records — that AI agents don't naturally access.
A developer's AI coding assistant doesn't read Confluence before each session. It doesn't check the architecture decision record for the service it's about to modify. It doesn't know what was learned in last quarter's incident retrospective. It knows what it was trained on, and it knows what the developer tells it in the current session.
The result is that all the knowledge Platform Engineering has carefully curated exists in a layer that AI agents can't reach. The agents operate below that layer — capable and fast, but organisationally unaware. The gap between what the organisation knows and what its agents know is, in most cases, enormous.
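Closing that gap is largely mechanical: curated context has to be loaded into every agent session rather than left in a wiki. A minimal sketch of the idea, assuming a hypothetical `platform-context/` directory of plain-text files published by Platform Engineering (the file names and layout here are illustrative, not a standard):

```python
from pathlib import Path

# Hypothetical layout: Platform Engineering publishes plain-text context
# files, and tooling prepends them to every agent session's system prompt.
CONTEXT_DIR = Path("platform-context")

CONTEXT_FILES = [
    "architecture.md",  # service topology, integration patterns
    "standards.md",     # approved libraries, coding conventions
    "incidents.md",     # "what not to do" lessons from incident reviews
    "security.md",      # auth patterns, data handling constraints
]

def build_system_context(context_dir: Path = CONTEXT_DIR) -> str:
    """Concatenate published context files into one block of text
    suitable for prepending to an AI agent's system prompt."""
    sections = []
    for name in CONTEXT_FILES:
        path = context_dir / name
        if path.exists():  # a missing file is skipped, not fatal
            sections.append(f"## {name}\n{path.read_text().strip()}")
    return "\n\n".join(sections)
```

The point of the sketch is the delivery mechanism: once context is a set of published files, injecting it into a session is trivial, and the hard work shifts to keeping those files current.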
Platform Engineering has spent years making the right way the easy way for human developers. The same work is now needed for AI agents — and it's the same team's job to do it.
The new job: context publisher
Adding "AI context publisher" to Platform Engineering's remit doesn't require a wholesale change in what the team does. It requires a new output channel for work they're largely already doing.
The architectural decisions that get written up in ADRs also need to be packaged as AI agent context. The incident lessons that go into PIRs also need to feed the "what not to do" context package. The approved library list that lives in a wiki page also needs to be structured in a form agents can act on.
The key shift is treating AI context as a first-class artefact of Platform Engineering work — not a downstream application of it. When a new architectural standard is adopted, the context package update happens at the same time as the documentation update, not weeks later when someone gets around to it.
This matters for the same reason that keeping CI/CD configurations in code matters: it makes the knowledge live where it's used, rather than in a separate system that requires manual synchronisation.
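One way to enforce that simultaneity is the same way CI/CD configuration drift is enforced: a pipeline check that fails when documentation changes without a matching context update. A sketch under assumed conventions (an ADR directory at `docs/adr/` and a context file at `platform-context/architecture.md` — both hypothetical paths):

```python
import sys
from pathlib import Path

# Hypothetical CI check: every ADR under docs/adr/ must be referenced in
# the published agent-context package, so the context update ships in the
# same change as the documentation update rather than weeks later.
ADR_DIR = Path("docs/adr")
CONTEXT_FILE = Path("platform-context/architecture.md")

def stale_adrs(adr_dir: Path, context_file: Path) -> list[str]:
    """Return ADR identifiers not yet referenced in the context file."""
    published = context_file.read_text() if context_file.exists() else ""
    missing = []
    if adr_dir.exists():
        for adr in sorted(adr_dir.glob("*.md")):
            adr_id = adr.stem  # e.g. "adr-0042-service-tokens"
            if adr_id not in published:
                missing.append(adr_id)
    return missing

if __name__ == "__main__":
    missing = stale_adrs(ADR_DIR, CONTEXT_FILE)
    if missing:
        print("Context package missing entries for:", ", ".join(missing))
        sys.exit(1)  # fail the build until the context is updated
```

A check this crude (a substring match on the ADR identifier) is deliberately low-friction: it doesn't validate the content of the context entry, only that one exists, which is enough to make "update the context package" part of the same pull request.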
What to publish — and how to structure it
The practical question for Platform Engineering teams starting this work: what goes into the context layer, and how should it be structured?
The most immediately valuable categories are:
Architecture context. Current service topology, team ownership boundaries, approved integration patterns. The information a developer would need to orient themselves in the codebase — expressed directly, not as a document to be read but as facts to be acted on. "The auth service is at auth.internal. It uses service tokens, not user JWTs, for service-to-service calls."
What not to do. The most actionable category and the easiest to build incrementally. After each significant incident, one sentence: what the failure mode was and what pattern to avoid. These accumulate quickly and have immediate value. An agent that knows "don't use synchronous calls to the inventory service from the checkout flow — it caused the November 2024 outage" is a meaningfully safer AI assistant.
Approved library list. Not just the list, but the status of each library: current recommended version, any known issues, whether the team is planning to migrate away. Agents that know the current approved state don't suggest deprecated dependencies or libraries under evaluation.
Security baseline. Developed with the security team, but owned by Platform Engineering for distribution. Current authentication patterns, known vulnerability categories to avoid, data handling constraints. This should reach every agent in every session, not just when a security review catches something in code review.
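The approved-library category illustrates what "structured in a form agents can act on" means in practice. One possible shape, sketched here as a Python data model (the field names and status vocabulary are illustrative assumptions, not a standard):

```python
from dataclasses import dataclass

# Hypothetical structure for one approved-library entry: each record
# carries the status an agent needs to act on, not just the name.
@dataclass
class LibraryStatus:
    name: str
    recommended_version: str
    status: str      # e.g. "recommended" | "permitted" | "deprecated"
    notes: str = ""  # known issues, planned migrations

def render_for_agent(libraries: list[LibraryStatus]) -> str:
    """Render entries as compact facts an agent can act on directly,
    rather than as a document to be read."""
    lines = []
    for lib in libraries:
        line = f"- {lib.name} {lib.recommended_version}: {lib.status}"
        if lib.notes:
            line += f" ({lib.notes})"
        lines.append(line)
    return "\n".join(lines)
```

Rendered this way, each entry becomes a single declarative line — "library X at version Y is deprecated, migrate to Z" — which is the granularity at which an agent can actually apply it mid-session.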
The multiplier effect
The reason Platform Engineering is the right team for this work is the same reason it was the right team for internal developer platforms: the work multiplies across every developer who uses the output.
A senior architect who documents an architecture decision in a context package spends an hour doing it. That hour of work reaches every developer's AI agent in the organisation — including new hires who join six months later, including vendors who are onboarded next year, including junior engineers who would never have thought to ask the right question. The knowledge doesn't fade or get lost in a Slack thread. It's ambient in every session, for everyone, indefinitely.
This is Platform Engineering's core value proposition applied to a new layer. The team that makes the right way the easy way for human developers is the same team that should make the right way the default for AI agents. The infrastructure is different. The mission is the same.