Technology
8 min read · February 2025

MCP Explained for CTOs: The Protocol That Changes Everything

Model Context Protocol has quietly become the most important enterprise AI infrastructure standard of 2025. Here's what your engineering team needs to understand — and why it creates both significant opportunity and new risk.

Corla Research
Published February 2025

In November 2024, Anthropic released the Model Context Protocol. At the time, it received moderate coverage as a technical standard for connecting AI assistants to external data sources. By mid-2025, it had become the de facto integration layer for enterprise AI — backed by Anthropic, OpenAI, Google, Microsoft, and the Linux Foundation.

If you're a CTO and MCP is not yet on your radar, it should be. Not because it's new technology you need to adopt, but because your developers have already adopted it — and the governance implications are significant.

What MCP actually is

The non-technical version: MCP is a standard protocol that lets AI tools (like Claude Code, GitHub Copilot, or Cursor) connect to external services and data sources. Without MCP, connecting an AI assistant to your database requires custom integration code. With MCP, the AI assistant and the database both speak a common language, and connecting them is configuration, not engineering.

The technical version: MCP is a client-server protocol based on JSON-RPC 2.0. AI clients (IDE plugins, coding agents) connect to MCP servers (services that expose tools and context). The protocol standardises three types of server capabilities: tools (actions the AI can invoke), resources (data the AI can read), and prompts (templates that shape AI behaviour).
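To make the wire format concrete, here is a sketch of what a single tool invocation looks like under JSON-RPC 2.0, using only the standard library. The request/response shape follows the MCP `tools/call` convention; the tool name (`query_database`) and its arguments are illustrative, not taken from any real server.

```python
import json

# Hypothetical tools/call request an MCP client might send to a server.
request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/call",
    "params": {
        "name": "query_database",  # illustrative tool name
        "arguments": {"sql": "SELECT count(*) FROM customers"},
    },
}

# A conforming response echoes the request id and carries a result payload.
response = {
    "jsonrpc": "2.0",
    "id": 1,
    "result": {"content": [{"type": "text", "text": "42"}]},
}

wire = json.dumps(request)      # what actually travels over the pipe
decoded = json.loads(wire)
print(decoded["method"])                 # tools/call
print(response["id"] == decoded["id"])   # True: correlated by id
```

The governance point falls out of this shape: every tool call, resource read, and prompt fetch is a structured JSON message on one channel, which makes it uniformly interceptable and, in principle, uniformly auditable.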

The governance version — which is what matters for this article: MCP creates a standardised channel through which AI tools receive instructions, context, and capabilities. That channel is powerful, and it is largely ungoverned in most enterprise environments today.

Why it matters more than you think

The Boston Consulting Group characterised MCP as "a deceptively simple idea with outsized implications." The implication they highlighted: without MCP, integration complexity between AI agents and enterprise systems rises quadratically. With MCP, it rises linearly. That's a large structural efficiency gain as AI agent use scales.
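The scaling argument is easy to sanity-check with back-of-envelope arithmetic: without a shared protocol, each of N AI clients needs a bespoke connector to each of M enterprise systems (N × M integrations); with MCP, each party implements the protocol once (N + M adapters). A minimal illustration:

```python
def custom_integrations(n_clients: int, m_systems: int) -> int:
    # One bespoke connector per (client, system) pair.
    return n_clients * m_systems

def mcp_adapters(n_clients: int, m_systems: int) -> int:
    # One protocol implementation per party.
    return n_clients + m_systems

for n, m in [(3, 5), (10, 50), (20, 200)]:
    print(n, m, custom_integrations(n, m), mcp_adapters(n, m))
# 3 5 15 8
# 10 50 500 60
# 20 200 4000 220
```

At 20 AI clients and 200 internal systems, that is 4,000 bespoke integrations versus 220 adapters, which is the structural gain BCG is pointing at.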

But there's a more specific implication that BCG didn't highlight: MCP standardises the channel through which enterprise AI context flows. Before MCP, context delivery to AI tools was ad hoc — files here, system prompts there, custom integrations everywhere. Each approach had its own access patterns and its own (limited) security posture.

MCP creates a single, standardised pipe. That's excellent news for integration efficiency. It is also the canonical attack surface for anyone who wants to intercept, modify, or extract enterprise AI context.

MCP is to enterprise AI context what HTTP is to web data. The standardisation that enables the ecosystem also concentrates the risk. HTTP needed TLS, firewalls, and WAFs. MCP needs its own governance layer.

The enterprise adoption picture in 2025

The adoption data is striking. As of late 2025, MCP SDKs are seeing over 97 million monthly downloads, and more than 5,800 MCP servers are available. Block, Bloomberg, Amazon, and hundreds of Fortune 500 companies have production MCP deployments. The Linux Foundation's Agentic AI Foundation now governs the spec, which has removed the single-vendor governance risk that early skeptics raised.

The security picture is more concerning. Recent research from Clutch Security found that in a typical 10,000-person organisation, roughly 15% of employees are running an average of two MCP servers each — meaning 3,000+ MCP servers running across the organisation, each with its own credentials, and most with no centralised governance, no audit trails, and no revocation mechanism.

This is the pattern security teams call "shadow infrastructure" — functional, widely used, completely outside the governance perimeter. It happened with SaaS in the early 2010s. It happened with cloud infrastructure in the late 2010s. It is happening with AI context in 2025.

The governance risk nobody is talking about

The MCP governance conversation in 2025 is primarily about what tools AI agents can call. MCP gateway products control which external services an AI agent can invoke. That's the right question for agentic risk, where an AI might execute code or write to databases.

But there's a second, less-discussed risk: what context flows through the MCP channel into AI tools. Most MCP gateway products are designed to govern the outbound direction — what the AI can do. Fewer are designed to govern the inbound direction — what proprietary knowledge reaches the AI tool.

For most enterprises, the inbound direction is where the proprietary IP lives. Your system prompts, your engineering playbooks, your internal knowledge base — these flow into developer AI tools via MCP and related mechanisms. And right now, that flow is typically:

  • Ungoverned — no policy on what context should flow to which developers
  • Unaudited — no record of what context a developer's AI tool received
  • Unrevocable — when a vendor engagement ends, there is no way to recall the context shared during it

The MCP standard does not inherently solve any of these problems. It provides the pipe. What flows through the pipe, with what controls, remains an open question for most enterprises.

What to do about it

The governance response to MCP has three levels, and most organisations are at level zero:

Level 1 — Visibility. Know which MCP servers are running in your organisation, who created them, and what they expose. This is the minimum viable governance posture. An MCP registry with mandatory registration for production servers achieves this. Most MCP gateway products provide this visibility as a baseline capability.
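A registry at this level can be very simple. The sketch below shows the minimum useful record: a server name, an accountable owner, and what the server exposes. Field names and the example servers are assumptions for illustration, not any product's API.

```python
from dataclasses import dataclass

@dataclass
class McpServerRecord:
    name: str
    owner: str          # accountable team or individual
    exposes: list[str]  # tool names the server makes available

class McpRegistry:
    def __init__(self) -> None:
        self._servers: dict[str, McpServerRecord] = {}

    def register(self, record: McpServerRecord) -> None:
        self._servers[record.name] = record

    def is_registered(self, name: str) -> bool:
        return name in self._servers

registry = McpRegistry()
registry.register(McpServerRecord("jira-bridge", "platform-team", ["create_ticket"]))
print(registry.is_registered("jira-bridge"))       # True
print(registry.is_registered("shadow-db-server"))  # False: unregistered = shadow infrastructure
```

The value is not the data structure; it is the mandatory-registration policy behind it, which turns the 3,000-server shadow estate into an enumerable inventory.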

Level 2 — Outbound control. Govern what external services AI agents can call. This is what current MCP gateway products do well. OAuth 2.1 authentication, RBAC, audit trails for tool invocations. This protects against agentic risk — an AI agent that shouldn't have access to your CRM, or shouldn't be able to write to your database.
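The core of outbound control is an allowlist check with an audit trail, sketched below under assumed role and tool names. Real gateways layer OAuth 2.1 and RBAC on top; this only shows the decision-and-record pattern.

```python
from datetime import datetime, timezone

# Hypothetical role-based allowlist: which tools each role may invoke.
ALLOWLIST = {
    "developer": {"read_docs", "run_tests"},
    "release-bot": {"read_docs", "deploy_staging"},
}
audit_log: list[dict] = []

def authorise_call(role: str, tool: str) -> bool:
    allowed = tool in ALLOWLIST.get(role, set())
    # Record every attempt, allowed or not, for the audit trail.
    audit_log.append({
        "ts": datetime.now(timezone.utc).isoformat(),
        "role": role,
        "tool": tool,
        "allowed": allowed,
    })
    return allowed

print(authorise_call("developer", "run_tests"))       # True
print(authorise_call("developer", "deploy_staging"))  # False: outside the allowlist
print(len(audit_log))                                 # 2: every attempt is recorded
```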

Level 3 — Inbound context governance. Govern what proprietary context flows into developer AI tools, with scoping, compilation, auditing, and revocation. This is what Corla addresses, and it is currently the least-served governance layer. An AI agent that receives the wrong context can do as much damage as an AI agent with the wrong permissions — it's just a different kind of damage.
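Inbound governance means grants rather than files: a recipient gets scoped access to a context package, every fetch is logged, and the grant can be revoked. The sketch below is an illustrative model of that pattern, not Corla's actual API.

```python
class ContextGovernor:
    def __init__(self) -> None:
        # (recipient, package) -> grant currently active?
        self._grants: dict[tuple[str, str], bool] = {}
        self.audit: list[tuple[str, str, str]] = []

    def grant(self, recipient: str, package: str) -> None:
        self._grants[(recipient, package)] = True
        self.audit.append(("grant", recipient, package))

    def revoke(self, recipient: str, package: str) -> None:
        # e.g. when a vendor engagement ends
        self._grants[(recipient, package)] = False
        self.audit.append(("revoke", recipient, package))

    def fetch(self, recipient: str, package: str) -> bool:
        ok = self._grants.get((recipient, package), False)
        self.audit.append(("fetch" if ok else "denied", recipient, package))
        return ok

gov = ContextGovernor()
gov.grant("vendor-dev-1", "engineering-playbook")
print(gov.fetch("vendor-dev-1", "engineering-playbook"))  # True while granted
gov.revoke("vendor-dev-1", "engineering-playbook")
print(gov.fetch("vendor-dev-1", "engineering-playbook"))  # False after revocation
```

Note the limit this model makes explicit: revocation stops future delivery, but context already delivered cannot be recalled, which is why delivery-time scoping and auditing have to exist before the engagement ends, not after.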

The enterprises that will lead in AI-assisted development over the next five years are the ones that treat the context pipe — not just the action pipe — as core governance infrastructure. MCP made that pipe universal. Governing it is now table stakes.