Strategy
6 min read · February 2025

Your Prompts Are Intellectual Property. Treat Them That Way.

Engineering teams spend months refining system prompts and playbooks. Most have no mechanism to protect them when they need to be shared. The gap between IP law and AI tooling is wider than most legal teams realise.

Corla Research
Published February 2025

In 2023, a consulting firm spent four months and significant senior engineering time building a system prompt for their AI-assisted due diligence workflow. The prompt encoded proprietary analytical frameworks, risk heuristics developed over years of deals, and a structured reasoning pattern that consistently outperformed their competitors' outputs.

In 2024, they noticed a competitor producing strikingly similar outputs. They had no way to know how, or when the exposure happened, because they had no audit trail. The prompt had been shared with an outsourced development team during an integration project. Beyond that, they had no visibility.

This is not an unusual story. It is becoming common. And the legal and technical frameworks to address it have not kept pace.

What a prompt actually is, legally

The intellectual property status of AI prompts is genuinely unsettled law. As of early 2025, there is no clear judicial precedent in most jurisdictions. But several legal frameworks offer partial protection, and understanding them clarifies where the real risk lies.

A system prompt is most naturally analysed as a trade secret under frameworks like the US Defend Trade Secrets Act or the EU Trade Secrets Directive. To qualify, the information must derive economic value from not being generally known, and you must take reasonable measures to keep it secret. The second requirement is where most organisations fail — they have valuable prompts but no mechanism for "reasonable measures."

Prompts may also contain protectable expression as literary works under copyright law, though the threshold for originality is contested for short, functional texts. More interesting is the trade secret angle applied not to the text of a prompt, but to the reasoning patterns and heuristics it encodes — these are closer to algorithms than to documents, and algorithmic trade secrets are well-established.

The legal question "is this prompt protected IP?" is less important than the practical question "if this prompt were extracted by a competitor, what would we have lost?" For sophisticated domain-specific prompts, the answer is often: significant competitive advantage.

The investment you're not accounting for

Consider how a high-value enterprise prompt gets built. It starts with a vague goal — "make our AI assistant better at analysing contracts." Then there are weeks of iteration: testing, measuring, refining. Senior domain experts are pulled in to encode their knowledge. Edge cases are discovered and handled. The prompt develops structure, then nuance, then something that starts to feel like institutional intelligence.

The cost of building a sophisticated system prompt for a complex domain is often measured in months and senior engineering time. The prompt for a fraud detection workflow might represent six months of work by a team that combines ML expertise with deep domain knowledge. That is not a text file — that is an asset.

But it is usually stored and managed like a text file. In a repository, probably. Maybe with some version control. Shared freely within the engineering team. And when vendors are onboarded, shared with them too — because what choice is there? You need the vendor to produce outputs consistent with your standards, which means they need the context that produces those outputs.

The gap between the asset's value and its protection is striking. Most enterprises have more rigorous controls around their AWS credentials than around their most valuable AI prompts.

How prompt IP leaks today

There are four primary leak vectors, each operating differently:

Direct transmission to vendors. The most obvious. You share the prompt because the vendor needs it to do their work. The NDA you have in place is the only protection, and NDAs are reactive: they establish liability after a breach, but they do not prevent or detect one.

AI tool context indexing. A vendor developer's AI coding assistant indexes everything in the repository, including prompt files. The developer may never have consciously shared the prompt; the tool did it automatically. That assistant is now working with full knowledge of your proprietary patterns.

Output pattern extraction. Even if a developer never sees a prompt directly, working extensively with AI outputs shaped by that prompt teaches them the patterns. This is harder to characterise as a traditional IP violation, but the economic harm is the same.

Embedding in code and documentation. Prompt fragments embedded in code comments, configuration files, or documentation are often more accessible than the prompt itself. Teams routinely embed abbreviated versions of prompts as inline documentation — a practice that serves clarity but creates unintended exposure.

What protection actually looks like

Effective protection for prompt IP requires addressing the transmission problem, not just the legal problem. A more aggressive NDA does not fix any of the technical leak vectors described above.

The key insight is that you need to separate the benefit of the context from the context itself. The vendor's developers need AI-assisted development that is aligned with your standards. They do not need to know what your standards are, or how you've encoded them.

This separation requires a compilation step — the same way you might provide a vendor with a compiled library rather than source code. The compiled form is useful; it does what the source does. But it cannot be trivially reverse-engineered into the original.

For AI context, this means: compile the prompt into a form that informs AI completions without being readable or extractable by the developer or their tools. Scope the compiled form to the project and role. Sign it so tampering is detectable. Expire it so the exposure window is bounded.
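To make the signing, scoping, and expiry concrete, here is a minimal sketch in Python. Everything in it is illustrative: the bundle format, the field names, and the use of HMAC with a shared key are assumptions for the example, not any particular product's design, and a real system would emit an opaque compiled payload rather than the raw prompt text carried here.

```python
import base64
import hashlib
import hmac
import json
import time

SECRET_KEY = b"replace-with-a-real-signing-key"  # hypothetical signing key


def compile_context(prompt_text: str, project: str, role: str, ttl_seconds: int) -> str:
    """Package a prompt into a scoped, signed, expiring bundle.

    The payload here is just the raw text for illustration; a real
    compiler would emit a non-readable representation instead.
    """
    payload = {
        "project": project,                          # scope: which project may use it
        "role": role,                                # scope: which role may use it
        "expires": int(time.time()) + ttl_seconds,   # bounded exposure window
        "context": prompt_text,
    }
    body = base64.urlsafe_b64encode(json.dumps(payload).encode()).decode()
    sig = hmac.new(SECRET_KEY, body.encode(), hashlib.sha256).hexdigest()
    return body + "." + sig


def verify_context(token: str, project: str, role: str) -> dict:
    """Reject tampered, out-of-scope, or expired bundles before use."""
    body, sig = token.rsplit(".", 1)
    expected = hmac.new(SECRET_KEY, body.encode(), hashlib.sha256).hexdigest()
    if not hmac.compare_digest(sig, expected):
        raise ValueError("signature mismatch: bundle was tampered with")
    payload = json.loads(base64.urlsafe_b64decode(body))
    if payload["project"] != project or payload["role"] != role:
        raise ValueError("bundle is scoped to a different project or role")
    if time.time() > payload["expires"]:
        raise ValueError("bundle has expired")
    return payload
```

The point of the shape, not the crypto: the consuming tool calls `verify_context` before injecting anything into a completion, so an out-of-scope or stale bundle simply stops working rather than relying on a contract to deter its reuse.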

Practical steps for this quarter

Without getting into vendor recommendations, there are four things any engineering leader can do immediately to improve prompt IP hygiene:

Inventory your prompt assets. Most organisations don't know exactly where their prompts are, which are high-value, and which are shared with vendors. A one-day audit across repositories and AI tool configurations usually produces surprises. Start there.
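A one-day audit can start with something as simple as the following sketch, which walks a repository and flags files that look like prompt assets. The filename and content heuristics are assumptions chosen for illustration; tune them to your own naming conventions.

```python
import re
from pathlib import Path

# Heuristic signals for prompt assets (illustrative, not exhaustive)
NAME_HINTS = re.compile(r"(prompt|system[_-]?message|persona|instruction)", re.I)
CONTENT_HINTS = re.compile(r"^(You are|Act as|System:)", re.I | re.M)
SCAN_SUFFIXES = {".txt", ".md", ".json", ".yaml", ".yml", ".py"}


def inventory_prompts(repo_root: str) -> list[dict]:
    """Walk a repository and flag files that look like prompt assets."""
    findings = []
    for path in Path(repo_root).rglob("*"):
        if not path.is_file() or path.suffix not in SCAN_SUFFIXES:
            continue
        try:
            text = path.read_text(encoding="utf-8", errors="ignore")
        except OSError:
            continue  # unreadable file; skip rather than fail the audit
        name_hit = bool(NAME_HINTS.search(path.name))
        content_hit = bool(CONTENT_HINTS.search(text))
        if name_hit or content_hit:
            findings.append({
                "path": str(path),
                "signal": "filename" if name_hit else "content",
                "bytes": path.stat().st_size,
            })
    return findings
```

Run it across every repository and AI tool configuration directory you have; the surprises are usually in the repositories nobody thought contained prompts.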

Apply data classification. Prompts should be classified with the same rigour as other sensitive assets. A system prompt encoding your fraud detection logic is not a "medium" sensitivity document — it is likely "high" or "critical." Your DLP policies should reflect this.
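In code, that classification can be as plain as a lookup that maps each prompt asset to a handling rule, with unknown assets defaulting to the restrictive tier. The asset names and tier labels below are hypothetical placeholders.

```python
# Illustrative sensitivity tiers for prompt assets (names are assumptions)
CLASSIFICATION = {
    "demo-assistant": "low",
    "internal-helpdesk": "medium",
    "contract-analysis": "high",
    "fraud-detection": "critical",  # encodes core detection logic
}

HANDLING_RULES = {
    "low": "share freely",
    "medium": "internal only",
    "high": "block external transmission; log access",
    "critical": "block external transmission; require approval; log access",
}


def dlp_action(asset: str) -> str:
    """Map an asset's classification to its handling rule.

    Unclassified assets default to 'high' so the policy fails closed.
    """
    tier = CLASSIFICATION.get(asset, "high")
    return HANDLING_RULES[tier]
```

The default matters more than the table: a prompt nobody classified should be treated as sensitive until someone argues otherwise, not the reverse.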

Audit vendor access. For any vendor who has touched your AI tooling configurations or prompt files, document what they had access to and for how long. This is the baseline you need before you can start improving.

Establish a transmission policy. Before the next vendor onboarding, decide: what context should vendors receive, in what form, with what controls? The answer "the same way we've always done it" is no longer acceptable for high-value assets.

The bottom line: Prompt engineering is becoming a core enterprise capability. The organisations that treat their prompt assets with the same rigour as their source code and their customer data will retain their AI-driven competitive advantages. Those that don't will discover what they've lost only when it's too late to matter.