The Zero-Trust Identity Layer for MCP & Autonomous Agents
Implementations: Go
AIP (Agent Identity Protocol) is an open-source standard for authentication, attestation, authorization, and governance of AI agents. It's the IAM standard for AI.
Today, agents are granted full permissions to API keys, secrets, and system resources, running as the user with no distinction between human and non-human actions. As the line between human and autonomous-agent actions becomes increasingly blurred, this creates serious risks – not just at a security level, but at a legal, societal, and economic level.
AIP is being built and proposed to the IETF to provide a universal standard for identity in the Internet of Agents (IoA) – so that anyone, anywhere, can build secure agents and gain full visibility with confidence.
There is no universal way to distinguish an AI agent from a human actor. When you connect Claude, Cursor, or any MCP-compatible agent to your systems, it receives god mode: full access to every tool the server exposes, with the same credentials as the user.
Model safety isn't enough. Attacks like Indirect Prompt Injection (demonstrated by the GeminiJack vulnerability) have proven that adversarial instructions embedded in documents, emails, or data can hijack agent behavior. The model believes it's following your intent while executing an attacker's commands.
Your agent is one poisoned PDF away from `rm -rf /`.
Beyond security, agents operating without identity create systemic gaps:
- No audit trail – actions taken by agents are indistinguishable from human actions in logs
- No revocation – once an agent has credentials, there is no standard way to revoke them
- No authorization granularity – access is all-or-nothing at the API key level
- Compliance blind spots – SOC 2, GDPR, HIPAA, and SOX requirements go unmet for agentic actions
"Authentication is for Users. AIP is for Agents."
AIP is built on two layers that work together. Layer 1 establishes who the agent is. Layer 2 decides what it's allowed to do. The Agent Authentication Token (AAT) is the bridge. It's issued by Layer 1, enforced by Layer 2.
The current Go implementation of AIP introduces policy-based authorization at the tool-call layer – the missing security primitive between your agents and your infrastructure. Try it for yourself.
```
 LAYER 1 – IDENTITY                      LAYER 2 – ENFORCEMENT
 (Who is this agent?)                    (What can it do?)

 ┌───────────────────┐                   ┌───────────────────┐
 │   Root Registry   │ (AIP Authority)   │     AI Client     │
 │    Signs Agent    │                   │  Cursor / Claude  │
 │    Certificates   │                   └─────────┬─────────┘
 └─────────┬─────────┘                             │ tool call + AAT
           │ Issues Attestation                    ▼
           ▼                             ┌─────────────────────────┐
 ┌───────────────────┐                   │        AIP Proxy        │
 │  Agent Identity   │                   │                         │
 │   (Public Key)    │                   │ 1. Verify AAT signature │◀── AIP Registry
 └─────────┬─────────┘                   │ 2. Check token claims   │    (revocation)
           │ Signs Token Requests        │ 3. Evaluate policy      │
           ▼                             │ 4. DLP scan             │
 ┌───────────────────┐                   │ 5. Audit log            │
 │   Token Issuer    │                   └─────────┬───────────────┘
 │   Validates ID    │        AAT                  │ ✅ ALLOW / 🔴 DENY
 │   Issues AAT      │ ──────────────────▶         ▼
 └───────────────────┘                   ┌───────────────────┐
                                         │     Real Tool     │
                                         │  Docker/Postgres  │
                                         │   GitHub / etc.   │
                                         └───────────────────┘
```
The AAT is what connects the two layers. It carries signed claims about the agent: who issued its identity, which user it's acting on behalf of, what capabilities it declared, and when it was issued. The proxy in Layer 2 doesn't just check a static YAML allowlist; it verifies the cryptographic signature on the AAT, checks those claims against policy, and only then permits the tool call.
This means:
- A hijacked agent fails at Layer 2 – its AAT claims don't match the attempted action
- A revoked agent fails at Layer 2 – the proxy checks the registry revocation list on every call
- A legitimate agent passes through both layers with a full audit trail tied to its identity
AIP establishes cryptographic identities for AI agents. Before an agent can act, it obtains an AAT from the Token Issuer: a signed token tied to both the agent's key pair and the end-user's identity.
Security model:
- Root of Trust – AIP registry holds the issuer private key and signs agent certificates
- Agent Key Pair – each agent generates its own keys; the private key never leaves the agent
- AAT Claims – token encodes agent ID, user binding, capabilities, expiry, and issuer
- Revocation – registry maintains a revocation list checked by the proxy at runtime
AIP also operates as a transparent proxy between the AI client (Cursor, Claude, VS Code) and the MCP tool server. Every tool call passes through the policy engine before reaching the real tool. Today the proxy enforces YAML-defined policy. As Layer 1 matures, policy decisions will be driven by claims inside the AAT itself – moving from static configuration to cryptographically grounded authorization.
```mermaid
graph LR
    subgraph Client["🤖 AI Client"]
        A[Cursor / Claude Desktop]
    end
    subgraph AIP["🛡️ AIP Proxy (Sidecar)"]
        B[Policy Engine]
        C[DLP Scanner]
        D[Audit Log]
    end
    subgraph Server["🔧 Real Tool"]
        E[Docker / Postgres / GitHub]
    end
    A -->|"tools/call"| B
    B -->|"✅ ALLOW"| E
    B -->|"🔴 DENY"| A
    B --> C
    C --> D
    E -->|"response"| C
    C -->|"filtered"| A
    style B fill:#22c55e,stroke:#16a34a,stroke-width:2px,color:#fff
    style AIP fill:#f0fdf4,stroke:#16a34a,stroke-width:3px
```
When an injected prompt attempts to execute a dangerous operation, AIP intercepts and blocks it before the tool ever receives the request.
```mermaid
sequenceDiagram
    participant Agent as 🤖 Agent (Hijacked)
    participant AIP as 🛡️ AIP Proxy
    participant Policy as 📋 agent.yaml
    participant Tool as 🔧 Real Tool
    Agent->>AIP: tools/call "delete_database"
    AIP->>Policy: Check allowed_tools
    Policy-->>AIP: ❌ Not in allowlist
    AIP->>AIP: 🔴 Decision: DENY
    AIP-->>Agent: Error: -32001 Permission Denied
    Note over Tool: ⚠️ Never receives request
    Note over AIP: 📝 Logged to audit trail
```
On every tool call, the proxy:
- Verifies the AAT signature against the AIP registry public key
- Checks token claims (agent ID, user binding, expiry) against policy
- Allows, denies, or escalates to a human based on the tool and arguments
- DLP-scans both the request and the response for sensitive data
- Writes an immutable audit log entry tied to the agent's verified identity
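These checks can be sketched as an ordered pipeline that short-circuits on the first failure. The function and parameter names below are illustrative stand-ins, not the real proxy's API; DLP scanning and audit logging are omitted for brevity.

```go
package main

import "fmt"

// Call is a simplified tool call as seen by the proxy.
type Call struct {
	Tool    string
	AgentID string
}

// decide runs the per-call checks in order: AAT signature,
// registry revocation, then the policy allowlist. Default deny.
func decide(c Call, sigValid bool, revoked, allowed map[string]bool) string {
	if !sigValid {
		return "DENY: bad AAT signature"
	}
	if revoked[c.AgentID] {
		return "DENY: agent revoked"
	}
	if !allowed[c.Tool] {
		return "DENY: tool not in policy"
	}
	return "ALLOW"
}

func main() {
	allowed := map[string]bool{"read_file": true}
	revoked := map[string]bool{"agent-evil": true}

	fmt.Println(decide(Call{"read_file", "agent-1"}, true, revoked, allowed))
	fmt.Println(decide(Call{"delete_database", "agent-1"}, true, revoked, allowed))
	fmt.Println(decide(Call{"read_file", "agent-evil"}, true, revoked, allowed))
}
```

The ordering matters: a revoked or forged identity is rejected before policy is even consulted, so Layer 2 never spends effort on a caller Layer 1 cannot vouch for.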
- Language Agnostic – supports agents written in Python, JavaScript, Go, Java, Rust, and more
- Zero Trust – no implicit trust between agents or based on network location
- Minimal Overhead – fast token verification without centralized bottlenecks
- Compliance Ready – generates audit trails that satisfy SOC 2, GDPR, HIPAA, and SOX
- Developer Friendly – simple SDK integration that works locally without infrastructure
| Term | Definition |
|---|---|
| Agent | An autonomous AI system that makes decisions and performs actions |
| Agent Identity Document (AID) | JSON structure defining an agent's cryptographic identity |
| Agent Authentication Token (AAT) | A signed token proving agent identity at runtime |
| Registry | Central directory of registered agents, permissions, capabilities, and federation |
| Token Issuer | Service that generates and signs AATs |
| Resource Server | API or system that agents request access to |
| Policy Engine | Runtime component that evaluates every tool call against defined policy |
| Feature | Standard MCP | API Keys | AIP |
|---|---|---|---|
| Agent Identity | ❌ | ❌ | ✅ Per-agent cryptographic identity |
| Prompt Injection | ❌ | ❌ | ✅ Policy blocks unauthorized intent |
| Authorization Granularity | ❌ | ❌ | ✅ Per-tool, per-argument validation |
| Audit Trail | ❌ | ❌ | ✅ Immutable JSONL per action |
| Human-in-the-Loop | ❌ | ❌ | ✅ Native OS approval dialogs |
| Revocation | ❌ | ❌ | ✅ Registry revocation list |
| Data Exfiltration | ❌ | ❌ | ✅ DLP scanning + egress filtering |
| Compliance | ❌ | ❌ | ✅ SOC 2, GDPR, HIPAA, SOX ready |
AIP and workforce AI governance tools solve different problems at different layers:
| Aspect | Workforce AI Governance | AIP |
|---|---|---|
| Focus | Employee AI usage monitoring | Agent action authorization |
| Layer | Network/application level | Tool-call level |
| Question | "Who in my org is using AI?" | "What can my AI agents do?" |
| Deployment | Typically SaaS | Open protocol, self-hosted |
| Use Case | Audit employee ChatGPT usage | Block agent from deleting databases |
These are complementary: Use workforce governance to monitor employee AI usage. Use AIP to secure the agents those employees build.
| Aspect | OAuth | AIP |
|---|---|---|
| Granularity | Scope-level ("repo access") | Action-level ("repos.get with org:X") |
| Timing | Grant-time | Runtime (every call) |
| Audience | End users | Developers/Security teams |
| Format | Token claims | YAML policy files |
OAuth answers "who is this?"; AIP answers "should this specific action be allowed?"
When an agent attempts a dangerous operation, AIP blocks it immediately:
```json
{
  "jsonrpc": "2.0",
  "id": 1,
  "error": {
    "code": -32001,
    "message": "Permission Denied: Tool 'delete_database' is not allowed by policy"
  }
}
```

What just happened?

- Agent (possibly hijacked by prompt injection) tries to call `delete_database`
- AIP policy engine checks the `allowed_tools` list
- Tool not found – request blocked before reaching your infrastructure
- Attempt logged to audit trail for forensic analysis
Your database never received the request. This is zero-trust authorization in action.
Secure any MCP tool server in one command:
```shell
# Secure your local Docker MCP
aip wrap docker --policy ./policies/read-only.yaml
```

Or protect your existing configuration:

```shell
# Start the AIP proxy with your policy
aip --target "python mcp_server.py" --policy ./agent.yaml

# Generate Cursor IDE configuration
aip --generate-cursor-config --policy ./agent.yaml --target "npx @mcp/server"
```

An example policy:

```yaml
apiVersion: aip.io/v1alpha1
kind: AgentPolicy
metadata:
  name: secure-agent
spec:
  mode: enforce
  allowed_tools:
    - read_file
    - list_directory
    - git_status
  tool_rules:
    - tool: write_file
      action: ask    # Human approval required
    - tool: exec_command
      action: block  # Never allowed
  dlp:
    patterns:
      - name: "AWS Key"
        regex: "AKIA[A-Z0-9]{16}"
```

We're building a standard, not just a tool.
- v0.1: Localhost Proxy – The "Little Snitch" for AI Agents
  - Tool allowlist enforcement
  - Argument validation with regex
  - Human-in-the-Loop (macOS, Linux)
  - DLP output scanning
  - JSONL audit logging
  - Monitor mode
- v0.2: Kubernetes Sidecar – The "Istio" for AI Agents
  - Helm chart
  - NetworkPolicy integration
  - Prometheus metrics
- v1.0: OIDC / SPIFFE Federation – Enterprise Identity
  - Workload identity federation
  - Centralized policy management
  - Multi-tenant audit aggregation
| Resource | Description |
|---|---|
| AIP Specification | Formal protocol definition (v1alpha1) |
| Policy Reference | Complete YAML schema |
| Go Proxy README | Reference implementation |
| Quickstart Guide | 5-minute tutorial |
| Why AIP? | Threat model and design rationale |
| FAQ | Common questions |
| Language | Repository | Status |
|---|---|---|
| Go | aip-go | ✅ Stable |
| Rust | aip-rust | 🚧 Coming Soon |
Want to build an AIP implementation in another language? See CONTRIBUTING.md.
AIP is an open specification. We welcome:
- Protocol feedback – Issues and PRs to the spec
- New implementations – Build AIP in Rust, TypeScript, Python
- Security research – Threat modeling, attack surface analysis
- Documentation – Tutorials, examples, integrations
See CONTRIBUTING.md for guidelines.
Apache 2.0 – see LICENSE
Enterprise-friendly. Use it, fork it, build on it.
For vulnerability reports, see SECURITY.md.
Stop trusting your agents. Start verifying them.