Your AI Coding Tools Are Leaking Your API Keys
TL;DR: AI coding assistants — Claude Code, Cursor, Copilot, Windsurf, Cline, Aider — read every file in your project to build context. That includes .env files, MCP server configs with embedded tokens, and terminal history with pasted keys. Credential values that enter a context window get sent to a remote API for inference. The fix: stop storing secrets where AI can read them. Use environment variable references, encrypted backends, and runtime injection. One command: npx secretless-ai init
The Exposure Surface
AI coding assistants need broad file access to be useful. When you ask Claude Code to refactor a module, it reads your source files, package configs, and documentation. When Cursor generates a database query, it reads your schema files and connection setup. This is the expected behavior that makes these tools productive.
The side effect is that these tools also read files that contain credentials. They do not distinguish between a source file and a .env file — both are text files in the project directory. Here is what a typical project exposes:
.env / .env.local / .env.production: API keys for Stripe, OpenAI, Supabase, AWS. Database connection strings with passwords.
claude_desktop_config.json / .cursor/mcp.json: MCP server configurations with plaintext API keys in the env field.
~/.zshrc / ~/.bashrc / ~/.zsh_history: Export statements and command history containing credentials.
config.yaml / settings.json / docker-compose.yml: Application configs with embedded tokens, webhook secrets, and service credentials.
Every file listed above is a standard part of development workflows. Developers have used .env files for years — they were designed for local use before AI tools existed. The problem is not that developers are careless. The problem is that the toolchain changed, and credential storage practices have not caught up.
Three Leakage Vectors
1. Context Window Ingestion
When an AI assistant opens your project, it indexes files to understand your codebase. If .env is in the project root, the AI reads it to “understand your configuration.” The key values enter the context window and get sent to the inference API.
Once in the context, credential values can appear in several places: API request logs on the provider side, model training feedback loops (for tools that train on user interactions), and autocomplete suggestions shown to you or other users.
# What the AI sees when it reads your .env
OPENAI_API_KEY=sk-proj-abc123...def456
STRIPE_SECRET_KEY=sk_live_51Hx...9kZ
DATABASE_URL=postgres://admin:P@ssw0rd@db.example.com:5432/prod
AWS_SECRET_ACCESS_KEY=wJalrXUtnFEMI/K7MDENG/bPxRfiCY...

These values are now part of the conversation context. The provider's API processed them. Depending on the provider's data retention policies, they may persist in logs.
2. Terminal History
AI coding tools with terminal access can read your shell history. Every export statement and every curl command with an Authorization header is recorded in your history file.
# Commands in your shell history
$ export OPENAI_API_KEY=sk-proj-abc123...def456
$ curl -H "Authorization: Bearer sk-proj-abc123..." https://api.openai.com/v1/models
$ psql "postgres://admin:P@ssw0rd@db.example.com/prod"

When the AI reads ~/.zsh_history or ~/.bash_history to understand your workflow, it ingests every credential you have typed or pasted into your terminal.
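Leaks like these are easy to find mechanically. The sketch below is a hypothetical helper, not part of any tool mentioned in this article; it flags lines that match a few well-known credential shapes, and real scanners ship far larger pattern sets.

```javascript
// scan-history.js -- sketch: flag history or config lines that look like
// credentials. The regexes cover a few common key prefixes and are
// illustrative only, not an exhaustive detection set.
const PATTERNS = [
  { name: "OpenAI key", re: /sk-proj-[A-Za-z0-9_-]{10,}/ },
  { name: "Stripe live key", re: /sk_live_[A-Za-z0-9]{10,}/ },
  { name: "GitHub PAT", re: /ghp_[A-Za-z0-9]{10,}/ },
  { name: "URL with embedded password", re: /:\/\/[^\s:@]+:[^\s@]+@/ },
];

function scanLines(lines) {
  const hits = [];
  lines.forEach((line, i) => {
    for (const { name, re } of PATTERNS) {
      // Record the 1-based line number and which pattern matched.
      if (re.test(line)) hits.push({ line: i + 1, name });
    }
  });
  return hits;
}
```

Running this over `history` output or a `.env` file before an AI session shows exactly what an assistant would ingest.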
3. MCP Server Configurations
The Model Context Protocol (MCP) connects AI assistants to external tools — databases, APIs, cloud services. Each MCP server needs credentials to authenticate. These credentials are stored as plaintext values in JSON configuration files.
// claude_desktop_config.json
{
  "mcpServers": {
    "stripe": {
      "command": "npx",
      "args": ["-y", "@stripe/mcp"],
      "env": {
        "STRIPE_SECRET_KEY": "sk_live_51Hx...9kZ"
      }
    },
    "github": {
      "command": "npx",
      "args": ["-y", "@modelcontextprotocol/server-github"],
      "env": {
        "GITHUB_PERSONAL_ACCESS_TOKEN": "ghp_abc123..."
      }
    }
  }
}

The AI assistant reads these configs to discover which tools are available. In doing so, it reads the API keys alongside the tool definitions. This is a structural problem with how MCP configs are currently designed — tool metadata and secrets live in the same file.
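One way to separate tool metadata from secrets is to keep placeholder references like ${STRIPE_SECRET_KEY} in the JSON and resolve them from the environment only at launch time. The function below is a hypothetical launcher-wrapper sketch of that pattern, not a built-in MCP feature; some clients support similar expansion natively, so check your tool's documentation first.

```javascript
// expand-config.js -- hypothetical pattern: the config on disk holds
// "${VAR}" placeholders instead of key material; a wrapper substitutes real
// values from the process environment just before spawning the MCP servers.
function expandPlaceholders(config, env) {
  const json = JSON.stringify(config);
  const expanded = json.replace(/\$\{([A-Z0-9_]+)\}/g, (match, name) => {
    if (!(name in env)) {
      throw new Error(`Missing environment variable: ${name}`);
    }
    // Sketch only: a robust version would JSON-escape the value.
    return env[name];
  });
  return JSON.parse(expanded);
}
```

With this pattern the file an AI assistant reads contains only variable names, never values.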
What This Looks Like in Practice
These are not theoretical concerns. Here are concrete scenarios that occur during normal AI-assisted development:
Scenario 1: Debug logging with real credentials
You ask the AI to add debug logging to your API client. It has seen your .env file and generates code that logs the request headers, including the actual Bearer token value from your environment, directly into the console output.
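A defensive habit that blunts this scenario is routing header logging through a helper that masks sensitive values before anything reaches the console. A minimal sketch; the header list and the truncation length are assumptions to extend for your own stack:

```javascript
// redact.js -- sketch: mask likely-sensitive HTTP header values before
// logging. The list of sensitive header names is an assumption; add any
// custom auth headers your services use.
const SENSITIVE = /^(authorization|x-api-key|cookie)$/i;

function redactHeaders(headers) {
  const safe = {};
  for (const [key, value] of Object.entries(headers)) {
    // Keep a short prefix for debugging, drop the rest of the value.
    safe[key] = SENSITIVE.test(key)
      ? String(value).slice(0, 8) + "...[redacted]"
      : value;
  }
  return safe;
}
```

Logging `redactHeaders(req.headers)` instead of `req.headers` keeps token values out of console output and log files.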
Scenario 2: Curl command with a live token
You ask for help testing an API endpoint. The AI generates a curl command using the actual API key it read from your config, rather than a placeholder. If you copy-paste this into a shared document or Slack thread, the key is exposed further.
Scenario 3: Dockerfile that copies .env
You ask the AI to containerize your application. It generates a Dockerfile with COPY .env . because it observed your app reads from that file at startup. Your credentials are now baked into the container image and will be pushed to a registry.
In each case, the AI is trying to be helpful. It uses the information it has access to, and credentials happen to be in that information. The solution is to remove credentials from the places the AI reads, not to stop using AI tools.
The Fix: A Secretless Approach
The principle is straightforward: credentials should never exist as plaintext in files that AI tools can read. Instead, use references that point to credentials stored in secure backends, and inject the actual values only at runtime.
1. Environment Variable References, Not Inline Values
Your code should reference credential names, never credential values. When the AI reads your source code, it sees a variable name — not a secret.
Before: hardcoded value

const stripe = new Stripe(
  "sk_live_51Hx...9kZ"
);

After: environment reference

const stripe = new Stripe(
  process.env.STRIPE_SECRET_KEY
);

The AI sees process.env.STRIPE_SECRET_KEY and knows the variable name, but never learns the value. It can still generate correct code that uses the credential.
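One refinement worth adding: fail fast when the variable is missing, so a skipped injection step surfaces as a clear startup error instead of a confusing auth failure downstream. A small hypothetical helper:

```javascript
// env.js -- sketch: read a credential by name and throw immediately when it
// is unset, so misconfiguration is caught at startup.
function requireEnv(name) {
  const value = process.env[name];
  if (!value) {
    throw new Error(
      `${name} is not set. Start the app through your secret injector instead.`
    );
  }
  return value;
}
```

Usage stays one line: `const stripe = new Stripe(requireEnv("STRIPE_SECRET_KEY"));`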
2. Encrypted Backend Storage
Instead of .env files, store credentials in a backend the AI cannot access: the OS keychain, an encrypted local store, or a password manager like 1Password.
# Store a secret in the encrypted backend
$ npx secretless-ai secret set STRIPE_SECRET_KEY
Enter value: ********
Stored STRIPE_SECRET_KEY (encrypted)
# Use OS keychain for hardware-backed encryption
$ npx secretless-ai backend set keychain
Backend: macOS Keychain (hardware-backed on Apple Silicon)

Secrets stored in the OS keychain are protected by your system login credentials and, on Apple Silicon, by the Secure Enclave hardware. The AI tool has no mechanism to read from the keychain directly.
3. File Blocking with .secretlessrc
Even with secrets in a backend, legacy .env files may still exist in the project: checked in by other team members, referenced by CI configs, or left over mid-migration. Block the AI from reading them.
# Secretless blocks these patterns from AI context
.env
.env.*
*.key
*.pem
*.p12
.aws/credentials
claude_desktop_config.json
.cursor/mcp.json

For Claude Code, this is enforced with hooks that intercept file reads before they execute. For other tools, instruction files tell the AI to skip these patterns. The result: the AI never sees credential values, even if the files are present.
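The core of such blocking is a path matcher that checks each requested file against the deny list before the read happens. A simplified sketch (it handles only literal names, a single * wildcard, and literal path patterns, which covers the list above):

```javascript
// deny.js -- sketch: decide whether a file path matches a deny pattern.
const DENY = [
  ".env", ".env.*", "*.key", "*.pem", "*.p12",
  ".aws/credentials", "claude_desktop_config.json", ".cursor/mcp.json",
];

// Convert a simple glob (literal text plus "*") into an anchored RegExp.
function globToRegExp(pattern) {
  const escaped = pattern.replace(/[.+^${}()|[\]\\]/g, "\\$&");
  return new RegExp("^" + escaped.replace(/\*/g, ".*") + "$");
}

function isBlocked(filePath, patterns = DENY) {
  const base = filePath.split("/").pop();
  return patterns.some((p) =>
    p.includes("/")
      ? filePath === p || filePath.endsWith("/" + p) // literal path pattern
      : globToRegExp(p).test(base) // glob against the basename
  );
}
```

A read hook would call `isBlocked(path)` and refuse the read (or return a redacted stub) when it matches.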
4. Runtime Injection
Credentials are decrypted and injected as environment variables only when your application starts. They exist in process memory during execution, but never at rest in project files.
# Inject secrets and start your app
$ npx secretless-ai run -- npm start
# Inject only what is needed (least privilege)
$ npx secretless-ai run --only DATABASE_URL -- npm run migrate
# Works with any command
$ npx secretless-ai run -- python app.py

The AI tool can observe that your application started successfully, but it cannot read the injected environment variables. The credential values are never written to disk and never enter the AI context window.
Detection: Find Exposed Credentials
Before migrating, identify where credentials currently live in your project. Two tools help with this.
opena2a review
The opena2a review command scans your project for hardcoded credentials and scores your security posture. It detects API keys, tokens, passwords in config files, and credentials in Docker and CI configurations.
$ npx opena2a review
OpenA2A Security Review
Credential Scan
WARN .env contains 4 API keys
WARN docker-compose.yml has inline DATABASE_URL
WARN claude_desktop_config.json has 2 plaintext tokens
Score: 52/100
Recoverable: +31 by moving credentials to encrypted storage

secretless-ai verify
After setting up Secretless, verify that your configuration is correct and all AI tools are protected:
$ npx secretless-ai verify
Secretless Verification
PASS Claude Code hooks installed
PASS .cursorrules contains deny patterns
PASS Copilot instructions configured
PASS No .env files in project root
PASS MCP configs encrypted
All checks passed. Credentials are protected from AI context.

Migration Path: Before and After
Migrating from hardcoded credentials to a secretless approach takes a few minutes per project. Here is the full workflow:
# Step 1: Initialize Secretless (auto-detects AI tools)
$ npx secretless-ai init
# Step 2: Import existing .env secrets to encrypted backend
$ npx secretless-ai import --detect
Found .env with 4 secrets
Imported: OPENAI_API_KEY, STRIPE_SECRET_KEY, DATABASE_URL, SENTRY_DSN
# Step 3: Encrypt MCP server credentials
$ npx secretless-ai protect-mcp
Encrypted 3 secrets across 2 MCP servers
# Step 4: Remove the .env file (secrets are now in the backend)
$ rm .env
# Step 5: Verify everything works
$ npx secretless-ai verify
All checks passed.
# Step 6: Run your app with injected secrets
$ npx secretless-ai run -- npm start

After migration, your project directory looks like this:

Before: credentials in the open. A plaintext .env file and MCP configs with inline tokens sit in the project, readable by any tool.
After: credentials in encrypted backend. Source files and configs contain only variable names; values live in the encrypted store and are injected at runtime.
Your workflow does not change. You still use environment variables in your code. The difference is where the values come from: an encrypted backend instead of a plaintext file.
Protect Your Credentials in 10 Seconds
Works with Claude Code, Cursor, Copilot, Windsurf, Cline, and Aider. No config changes to your application code.
npx secretless-ai init