How to Protect API Keys from AI Coding Tools (Without Breaking Your Workflow)
TL;DR: AI coding tools read .env files, MCP server configs, and shell profiles to build context, and everything they read gets sent to remote model APIs, including your API keys. Secretless AI blocks secret files from AI context, stores credentials in encrypted backends (local encrypted file, OS keychain, or 1Password), and injects them at runtime. One command: `npx secretless-ai init`
The Problem: AI Reads Everything in Your Project
Claude Code, Cursor, GitHub Copilot, Windsurf, Cline, and Aider all need access to your project files to provide useful suggestions. They read source code, config files, and documentation to build context. The problem: they also read files that contain secrets.
Consider what lives in a typical project directory:
- `.env`: API keys for Stripe, OpenAI, database URLs
- `.env.local`: Local overrides with production credentials
- `claude_desktop_config.json`: MCP server configs with plaintext API keys
- `~/.zshrc` / `~/.bashrc`: Export statements with credentials
- `.aws/credentials`: AWS access keys and secret keys

Once a secret enters an AI tool's context window, it gets sent to a remote API for processing. You cannot take it back. The secret is now in the provider's logs, training pipeline, or at minimum their inference infrastructure.
The solution is not to stop using AI coding tools. The solution is to prevent secrets from entering the context window in the first place, and to store them somewhere the AI cannot reach.
Block Secrets from AI Context
Secretless AI auto-detects which AI tools you use and installs the right protections for each one. Each tool gets a different mechanism because each tool has different extension points.
```
$ npx secretless-ai init

Secretless v0.10.0
Keeping secrets out of AI

Detected:
  + Claude Code
  + Cursor
  + GitHub Copilot

Configured:
  * Claude Code (PreToolUse hook + deny rules)
  * Cursor (.cursorrules)
  * GitHub Copilot (.github/copilot-instructions.md)

Created:
  + .claude/hooks/secretless-guard.sh
  + CLAUDE.md

Modified:
  ~ .claude/settings.json
  ~ .cursorrules
  ~ .github/copilot-instructions.md

Done. Secrets are now blocked from AI context.
```

Claude Code gets the strongest protection. It supports hooks: shell scripts that run before every file read, grep, glob, and bash command. The hook checks whether the operation would access a secret file or dump a secret variable. If it would, the tool call is blocked before it executes. The AI never sees the content.
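A PreToolUse hook is just a script that receives the pending tool call as JSON on stdin and signals allow or deny with its exit status. The following is a minimal sketch of such a guard, not the actual contents of secretless-guard.sh; the file patterns are illustrative, and for testability it takes the JSON as an argument rather than reading stdin.

```shell
#!/bin/sh
# Sketch of a PreToolUse guard. Claude Code pipes the pending tool call
# to the hook as JSON and treats exit status 2 as "deny", blocking the
# call before the file content can reach the model.

# Takes the tool-call JSON as $1 (a real hook would read stdin,
# e.g. `guard "$(cat)"`). Returns 2 to block, 0 to allow.
guard() {
  case "$1" in
    *.env*|*.aws/credentials*|*claude_desktop_config.json*)
      echo "secretless: blocked access to a secret file" >&2
      return 2 ;;   # 2 = deny this tool call
  esac
  return 0          # anything else proceeds normally
}
```

Because the check runs before the tool call executes, a blocked read produces only the error message in the transcript, never the file contents.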
Cursor, Copilot, Windsurf, and Cline get instruction-based protection. Secretless writes rules into each tool's instruction file telling the AI to never read secret files and to reference credentials by environment variable name instead.
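For Cursor, the injected rules look something like this. This is a hypothetical excerpt for illustration; the exact wording Secretless writes may differ:

```
# Added to .cursorrules (illustrative excerpt)
- Never read, open, cat, or print the contents of .env, .env.*,
  .aws/credentials, or other files that contain credentials.
- Refer to credentials only by environment variable name
  (e.g. process.env.STRIPE_KEY), never by value.
- If a task requires a secret value, have the user run it via
  `npx secretless-ai run -- <command>` instead.
```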
Aider uses .aiderignore patterns, which work like .gitignore — files matching the patterns are excluded from the context window entirely.
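An equivalent .aiderignore might look like this (illustrative patterns; Secretless generates its own list):

```
# .aiderignore -- gitignore-style; matching files never enter context
.env
.env.*
*.pem
.aws/credentials
claude_desktop_config.json
```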
Store Secrets in Encrypted Backends
Blocking AI access is the first layer. The second layer is removing secrets from the places AI looks. Instead of .env files and shell profile exports, store secrets in an encrypted backend that AI tools cannot access.
Secretless supports three backends:
- **Local**: AES-256-GCM encrypted file on disk. Zero setup. Works everywhere Node.js runs. The default backend.
- **OS Keychain**: macOS Keychain or Linux Secret Service. Hardware-backed encryption on Apple Silicon. Authenticated by OS login.
- **1Password**: Dedicated vault via the `op` CLI. Biometric unlock (Touch ID). Service accounts for CI/CD. Cross-device sync.
```
$ npx secretless-ai secret set STRIPE_KEY=sk_live_51Hx...
Stored STRIPE_KEY

$ npx secretless-ai backend set 1password
Backend set to 1password

$ npx secretless-ai migrate --from local --to 1password
Migrating 4 secrets from local to 1password...
Done. All secrets migrated.
```

1Password for Teams and CI/CD
For teams, 1Password is the recommended backend. Every developer authenticates with biometrics (Touch ID / Windows Hello). Secrets sync across devices through 1Password vaults. In CI/CD, set the OP_SERVICE_ACCOUNT_TOKEN environment variable — same secrets, no code changes, no desktop app needed.
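In a GitHub Actions pipeline, that looks roughly like this. `OP_SERVICE_ACCOUNT_TOKEN` is 1Password's standard service-account variable; the rest of the step is an illustrative sketch, not generated by the tool:

```yaml
# Illustrative CI step: 1Password service account, no desktop app
- name: Run tests with injected secrets
  env:
    OP_SERVICE_ACCOUNT_TOKEN: ${{ secrets.OP_SERVICE_ACCOUNT_TOKEN }}
  run: npx secretless-ai run -- npm test
```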
Inject Secrets at Runtime
Secrets stored in backends need to get into your application somehow. The run command injects secrets as environment variables into any command. The AI tool can see the command output, but never the secret values themselves.
```
# Inject all secrets
$ npx secretless-ai run -- npm test

# Inject only specific keys
$ npx secretless-ai run --only STRIPE_KEY -- npm start

# Inject for a one-off API call
$ npx secretless-ai run --only DATABASE_URL -- npm run migrate
```

The `--only` flag restricts which secrets get injected, following the principle of least privilege. If your test suite only needs a database URL, don't give it your Stripe key.
Encrypt MCP Server Secrets
If you use MCP servers with Claude Desktop, Cursor, or VS Code, you have a problem that is easy to overlook: every MCP server config stores API keys as plaintext in JSON files. The LLM can read these files. Your Stripe key, GitHub token, and database credentials are sitting in a config the AI has access to.
```
$ npx secretless-ai protect-mcp

Secretless MCP Protection

Scanned 1 client(s)

  + claude-desktop/browserbase
      BROWSERBASE_API_KEY (encrypted)
  + claude-desktop/github
      GITHUB_PERSONAL_ACCESS_TOKEN (encrypted)
  + claude-desktop/stripe
      STRIPE_SECRET_KEY (encrypted)

3 secret(s) encrypted across 3 server(s).
MCP servers will start normally -- no workflow changes needed.
```

Here is what the config looks like before and after:
Before: plaintext in JSON

```json
{
  "stripe": {
    "command": "npx",
    "args": ["-y", "@stripe/mcp"],
    "env": {
      "STRIPE_SECRET_KEY": "sk_live_51Hx..."
    }
  }
}
```

After: encrypted, injected at runtime

```json
{
  "stripe": {
    "command": "secretless-mcp",
    "args": ["npx", "-y", "@stripe/mcp"],
    "env": {}
  }
}
```

The secretless-mcp wrapper decrypts secrets from your configured backend and injects them as environment variables before starting the MCP server. Non-secret env vars (URLs, region names) stay in the config untouched. MCP servers start normally with no workflow changes.
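The wrapper itself is conceptually tiny. A sketch of the idea, where `decrypt_env_for` is a hypothetical helper standing in for the backend lookup (not the tool's real implementation):

```shell
#!/bin/sh
# Conceptual sketch of secretless-mcp: decrypt the server's stored
# secrets and start the real server command with them injected. The
# JSON config stays secret-free on disk.

decrypt_env_for() {
  # hypothetical backend lookup: one KEY=VALUE line per stored secret
  echo 'STRIPE_SECRET_KEY=sk_live_example'
}

start_mcp() {
  name="$1"; shift
  # Non-secret vars already in the environment pass through untouched;
  # env only layers the decrypted secrets on top before launch.
  # (Intentional word-splitting of the KEY=VALUE lines.)
  env $(decrypt_env_for "$name") "$@"
}
```

From the MCP server's point of view nothing changed: it still finds its key in the environment at startup.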
The AI-Safe Guard
What happens when an AI tool tries to read a secret directly? Secretless detects non-interactive execution (which is how AI tools run commands) and blocks output.
```
$ npx secretless-ai secret get STRIPE_KEY

secretless: Blocked

Secret values cannot be read in non-interactive contexts.
AI tools capture stdout, which would expose the
secret in their context.

To inject secrets into a command:
  npx secretless-ai run -- <command>
```

Direct terminal access (a human typing in a terminal) works normally. The guard specifically detects non-interactive execution: piped commands, subprocess spawning, and the patterns that AI tools use to run shell commands. This means you can still use `secret get` yourself in a terminal, but the AI cannot use it to exfiltrate values.
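The core check can be as simple as asking whether stdout is a terminal. A sketch assuming TTY detection is the heuristic; the real guard may look at additional signals:

```shell
#!/bin/sh
# Sketch of the non-interactive guard: print the value only when stdout
# is a real terminal. When an AI tool (or any script) captures stdout,
# `[ -t 1 ]` is false and the value is withheld.

print_secret() {
  if [ -t 1 ]; then
    printf '%s\n' "$1"               # human at a terminal: allowed
  else
    printf 'secretless: Blocked\n'   # captured output: refuse
    return 1
  fi
}
```

Any `$(...)` capture, pipe, or redirect makes stdout a non-terminal, so exactly the access patterns AI tools use trip the guard.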
Project Setup for Teams
For team projects, Secretless supports manifest files and CI verification.
The .secretless manifest
Define required secrets in a .secretless file at the project root. This tells new team members and CI what secrets the project needs:
```
# .secretless
STRIPE_KEY    required  Stripe API key for payments
DATABASE_URL  required  PostgreSQL connection string
SENTRY_DSN    optional  Error tracking
```

```
# New team member onboarding
$ npx secretless-ai setup
Missing: STRIPE_KEY (required)
Missing: DATABASE_URL (required)
Enter STRIPE_KEY:

# CI: fail if required secrets are missing
$ npx secretless-ai setup --check

# Auto-find and import existing .env files
$ npx secretless-ai import --detect
```

Pre-commit hooks
Install a pre-commit hook that scans staged files for secrets before they enter git history. Catches hardcoded credentials that .gitignore misses.
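Conceptually, the scan is a set of credential-shaped regexes run over every staged file. A two-pattern sketch; the patterns here are illustrative, and the real hook ships many more:

```shell
#!/bin/sh
# Sketch of the pre-commit scan: flag content matching known credential
# shapes -- here a Stripe live key and an AWS access key ID.

looks_like_secret() {
  printf '%s' "$1" | grep -qE 'sk_live_[0-9A-Za-z]+|AKIA[0-9A-Z]{16}'
}

# A real hook would loop over `git diff --cached --name-only` and
# abort the commit (exit 1) on any match.
```

Pattern-based scanning catches the case `.gitignore` cannot: a key pasted directly into source code.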
```
$ npx secretless-ai hook install
Installed pre-commit hook.

$ npx secretless-ai hook status
Pre-commit hook: installed
Patterns: 49 credential patterns active
```

Get Started in 10 Seconds
Zero dependencies. Zero config. Works with Claude Code, Cursor, Copilot, Windsurf, Cline, and Aider.
```
npx secretless-ai init
```