I'm on a Security Team Assessing AI Risk
You need to understand what AI agents are running across your organization, assess their security posture, and produce documentation for audits and compliance reviews.
Time to complete: approximately 10 minutes per machine. Fleet aggregation adds 5 minutes.
Step 1: Build an AI Asset Inventory
Export a CSV of every AI agent and MCP server detected on the machine. This format imports directly into ServiceNow, Jira Assets, or any CMDB that accepts CSV.
```
$ opena2a detect --export-csv assets.csv

Shadow AI Discovery
===================
Scanning processes, configs, and network...

Found 5 AI agents, 8 MCP servers across 3 users.

Exported to: assets.csv (13 rows)
CSV columns: type, name, version, pid, user, mcp_count, governance, credential_protection, last_seen
```
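Because the export is plain CSV, standard Unix tools can slice it before it ever reaches a CMDB. A minimal sketch, assuming the column order listed above (field 1 = `type`, field 2 = `name`, field 7 = `governance`) and that ungoverned rows store the literal value `none`:

```shell
# List agent names that have no governance file.
# Assumes: field 1 = type, field 2 = name, field 7 = governance,
# and that an ungoverned row stores "none" in that column.
awk -F',' 'NR > 1 && $1 == "agent" && $7 == "none" { print $2 }' assets.csv
```

Pipe the result into `wc -l` for a quick count of governance gaps per machine.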
Step 2: Generate an Executive Report
Create an HTML dashboard suitable for sharing with leadership. The report includes risk scores, governance gaps, and remediation priorities -- no terminal required to view it.
```
$ opena2a detect --report

Shadow AI Discovery Report
==========================
Generated: shadow-ai-report.html

Summary:
  Total AI agents: 5
    Governed: 2 (40%)
    Credential protected: 1 (20%)
  MCP servers: 8
    Signed configs: 3 (37%)

Risk level: HIGH
  - 3 agents with no governance file
  - 4 agents with exposed credentials
  - 5 MCP servers with unsigned configs

Open shadow-ai-report.html in a browser to view the full dashboard.
```
Step 3: Check Community Trust Data
Query the OpenA2A Registry for trust scores and known vulnerabilities associated with each detected agent and MCP server.
```
$ opena2a detect --registry

Registry Lookup
===============
Checking 13 assets against OpenA2A Trust Registry...

claude-code     Trust: 92/100   Verified publisher
cursor          Trust: 87/100   Verified publisher
copilot-agent   Trust: 89/100   Verified publisher
filesystem      Trust: 78/100   Community reviewed
postgres-mcp    Trust: 64/100   1 known advisory (CVE-2026-1234)
web-search      Trust: --       Not registered

3 assets have advisories. See report for details.
```
Step 4: Run a Full Security Review
The 6-phase review produces a structured assessment covering credentials, governance, supply chain, runtime behavior, MCP configuration, and trust posture.
```
$ opena2a review

Security Review (6 phases)
==========================
Phase 1: Credential Exposure   12 checks   PASS 10   FAIL 2
Phase 2: AI Governance          8 checks   PASS 5    FAIL 3
Phase 3: Supply Chain Trust     6 checks   PASS 4    FAIL 2
Phase 4: Runtime Monitoring     5 checks   PASS 3    WARN 2
Phase 5: MCP Configuration      9 checks   PASS 7    FAIL 2
Phase 6: Trust Posture          4 checks   PASS 3    WARN 1

Overall: 44 checks | 32 PASS | 9 FAIL | 3 WARN
Trust Score: 62/100 (recoverable to 89 by fixing 9 failures)

Report: review-report.json
```
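Since review-report.json is machine-readable, the failure count can be pulled out for tracking over time. The report's exact schema isn't documented here; the sketch below assumes each check result is serialized on its own line with a `"status"` field:

```shell
# Count failed checks in the review report.
# Assumes one check result per line with a "status" field;
# adjust the pattern to the report's actual schema.
grep -c '"status": "FAIL"' review-report.json
```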
Step 5: Integrate with Your Audit Workflow
Attach the generated artifacts to your existing audit and compliance processes.
- ServiceNow: Import assets.csv into your CMDB as AI asset records.
- Jira / tickets: Use review-report.json to create remediation tickets per finding.
- Audit evidence: Attach shadow-ai-report.html as evidence of AI governance review.
- Compliance frameworks: Map findings to NIST AI RMF, ISO 42001, or SOC 2 controls.
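For the ticketing workflow above, one ticket per finding usually starts with a list of failed check IDs. A minimal sketch, assuming each finding line in review-report.json pairs a hypothetical `"id"` field with its `"status"`:

```shell
# Print the ID of every failed check, one per line, ready to feed
# into a ticket-creation script. The "id"/"status" field names are
# assumptions; match them to the real review-report.json schema.
grep '"status": "FAIL"' review-report.json \
  | sed 's/.*"id": "\([^"]*\)".*/\1/'
```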
Aggregating Across a Fleet
Run detection across multiple machines and merge the results into a single inventory.
```shell
# On each machine:
opena2a detect --export-csv assets-$(hostname).csv --ci

# Merge all CSVs (keep one header row):
head -1 assets-machine1.csv > fleet-inventory.csv
tail -n +2 -q assets-*.csv >> fleet-inventory.csv
```
The merged CSV provides a single view of all AI agents and MCP servers across your organization, ready for import into your asset management system.
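With everything in one file, awk can give a first-pass governance summary before the CSV goes into a CMDB. A sketch, again assuming field 7 is the `governance` column from the Step 1 export:

```shell
# Tally rows by governance status across the whole fleet
# (field 7 = governance, per the Step 1 column order).
awk -F',' 'NR > 1 { n[$7]++ } END { for (g in n) print g, n[g] }' \
  fleet-inventory.csv | sort
```

The `sort` is only there to make the tally's order deterministic; awk iterates `for (g in n)` in an unspecified order.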
Next Steps
- Add automated security gates to CI/CD pipelines -- enforce policy on every deployment.
- Share the developer quick-start with your engineering team -- reduce governance gaps at the source.
- Full detect command reference -- all flags, output formats, and filtering options.