Agentic LLM Vulnerability Scanner / AI red teaming kit 🧪
A plugin-based gateway that orchestrates other MCPs and lets developers build enterprise-grade agents on top of it.
Build secure and compliant AI agents and MCP servers. YC W23
See what your AI agents can access. Scan MCP configs for exposed secrets, shadow APIs, and AI models. Generate AI-BOMs for compliance.
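Scanning an MCP config for exposed secrets can be as simple as walking the `env` blocks of each configured server and matching values against known credential patterns. The sketch below is a minimal illustration, not the tool's actual rule set; the patterns, the `mcpServers`/`env` layout, and the finding format are assumptions.

```python
import json
import re

# Hypothetical detection rules for illustration only.
SECRET_PATTERNS = {
    "aws_access_key": re.compile(r"AKIA[0-9A-Z]{16}"),
    "generic_api_key": re.compile(
        r"(?i)(api[_-]?key|token)\"?\s*[:=]\s*\"?[A-Za-z0-9_\-]{20,}"
    ),
}

def scan_mcp_config(config_text: str) -> list[dict]:
    """Return findings for secret-like values in an MCP config JSON."""
    findings = []
    config = json.loads(config_text)
    # MCP configs commonly pass credentials to servers via "env" blocks.
    for name, server in config.get("mcpServers", {}).items():
        for var, value in server.get("env", {}).items():
            for rule, pattern in SECRET_PATTERNS.items():
                if pattern.search(f"{var}={value}"):
                    findings.append({"server": name, "env_var": var, "rule": rule})
    return findings

sample = json.dumps({
    "mcpServers": {
        "github": {"command": "npx", "env": {"API_TOKEN": "AKIAABCDEFGHIJKLMNOP"}}
    }
})
print(scan_mcp_config(sample))
```

A production scanner would add entropy checks and allow-lists to cut false positives; plain pattern matching flags anything that merely looks credential-shaped.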
Agent orchestration & security template featuring MCP tool building, agent2agent workflows, mechanistic interpretability on sleeper agents, and agent integration via DLL injection and CLI wrappers.
Scan A2A agents for potential threats and security issues
Local open-source dev tool to debug, secure, and evaluate LLM agents. Provides static analysis, dynamic security checks, and runtime monitoring - integrates with Cursor and Claude Code.
AI-first security scanner with 74+ analyzers, 180+ AI agent security rules, intelligent false positive reduction. Supports all languages. CVE detection for React2Shell, mcp-remote RCE.
Real-time semantic security for AI coding agents and MCP tools.
A zero-trust encrypted transport layer for AI agents and tools, with AES-GCM encryption, HMAC signing, and identity-aware JSON-RPC messaging.
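The identity-aware signing half of such a transport can be sketched with a sender-tagged HMAC over the JSON-RPC payload. This is a minimal illustration under assumed envelope fields (`sender`, `payload`, `sig`) and a pre-shared key; a real transport would also encrypt the payload (e.g. with AES-GCM) and manage keys per identity.

```python
import hashlib
import hmac
import json

def sign_message(sender_id: str, payload: dict, key: bytes) -> dict:
    """Wrap a JSON-RPC payload in an envelope signed with HMAC-SHA256.

    The payload is serialized canonically (sorted keys, no whitespace)
    so signer and verifier hash identical bytes.
    """
    body = json.dumps(payload, sort_keys=True, separators=(",", ":"))
    tag = hmac.new(key, f"{sender_id}:{body}".encode(), hashlib.sha256).hexdigest()
    return {"sender": sender_id, "payload": payload, "sig": tag}

def verify_message(envelope: dict, key: bytes) -> bool:
    """Recompute the tag and compare in constant time."""
    body = json.dumps(envelope["payload"], sort_keys=True, separators=(",", ":"))
    expected = hmac.new(
        key, f"{envelope['sender']}:{body}".encode(), hashlib.sha256
    ).hexdigest()
    return hmac.compare_digest(expected, envelope["sig"])

key = b"shared-secret-for-demo-only"  # placeholder; never hardcode real keys
msg = sign_message("agent-a", {"jsonrpc": "2.0", "method": "tools/list", "id": 1}, key)
print(verify_message(msg, key))           # True
msg["payload"]["method"] = "tools/call"   # tampering breaks the signature
print(verify_message(msg, key))           # False
```

Binding the sender identity into the signed bytes prevents a valid envelope from being replayed under a different agent's name, which is the "identity-aware" property the description refers to.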
This framework enables secure, decentralized communication between AI agents using blockchain technology and smart contracts. It ensures the integrity, confidentiality, and verifiability of interactions through cryptographic identities, end-to-end encryption, and immutable audit trails.
Security infrastructure for AI agents. See what they do. Stop what they shouldn't.