General

MeshAI is the Agent Control Plane — the only platform that monitors AND governs AI agents. It provides unified observability, governance, cost intelligence, and EU AI Act compliance for enterprise AI agent deployments, regardless of framework or vendor.
Observability tools stop at tracing, debugging, and evals. MeshAI does observability plus governance: 8 policy types enforced in real-time, HITL approval workflows, kill switch, agent quarantine, prompt injection detection, PII filtering, and EU AI Act compliance scoring. Nobody else combines monitoring and governance in one platform.
No. The MeshAI proxy works with one environment variable change — zero code modifications. Set ANTHROPIC_BASE_URL or OPENAI_BASE_URL to point at the proxy and you’re monitoring instantly. For deeper integration, our Python SDK supports 11 frameworks.
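A minimal sketch of the zero-code path, using the proxy URL shape shown in the integration section below ("msh_KEY" is a placeholder for your MeshAI key):

```python
import os

# Point the provider SDK at the MeshAI proxy before the client is created.
# The Anthropic Python SDK reads ANTHROPIC_BASE_URL from the environment.
os.environ["ANTHROPIC_BASE_URL"] = "https://proxy.meshai.dev/v1/anthropic/k/msh_KEY"

# from anthropic import Anthropic
# client = Anthropic()  # requests now flow through the proxy, no code changes
```

The same pattern applies to OPENAI_BASE_URL for OpenAI-compatible clients.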
OpenAI, Anthropic, CrewAI, LangChain/LangGraph, AutoGen, Google Gemini, AWS Bedrock, LlamaIndex, Agno (ex-Phidata), Pydantic AI, and Microsoft Semantic Kernel. Enterprise platforms like Copilot Studio, Salesforce Einstein, and ServiceNow are covered by the proxy since they use OpenAI/Anthropic under the hood.
Anthropic (Claude), OpenAI (GPT, o1), Google Gemini, AWS Bedrock, Azure OpenAI, NVIDIA Nemotron — all through the transparent proxy with zero code changes.

EU AI Act & Compliance

Yes. The EU AI Act has extraterritorial reach — the same as GDPR. If your non-EU company’s AI system reaches the EU market or its outputs are used by EU residents, the Act applies. Any company outside the EU — whether based in the US, UK, Asia, or elsewhere — using AI agents for EU customer support, hiring, or financial services must comply by August 2, 2026. Fines are based on global turnover, not just EU revenue.
You’re classified as a deployer under the EU AI Act (Article 26). Even though the AI is from a third party, you can’t shift responsibility to the provider. If your service uses AI (e.g., speech-to-text, chatbots, content generation) and EU residents consume the output, you must comply. This includes maintaining audit trails, informing users they’re interacting with AI, and classifying the risk level of your AI systems. MeshAI automates all of this.
The Act defines 6 roles:
  • Provider — develops the AI system and places it on the market (e.g., OpenAI, Anthropic, Google). Must do conformity assessments, technical documentation, CE marking, and post-market monitoring.
  • Deployer — uses an AI system in their business (e.g., a SaaS company using GPT for customer support). Must ensure human oversight, maintain logs, inform users, and classify risk. This is MeshAI’s primary customer.
  • Importer — brings non-EU AI systems into the EU market. Must verify CE marking and conformity documentation.
  • Distributor — makes AI systems available within the EU without modifying them (e.g., cloud marketplaces). Must verify compliance before selling.
  • Authorized Representative — EU-based entity acting on behalf of a non-EU provider.
  • Product Manufacturer — integrates AI into a physical product (e.g., autonomous vehicles). Same obligations as provider for the AI component.
Important: If a deployer modifies the AI system’s intended purpose or makes substantial changes, they become reclassified as a provider under Article 25 — inheriting all provider obligations.
  • Prohibited AI practices: up to €35 million or 7% of global annual turnover
  • High-risk AI obligations (Articles 26, 27): up to €15 million or 3% of global annual turnover
  • Incorrect information to authorities: up to €7.5 million or 1% of global annual turnover
For SMEs and startups, fines are the lower of the percentage and fixed amount (not the higher).
MeshAI provides tooling for 8 articles:
  • Article 6 — Risk classification (4 levels with AI-assisted suggestion)
  • Article 12 — Record-keeping (immutable audit trail, 6-month retention, CSV/JSON export)
  • Article 13 — Transparency (auto-generated transparency cards)
  • Article 14 — Human oversight (HITL approval workflows)
  • Article 26 — Deployer obligations (12 obligations covered)
  • Article 27 — FRIA with all 6 required fields (a-f)
  • Article 50 — Transparency obligations
  • Article 73 — Serious incident reporting (15-day/2-day deadlines)
Article 27 requires deployers of high-risk AI systems to assess impacts on fundamental rights before deployment. MeshAI provides structured FRIA templates with all 6 required fields: intended purpose, usage period/frequency, affected person categories, specific risks, human oversight measures, and risk mitigation measures. FRIAs are versioned and can be submitted for authority notification.
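The six required fields can be pictured as a simple record; the field names below are illustrative, not MeshAI's actual schema:

```python
from dataclasses import dataclass

# Sketch of an Article 27 FRIA record covering the six required fields (a-f)
# listed above. Names and types are assumptions for illustration.
@dataclass
class FRIA:
    intended_purpose: str
    usage_period_frequency: str
    affected_person_categories: list[str]
    specific_risks: list[str]
    human_oversight_measures: str
    risk_mitigation_measures: str
    version: int = 1   # FRIAs are versioned; bump on each revision
```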
MeshAI computes a 0-120 score across 7 components: audit trail active (Art 12), risk classification coverage (Art 6), human oversight for high-risk agents (Art 14), documentation completeness (Art 11), data retention (Art 12), FRIA completion (Art 27), and incident response (Art 73). Target: 100+ for enforcement readiness.
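As a sketch of how such a score could be composed: the component names come from the list above, but the per-component weights here are assumptions, not MeshAI's actual weighting.

```python
# Hypothetical weights summing to the 0-120 scale described above.
COMPONENTS = {
    "audit_trail_active": 20,    # Art 12
    "risk_classification": 20,   # Art 6
    "human_oversight": 20,       # Art 14
    "documentation": 15,         # Art 11
    "data_retention": 15,        # Art 12
    "fria_completion": 15,       # Art 27
    "incident_response": 15,     # Art 73
}

def compliance_score(status: dict[str, float]) -> int:
    # status maps component name -> fraction complete (0.0 to 1.0)
    return round(sum(w * status.get(name, 0.0) for name, w in COMPONENTS.items()))
```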
August 2, 2026 for high-risk AI system obligations (Articles 26, 27). Prohibited-practice provisions have been in effect since February 2, 2025, and general-purpose AI model obligations took effect August 2, 2025.

Security & Privacy

No. MeshAI never stores prompt text or completion content. The proxy forwards request and response bodies to LLM providers in real-time but only persists operational metadata: token counts, cost estimates, latency, model name, and error type. No conversation history, no message content.
All data is encrypted at rest (AES-256 via Google Cloud) and in transit (TLS 1.2+ on all connections). API keys are hashed with bcrypt — never stored in plaintext. Provider API keys can be stored in GCP Secret Manager with only reference pointers in the database.
Yes. Every database query includes tenant-scoped filtering. Cross-tenant data access is architecturally impossible through the API. Each customer’s agents, policies, audit events, and billing are completely isolated.
MeshAI scans all request bodies at the proxy layer for 15+ known injection patterns including instruction override, system prompt extraction, role hijacking, delimiter attacks, and encoding evasion. Patterns are normalized with Unicode NFKC to prevent homoglyph bypasses. Detected injections return 403 before reaching the LLM provider.
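A minimal sketch of that screening flow, assuming a small pattern list for illustration (the real proxy covers 15+ pattern families):

```python
import re
import unicodedata

# Two toy pattern families; MeshAI's production list is far larger.
PATTERNS = [
    re.compile(r"ignore (all )?previous instructions", re.I),  # instruction override
    re.compile(r"reveal (your )?system prompt", re.I),         # prompt extraction
]

def screen(body: str) -> int:
    # NFKC normalization collapses fullwidth/homoglyph variants before
    # matching, so Unicode tricks can't slip past the patterns.
    normalized = unicodedata.normalize("NFKC", body)
    if any(p.search(normalized) for p in PATTERNS):
        return 403   # rejected before the request reaches the LLM provider
    return 200
```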
MeshAI can detect PII in LLM responses (emails, phone numbers, SSNs, credit cards, IPs, passports, IBANs) and apply one of three actions per your policy:
  • Block — return 403, don’t forward the response
  • Redact — replace PII with [EMAIL_REDACTED], [SSN_REDACTED] etc.
  • Allow — log the detection but return the response unchanged
You choose the mode based on your use case. Agents that need PII (e.g., customer support) can use “allow” mode.
The proxy is designed to fail gracefully. If MeshAI is unavailable, configure your agents to fall back to direct provider URLs. The proxy itself runs on Cloud Run with auto-scaling (2-50 instances) and adds less than 5ms latency. Governance policies fail open on Redis errors — your agents keep working.
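The fail-open behavior mentioned above amounts to a pattern like the following sketch, where `check` stands in for any policy lookup that may raise on a backend failure:

```python
from collections.abc import Callable

def evaluate_policy(check: Callable[[], bool]) -> bool:
    """Fail-open policy evaluation: backend errors allow the request.

    `check` returns True to allow, False to deny, and raises ConnectionError
    when the policy backend (e.g. Redis) is unreachable.
    """
    try:
        return check()
    except ConnectionError:
        return True   # fail open: a governance outage never takes agents down
```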

Pricing & Plans

  • Starter: $299/month — up to 25 agents
  • Professional: $799/month — up to 100 agents
  • Enterprise: $1,999/month — up to 1,000 agents
  • Enterprise Plus: Custom pricing — unlimited agents, dedicated infrastructure
All plans include the proxy, dashboard, SDK, and governance features.
We’re offering the Professional tier free for 3 months to early beta users. Contact us or join the waitlist.
You won’t be able to register new agents until you upgrade or remove existing ones. Existing agents continue to work — we never block running agents due to billing limits.
Enterprise Plus plans support annual contracts with invoicing. Contact sales for details.

Technical

Less than 5ms. The proxy evaluates governance policies locally from Redis cache and forwards requests to the upstream provider. Telemetry is fire-and-forget (async, non-blocking).
When you block an agent, a flag is set in both PostgreSQL and Redis. The proxy checks Redis on every request — blocked agents get an instant 403 response. No waiting for cache expiry. Unblocking is equally instant.
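A minimal sketch of that per-request check; a dict stands in for Redis here, and the key shape "agent:{id}:blocked" is hypothetical (the durable copy of the flag lives in PostgreSQL):

```python
cache: dict[str, bool] = {}

def set_blocked(agent_id: str, blocked: bool) -> None:
    # In MeshAI this flag is written to both PostgreSQL and Redis;
    # only the fast-path cache is modeled here.
    cache[f"agent:{agent_id}:blocked"] = blocked

def check_request(agent_id: str) -> int:
    # Consulted on every request: blocked agents get an instant 403,
    # with no wait for cache expiry. Unblocking is the same write in reverse.
    return 403 if cache.get(f"agent:{agent_id}:blocked") else 200
```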
Kill switch — immediate block for a known agent that’s behaving badly. Admin action. Quarantine — isolation for unknown or suspicious agents. Can be auto-triggered when the proxy detects an unregistered agent. Both result in a 403 at the proxy, but quarantine implies investigation is needed before release.
Yes. MeshAI focuses on AI agent governance — it complements Datadog, Grafana, PagerDuty, etc. We integrate with PagerDuty and Slack for alerting, and the audit trail can be exported as CSV/JSON for ingestion into any SIEM or compliance tool.
The Python SDK (meshai-sdk) is MIT licensed and available on PyPI and GitHub. The platform (API, proxy, dashboard) is proprietary.
Currently the SDK is Python-only. However, the proxy requires no SDK — any language that can set an environment variable works. JavaScript, Go, Rust, Java agents all work through the proxy with zero code changes.
Yes — two integration paths.
Path 1: Proxy (tools with custom base URL support)
  • Claude Code: ANTHROPIC_BASE_URL=https://proxy.meshai.dev/v1/anthropic/k/msh_KEY
  • Cursor: Settings → Models → Custom Base URL
  • Cline / Roo Code: Settings → API Base URL
  • Codex: OPENAI_BASE_URL=https://proxy.meshai.dev/v1/openai/k/msh_KEY
Path 2: MCP Server (tools without proxy support)
No install needed — configure npx -y @meshailabs/mcp-server in your tool’s MCP settings. Supports 19+ tools: Claude Desktop, VS Code Copilot, JetBrains IDEs, Windsurf, Zed, Warp, AWS Kiro, Continue.dev, Amazon Q Developer, Gemini CLI, Tabnine, Goose, and more.
The MCP server registers the tool as a governed agent, tracks usage, checks policies, and reports compliance — all without proxying traffic.
See the MCP Server guide for setup instructions per tool.