The EU AI Act requires organizations to classify AI systems by risk level. MeshAI supports four risk levels and provides AI-assisted suggestions to help you classify each agent.
## Risk Levels

| Level | Description | Obligations |
|---|---|---|
| Minimal | Low-risk agents (spam filters, recommendations) | Basic monitoring only |
| Limited | Agents interacting with humans (chatbots, content generators) | Transparency obligations — users must know they’re interacting with AI |
| High | Agents in critical domains (HR, finance, healthcare) | Full compliance: audit trail, HITL, FRIA, human oversight |
| Unacceptable | Prohibited use cases (social scoring, real-time biometric surveillance) | Must not be deployed; the Act permits only narrow, explicitly authorized exemptions |
## Classify an Agent
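To assign a risk level directly, send the classification to the agent's classification endpoint. A minimal sketch — the base URL, path, field names, and request shape below are assumptions for illustration, not the documented MeshAI API:

```python
# Hypothetical sketch: build a request that assigns a risk level to an agent.
# The base URL, endpoint path, and payload fields are ASSUMPTIONS, not
# confirmed MeshAI API — check the API reference for your version.
import json

MESHAI_API = "https://api.meshai.dev/v1"  # assumed base URL

def build_classify_request(agent_id: str, risk_level: str, justification: str) -> dict:
    """Build the (assumed) PUT /agents/{id}/classification request."""
    levels = {"minimal", "limited", "high", "unacceptable"}
    if risk_level not in levels:
        raise ValueError(f"risk_level must be one of {sorted(levels)}")
    return {
        "method": "PUT",
        "url": f"{MESHAI_API}/agents/{agent_id}/classification",
        "json": {"risk_level": risk_level, "justification": justification},
    }

req = build_classify_request(
    "agent_123",
    "high",
    "Screens job applications; employment is a high-risk domain.",
)
print(req["method"], req["url"])
print(json.dumps(req["json"], indent=2))
```

With a real HTTP client you would send this, e.g. `requests.put(req["url"], json=req["json"], headers=auth_headers)`.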
### AI-Assisted Suggestion

Not sure which level to assign? Use the suggestion endpoint. MeshAI analyzes the agent’s metadata (name, description, framework, use case) and suggests a risk level.

### Get Current Classification
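The two read paths — requesting a suggestion and fetching the current classification — might look like the following sketch. The endpoint paths and response handling are assumptions, not the documented MeshAI API:

```python
# Hypothetical sketch: build requests to (a) ask MeshAI to suggest a risk
# level from the agent's registered metadata, and (b) read the current
# classification. Paths and shapes are ASSUMPTIONS — consult the API reference.

MESHAI_API = "https://api.meshai.dev/v1"  # assumed base URL

def suggest_request(agent_id: str) -> dict:
    """(Assumed) POST /agents/{id}/classification/suggest — MeshAI analyzes
    the agent's stored metadata, so no request body is needed."""
    return {
        "method": "POST",
        "url": f"{MESHAI_API}/agents/{agent_id}/classification/suggest",
    }

def get_classification_request(agent_id: str) -> dict:
    """(Assumed) GET /agents/{id}/classification."""
    return {
        "method": "GET",
        "url": f"{MESHAI_API}/agents/{agent_id}/classification",
    }

for req in (suggest_request("agent_123"), get_classification_request("agent_123")):
    print(req["method"], req["url"])
```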
## What Changes by Risk Level

| Capability | Minimal | Limited | High | Unacceptable |
|---|---|---|---|---|
| Basic monitoring | Yes | Yes | Yes | Yes |
| Transparency card | Optional | Required | Required | Required |
| Audit trail | Basic | Full | Full | Full |
| HITL approval | Optional | Optional | Required | Required |
| FRIA | No | No | Required | Required |
| Incident reporting | No | No | Required | Required |
| Human oversight | No | Optional | Required | Required |
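The capability matrix above can be encoded as a simple lookup table. A minimal sketch — the names here are illustrative and not part of any MeshAI SDK:

```python
# Minimal sketch: the capability matrix above as a lookup table.
# Keys and values mirror the table row for row; the function and
# variable names are illustrative, not MeshAI SDK identifiers.

LEVELS = ("minimal", "limited", "high", "unacceptable")

OBLIGATIONS = {
    #                     minimal      limited     high        unacceptable
    "basic_monitoring":   ("yes",      "yes",      "yes",      "yes"),
    "transparency_card":  ("optional", "required", "required", "required"),
    "audit_trail":        ("basic",    "full",     "full",     "full"),
    "hitl_approval":      ("optional", "optional", "required", "required"),
    "fria":               ("no",       "no",       "required", "required"),
    "incident_reporting": ("no",       "no",       "required", "required"),
    "human_oversight":    ("no",       "optional", "required", "required"),
}

def obligation(capability: str, risk_level: str) -> str:
    """Look up what a capability requires at a given risk level."""
    return OBLIGATIONS[capability][LEVELS.index(risk_level)]

print(obligation("fria", "high"))           # "required"
print(obligation("audit_trail", "minimal")) # "basic"
```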

