The EU AI Act (Regulation (EU) 2024/1689) entered into force on August 1, 2024, and most of its provisions apply from August 2, 2026. MeshAI maps its features to specific articles so you can demonstrate compliance.

Coverage Summary

| Article | Topic | MeshAI Feature | Coverage |
|---------|-------|----------------|----------|
| Art. 6  | Classification of high-risk AI | Risk Classification | Full |
| Art. 12 | Record-keeping | Audit Trail | Full |
| Art. 13 | Transparency | Transparency Cards | Full |
| Art. 14 | Human oversight | HITL Approvals | Full |
| Art. 26 | Deployer obligations | Agent Registry + Governance | Full |
| Art. 27 | Fundamental rights impact assessment | FRIA | Full |
| Art. 50 | Transparency for certain AI systems | Transparency Cards | Full |
| Art. 73 | Serious incident reporting | Incident Reporting | Full |

Article 6 — Classification of High-Risk AI Systems

Requirement: Organizations must classify AI systems by risk level, with high-risk determinations made against the use cases listed in Annex III.

MeshAI coverage: The Risk Classification feature lets you assign a risk level (minimal, limited, high, unacceptable) to each agent. AI-assisted suggestions analyze agent metadata against the Annex III categories.
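The decision flow can be sketched as follows. This is an illustrative model only: the `ANNEX_III_CATEGORIES` and `PROHIBITED_PRACTICES` sets and the `classify_agent` helper are assumptions for this sketch, not MeshAI's actual classification engine.

```python
# Illustrative sketch of Art. 6-style risk classification.
# Category sets and rules below are assumptions, not MeshAI's implementation.
ANNEX_III_CATEGORIES = {
    "biometrics", "critical_infrastructure", "education", "employment",
    "essential_services", "law_enforcement", "migration", "justice",
}
PROHIBITED_PRACTICES = {"social_scoring", "subliminal_manipulation"}  # Art. 5 examples

def classify_agent(use_case: str, interacts_with_humans: bool = False) -> str:
    """Return one of: unacceptable, high, limited, minimal."""
    if use_case in PROHIBITED_PRACTICES:
        return "unacceptable"
    if use_case in ANNEX_III_CATEGORIES:
        return "high"
    if interacts_with_humans:
        return "limited"  # triggers Art. 50 transparency disclosures
    return "minimal"

print(classify_agent("employment"))        # high
print(classify_agent("chatbot", True))     # limited
```

Note that the "limited" branch is what later feeds the Art. 50 disclosure flagging.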

Article 12 — Record-Keeping

Requirement: High-risk AI systems must have automatic logging capabilities that record events throughout the system's lifecycle.

MeshAI coverage: The Audit Trail captures every governance action as an immutable event — agent registration, policy changes, anomaly detection, approvals, and incidents. Events are timestamped, attributed to an actor, and exportable in CSV/JSON.
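One common way to make such a log tamper-evident is hash-chaining. The sketch below illustrates the idea; the event fields and chaining scheme are assumptions for illustration, and MeshAI's actual Audit Trail storage format may differ.

```python
import hashlib
import json
from datetime import datetime, timezone

# Sketch of a tamper-evident audit trail in the spirit of Art. 12.
# Event schema and hash-chaining are illustrative assumptions.
class AuditTrail:
    def __init__(self):
        self.events = []

    def append(self, actor: str, action: str, details: dict) -> dict:
        prev_hash = self.events[-1]["hash"] if self.events else "0" * 64
        event = {
            "timestamp": datetime.now(timezone.utc).isoformat(),
            "actor": actor,
            "action": action,
            "details": details,
            "prev_hash": prev_hash,
        }
        payload = json.dumps(event, sort_keys=True).encode()
        event["hash"] = hashlib.sha256(payload).hexdigest()
        self.events.append(event)
        return event

    def verify(self) -> bool:
        """Recompute the chain; editing any past event breaks verification."""
        prev = "0" * 64
        for e in self.events:
            if e["prev_hash"] != prev:
                return False
            body = {k: v for k, v in e.items() if k != "hash"}
            digest = hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest()
            if digest != e["hash"]:
                return False
            prev = e["hash"]
        return True

trail = AuditTrail()
trail.append("alice@example.com", "agent.registered", {"agent_id": "a-123"})
trail.append("bob@example.com", "policy.changed", {"policy": "require_approval"})
print(trail.verify())  # True
```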

Article 13 — Transparency and Information to Deployers

Requirement: High-risk AI systems must be designed to be sufficiently transparent to enable deployers to interpret outputs and use them appropriately.

MeshAI coverage: Transparency Cards are auto-generated for each agent, documenting purpose, capabilities, limitations, model provider, and risk classification. Access via GET /agents/{id}/transparency-card.
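A card payload might look like the sketch below. The field names mirror the items listed above, but the exact JSON schema returned by the endpoint is an assumption here, not the documented response shape.

```python
from dataclasses import dataclass, asdict
import json

# Sketch of a Transparency Card payload (Art. 13).
# The schema is an illustrative assumption, not MeshAI's actual response.
@dataclass
class TransparencyCard:
    agent_id: str
    purpose: str
    capabilities: list
    limitations: list
    model_provider: str
    risk_classification: str

card = TransparencyCard(
    agent_id="a-123",
    purpose="Screens inbound support tickets",
    capabilities=["classification", "routing"],
    limitations=["English only", "no PII redaction"],
    model_provider="example-llm-vendor",
    risk_classification="limited",
)
print(json.dumps(asdict(card), indent=2))
```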

Article 14 — Human Oversight

Requirement: High-risk AI systems must be designed to allow effective human oversight during use.

MeshAI coverage: HITL Approvals enforce human review before agents can execute certain actions. The require_approval and require_human_review policy types ensure humans remain in the loop for high-risk operations.
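The gating behavior can be sketched as a small state machine: an action under a human-review policy starts out blocked and only becomes executable after explicit approval. The `ApprovalGate` class is an illustrative assumption, not MeshAI's policy engine.

```python
# Sketch of a require_approval / require_human_review gate (Art. 14).
# The evaluation logic is an illustrative assumption.
PENDING, APPROVED, REJECTED = "pending", "approved", "rejected"

class ApprovalGate:
    def __init__(self):
        self.requests = {}  # action_id -> status

    def request_action(self, action_id: str, policy: str) -> str:
        if policy in ("require_approval", "require_human_review"):
            self.requests[action_id] = PENDING
            return PENDING
        return APPROVED  # no human gate for this policy type

    def review(self, action_id: str, approve: bool) -> str:
        self.requests[action_id] = APPROVED if approve else REJECTED
        return self.requests[action_id]

    def can_execute(self, action_id: str) -> bool:
        return self.requests.get(action_id, APPROVED) == APPROVED

gate = ApprovalGate()
gate.request_action("wire-transfer-42", "require_approval")
print(gate.can_execute("wire-transfer-42"))  # False: blocked until reviewed
gate.review("wire-transfer-42", approve=True)
print(gate.can_execute("wire-transfer-42"))  # True
```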

Article 26 — Obligations of Deployers

Requirement: Deployers must implement appropriate technical and organizational measures, monitor AI system operation, and keep logs.

MeshAI coverage: The Agent Registry provides a complete inventory of all deployed AI agents. Governance policies enforce organizational rules. Real-time monitoring detects anomalies. All actions are logged in the audit trail.
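At its core, the deployer-side inventory is a mapping from agents to their metadata and attached policies. The sketch below assumes hypothetical fields and methods to show the shape of that inventory; it is not MeshAI's Registry API.

```python
# Sketch of a deployer-side agent inventory (Art. 26).
# Fields and methods are illustrative assumptions.
class AgentRegistry:
    def __init__(self):
        self._agents = {}

    def register(self, agent_id: str, owner: str, risk: str):
        self._agents[agent_id] = {"owner": owner, "risk": risk, "policies": []}

    def attach_policy(self, agent_id: str, policy: str):
        self._agents[agent_id]["policies"].append(policy)

    def high_risk_agents(self) -> list:
        """High-risk agents are the ones Art. 26 obligations center on."""
        return [a for a, m in self._agents.items() if m["risk"] == "high"]

reg = AgentRegistry()
reg.register("a-123", "ml-platform-team", "high")
reg.attach_policy("a-123", "require_approval")
print(reg.high_risk_agents())  # ['a-123']
```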

Article 27 — Fundamental Rights Impact Assessment

Requirement: Deployers of high-risk AI must conduct a fundamental rights impact assessment (FRIA) before deployment.

MeshAI coverage: The FRIA feature provides structured templates covering all six required assessment areas (a–f). FRIAs are stored, versioned, and included in the audit trail.
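A template-driven FRIA lends itself to a simple completeness check before submission. The section keys a–f follow the article's structure; the validation rule itself is an illustrative assumption.

```python
# Sketch of a FRIA completeness check (Art. 27).
# The rule "every section a-f must be non-empty" is an assumption.
REQUIRED_SECTIONS = ["a", "b", "c", "d", "e", "f"]

def fria_is_complete(fria: dict) -> bool:
    """A FRIA is submittable only when every section a-f is filled in."""
    return all(fria.get(s, "").strip() for s in REQUIRED_SECTIONS)

draft = {"a": "Deployment context...", "b": "Period of use...", "c": "Affected persons..."}
print(fria_is_complete(draft))  # False: sections d-f are missing
draft.update({"d": "Risks of harm...", "e": "Oversight measures...", "f": "Mitigations..."})
print(fria_is_complete(draft))  # True
```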

Article 50 — Transparency Obligations for Certain AI Systems

Requirement: Providers of AI systems that interact with natural persons must ensure users are informed they are interacting with AI.

MeshAI coverage: Transparency Cards document agent purpose and interaction patterns. The limited risk classification automatically flags agents that require transparency disclosures.

Article 73 — Reporting of Serious Incidents

Requirement: Providers and deployers must report serious incidents to the relevant market surveillance authority within strict deadlines.

MeshAI coverage: The Incident Reporting feature provides structured incident creation, automatic deadline tracking (15-day and 2-day timelines), and authority notification support.
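Deadline tracking here is simple date arithmetic over the two windows mentioned above. The `report_deadline` helper and the notion of an "expedited" incident are assumptions for this sketch, not MeshAI's incident model.

```python
from datetime import date, timedelta

# Sketch of incident deadline tracking (Art. 73), using the 15-day
# standard and 2-day expedited windows mentioned above.
# The "expedited" flag is an illustrative assumption.
def report_deadline(detected_on: date, expedited: bool) -> date:
    """Expedited incidents fall under the 2-day reporting window."""
    days = 2 if expedited else 15
    return detected_on + timedelta(days=days)

print(report_deadline(date(2026, 9, 1), expedited=False))  # 2026-09-16
print(report_deadline(date(2026, 9, 1), expedited=True))   # 2026-09-03
```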

Readiness Score

MeshAI calculates a readiness score (0–120) across 7 components that map to these articles. Use it to track your compliance progress and identify gaps before the August 2026 deadline.
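A roll-up like this could be computed as below. The source states only that the score spans 0–120 across 7 components; the component names, per-component 0–100 scale, and equal weighting are all assumptions for this sketch.

```python
# Sketch of a readiness score roll-up (0-120 across 7 components).
# Component names, 0-100 sub-scores, and equal weights are assumptions.
COMPONENTS = [
    "risk_classification", "audit_trail", "transparency_cards",
    "hitl_approvals", "agent_registry", "fria", "incident_reporting",
]

def readiness_score(component_scores: dict) -> float:
    """Average the 7 component scores (each 0-100) and scale to 0-120."""
    avg = sum(component_scores.get(c, 0) for c in COMPONENTS) / len(COMPONENTS)
    return round(avg * 120 / 100, 1)

print(readiness_score({c: 100 for c in COMPONENTS}))  # 120.0
print(readiness_score({c: 50 for c in COMPONENTS}))   # 60.0
```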