The relationship graph maps every agent’s dependencies: which LLM models it calls, which providers it uses, and which tools it invokes. The graph is auto-populated from proxy telemetry; no manual configuration is required.
Documentation Index
Fetch the complete documentation index at: https://docs.meshai.dev/llms.txt
Use this file to discover all available pages before exploring further.
How It Works
- Every request through the MeshAI proxy records the agent, provider, and model used.
- The API builds a relationship graph from this telemetry data.
- Query individual agent dependencies or the full organization-wide graph.
- The graph data is structured for direct use with visualization libraries like D3.js.
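The rollup described above can be sketched in a few lines. This is an illustrative aggregation, not the proxy's actual implementation, and the telemetry record shape (`agent`, `provider`, `model`) is an assumption:

```javascript
// Sketch: how per-request telemetry records roll up into a relationship graph.
// The record shape here is an assumption, not the proxy's actual schema.
function buildGraph(records) {
  const nodes = new Map(); // id -> { id, type }
  const edges = new Map(); // "source->target" -> { source, target, weight }

  const addNode = (id, type) => nodes.set(id, { id, type });
  const addEdge = (source, target) => {
    const key = `${source}->${target}`;
    const edge = edges.get(key) ?? { source, target, weight: 0 };
    edge.weight += 1; // weight counts requests over the queried period
    edges.set(key, edge);
  };

  for (const { agent, provider, model } of records) {
    addNode(agent, 'agent');
    addNode(model, 'model');
    addNode(provider, 'provider');
    addEdge(agent, model);    // agent -> model: one call
    addEdge(model, provider); // model -> provider: one routed request
  }
  return { nodes: [...nodes.values()], edges: [...edges.values()] };
}
```

Repeated requests from the same agent to the same model collapse into a single edge whose weight is the request count.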
Get Agent Relationships
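A hypothetical request sketch for this endpoint. The path `/v1/agents/{agent_id}/relationships`, the Bearer auth header, and the response fields are assumptions, not confirmed API details:

```javascript
// Hypothetical sketch: fetch one agent's model/provider dependencies.
// Endpoint path, auth header, and response shape are assumptions.
const BASE = 'https://api.meshai.dev';

function agentRelationshipsUrl(agentId) {
  return `${BASE}/v1/agents/${encodeURIComponent(agentId)}/relationships`;
}

async function getAgentRelationships(agentId, apiKey) {
  const res = await fetch(agentRelationshipsUrl(agentId), {
    headers: { Authorization: `Bearer ${apiKey}` },
  });
  if (!res.ok) throw new Error(`HTTP ${res.status}`);
  return res.json(); // e.g. { agent_id, models: [...], providers: [...] }
}
```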
Retrieve the models and providers an individual agent depends on.
Get Full Relationship Graph
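A hypothetical sketch of calling the organization-wide endpoint. The path, the `period` query parameter, and the auth header are assumptions, not confirmed API details:

```javascript
// Hypothetical sketch: fetch the complete organization-wide graph.
// Endpoint path and the `period` query parameter are assumptions.
function graphUrl(base, period) {
  const url = new URL('/v1/relationships/graph', base);
  if (period) url.searchParams.set('period', period); // e.g. '7d'
  return url.toString();
}

async function getFullGraph(apiKey) {
  const res = await fetch(graphUrl('https://api.meshai.dev', '7d'), {
    headers: { Authorization: `Bearer ${apiKey}` },
  });
  if (!res.ok) throw new Error(`HTTP ${res.status}`);
  return res.json(); // { nodes: [...], edges: [...] }
}
```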
Retrieve the complete organization-wide graph with nodes and edges, suitable for visualization.
Graph Structure
The graph uses a node-edge model.
Node Types
| Type | Description |
|---|---|
| agent | A registered agent in your organization |
| provider | An LLM provider (OpenAI, Anthropic, Google, etc.) |
| model | A specific LLM model (gpt-4o, claude-sonnet-4-20250514, etc.) |
Edge Semantics
Edges represent usage relationships. The `weight` field indicates the number of requests between the two nodes over the queried time period.
- agent -> model: The agent called this model N times.
- model -> provider: This model belongs to this provider (N total requests).
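Putting the node types and edge semantics together, a small graph fragment might look like this. The field names are illustrative, not the API's confirmed schema:

```javascript
// Illustrative graph fragment: one agent calling one model on one provider.
// Field names mirror the node/edge model described above; exact schema is assumed.
const graph = {
  nodes: [
    { id: 'support-bot', type: 'agent' },
    { id: 'gpt-4o', type: 'model' },
    { id: 'openai', type: 'provider' },
  ],
  edges: [
    // agent -> model: support-bot called gpt-4o 128 times in the period
    { source: 'support-bot', target: 'gpt-4o', weight: 128 },
    // model -> provider: those 128 requests all routed to OpenAI
    { source: 'gpt-4o', target: 'openai', weight: 128 },
  ],
};

// Total request volume per provider can be read off the model -> provider edges.
const providerLoad = graph.edges
  .filter(e => graph.nodes.find(n => n.id === e.target)?.type === 'provider')
  .reduce((sum, e) => sum + e.weight, 0);
```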
D3.js Visualization
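The `{nodes, edges}` payload maps closely onto the `{nodes, links}` shape that d3-force expects. A minimal adapter sketch, assuming the response field names used above:

```javascript
// Sketch: adapt the graph response for d3-force, which expects a `links`
// array with `source`/`target` fields. Response field names are assumptions.
function toD3(graph) {
  return {
    nodes: graph.nodes.map(n => ({ ...n })),
    links: graph.edges.map(e => ({
      source: e.source,
      target: e.target,
      value: e.weight, // often mapped to link thickness
    })),
  };
}

// With d3 loaded in the page, the result plugs into a force simulation:
//   const { nodes, links } = toD3(graphResponse);
//   d3.forceSimulation(nodes)
//     .force('link', d3.forceLink(links).id(d => d.id))
//     .force('charge', d3.forceManyBody())
//     .force('center', d3.forceCenter(width / 2, height / 2));
```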
The graph response is structured for direct use with D3.js force-directed graphs.
Use Cases
- Impact analysis — Before blocking a provider, see which agents depend on it.
- Cost optimization — Identify agents that use expensive models and could be routed to cheaper alternatives.
- Compliance reporting — Document all model/provider dependencies for EU AI Act Article 12.
- Incident response — Quickly trace which agents are affected by a provider outage.

