Documentation Index
Fetch the complete documentation index at: https://docs.meshai.dev/llms.txt
Use this file to discover all available pages before exploring further.
Installation
```shell
pip install "meshai-sdk[crewai]"
```
Usage
```python
from meshai import MeshAI
from meshai.integrations.crewai import track_crewai

client = MeshAI(api_key="msh_...", agent_name="my-crew")
client.register(framework="crewai")

# Enable global tracking; applies to ALL crews
track_crewai(client)

# Run your crew as normal
from crewai import Agent, Task, Crew

researcher = Agent(role="Researcher", llm="gpt-4o", ...)
writer = Agent(role="Writer", llm="claude-sonnet-4-6", ...)
crew = Crew(agents=[researcher, writer], tasks=[...])
result = crew.kickoff()
# Each agent's LLM calls are tracked with that agent's specific model
```
How It Works
MeshAI registers a global `after_llm_call` hook with CrewAI. After every LLM interaction, the hook:
- Extracts the model name from the LLM context
- Extracts token counts from the response
- Infers the provider from the model name
- Sends the usage event to MeshAI (buffered, batched)
This means each agent in your crew is tracked with its own model — no hardcoding needed.
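The steps above can be sketched in plain Python. This is a simplified illustration of what such a hook does, not MeshAI's actual internals; the function and field names (`infer_provider`, `context`, `buffer`) are assumptions for the example.

```python
# Illustrative sketch of an after_llm_call-style hook (names are hypothetical).

def infer_provider(model: str) -> str:
    """Guess the provider from the model name prefix."""
    if model.startswith(("gpt-", "o1", "o3")):
        return "openai"
    if model.startswith("claude-"):
        return "anthropic"
    return "unknown"

def after_llm_call(context: dict, response: dict, buffer: list) -> None:
    """Build a usage event from one LLM call and buffer it for batched upload."""
    event = {
        "model": context["model"],                          # from the LLM context
        "input_tokens": response["usage"]["input_tokens"],  # from the response
        "output_tokens": response["usage"]["output_tokens"],
        "provider": infer_provider(context["model"]),       # inferred from the name
    }
    buffer.append(event)  # buffered here; a background worker would flush batches

# Exercise the hook with fake data:
buf: list = []
after_llm_call(
    {"model": "claude-sonnet-4-6"},
    {"usage": {"input_tokens": 1200, "output_tokens": 300}},
    buf,
)
```

Buffering events locally and flushing them in batches keeps the hook cheap, so tracking adds no per-call network latency to the crew run.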
Multi-Model Crews
CrewAI crews often use different models per agent. MeshAI tracks each one separately:
Dashboard shows:

```
researcher (gpt-4o)       → 15,000 tokens, $0.45
writer (claude-sonnet)    →  8,000 tokens, $0.24
reviewer (gpt-4o-mini)    →  3,000 tokens, $0.01
```
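Per-agent totals like the ones above can be aggregated from buffered usage events. The sketch below is illustrative only: the event shape and the blended per-million-token prices are assumptions chosen to reproduce the sample dashboard numbers, not MeshAI's real pricing data.

```python
from collections import defaultdict

# Assumed blended prices in USD per 1M tokens (illustrative, not real rates).
PRICE_PER_M = {"gpt-4o": 30.0, "claude-sonnet": 30.0, "gpt-4o-mini": 3.33}

# Hypothetical buffered usage events, one per agent for brevity.
events = [
    {"agent": "researcher", "model": "gpt-4o", "tokens": 15_000},
    {"agent": "writer", "model": "claude-sonnet", "tokens": 8_000},
    {"agent": "reviewer", "model": "gpt-4o-mini", "tokens": 3_000},
]

# Sum tokens and cost per (agent, model) pair.
totals = defaultdict(lambda: {"tokens": 0, "cost": 0.0})
for e in events:
    t = totals[(e["agent"], e["model"])]
    t["tokens"] += e["tokens"]
    t["cost"] += e["tokens"] * PRICE_PER_M[e["model"]] / 1_000_000

for (agent, model), t in sorted(totals.items()):
    print(f"{agent} ({model}) → {t['tokens']:,} tokens, ${t['cost']:.2f}")
```

Keying the aggregation on the (agent, model) pair is what lets a multi-model crew show one line per agent even when several agents share a model.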