## Installation

```bash
pip install "meshai-sdk[llamaindex]"
```
## Usage

```python
from meshai import MeshAI
from meshai.integrations.llamaindex import MeshAILlamaHandler
from llama_index.core import Settings, SimpleDirectoryReader, VectorStoreIndex
from llama_index.core.callbacks import CallbackManager

client = MeshAI(api_key="msh_...", agent_name="my-index")
client.register(framework="llamaindex")

# Add the handler to the global callback manager
handler = MeshAILlamaHandler(client)
Settings.callback_manager = CallbackManager([handler])

# Use LlamaIndex as normal; all LLM calls are tracked
documents = SimpleDirectoryReader("./data").load_data()  # "./data" is an example path
index = VectorStoreIndex.from_documents(documents)
query_engine = index.as_query_engine()
response = query_engine.query("What is MeshAI?")
# Model and token counts are captured automatically
```
## How It Works

`MeshAILlamaHandler` implements LlamaIndex's callback interface, listening for LLM events. On each LLM completion, it:
- Extracts the model name from the LLM event payload
- Extracts input and output token counts from the event callback data
- Infers the provider from the model name
- Sends the usage event to MeshAI (buffered, non-blocking)
Works with any LlamaIndex-compatible LLM: OpenAI, Anthropic, Gemini, HuggingFace, and others.
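
To make the mechanism concrete, here is a minimal sketch of a handler built on the same LlamaIndex callback interface. The class name `UsageLoggingHandler` and the usage-extraction details are illustrative assumptions, not MeshAI's actual implementation; the shape of the raw response object varies by provider, which is why everything is read defensively.

```python
from llama_index.core.callbacks.base_handler import BaseCallbackHandler
from llama_index.core.callbacks.schema import CBEventType, EventPayload


class UsageLoggingHandler(BaseCallbackHandler):
    """Illustrative handler: prints model and token usage on each LLM completion."""

    def __init__(self) -> None:
        # Listen to every event type; filter inside on_event_end instead.
        super().__init__(event_starts_to_ignore=[], event_ends_to_ignore=[])

    def on_event_start(self, event_type, payload=None, event_id="", parent_id="", **kwargs) -> str:
        return event_id

    def on_event_end(self, event_type, payload=None, event_id="", **kwargs) -> None:
        if event_type != CBEventType.LLM or not payload:
            return
        response = payload.get(EventPayload.RESPONSE)
        raw = getattr(response, "raw", None)    # the provider's native response object
        usage = getattr(raw, "usage", None)     # attribute names vary by provider
        if usage is not None:
            print(
                "model:", getattr(raw, "model", "unknown"),
                "in:", getattr(usage, "prompt_tokens", None),
                "out:", getattr(usage, "completion_tokens", None),
            )

    # Required by the abstract interface; no per-trace state is needed here.
    def start_trace(self, trace_id=None) -> None:
        pass

    def end_trace(self, trace_id=None, trace_map=None) -> None:
        pass
```

A handler like this plugs in the same way as the MeshAI one: `Settings.callback_manager = CallbackManager([UsageLoggingHandler()])`.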
## Alternative: Proxy (Zero-Code)

If your LlamaIndex pipeline uses OpenAI or Anthropic, you can route requests through the proxy instead:

```bash
export OPENAI_BASE_URL=https://proxy.meshai.dev/v1/openai/k/msh_YOUR_PROXY_KEY
```
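
Some LlamaIndex setups resolve the OpenAI endpoint from the LLM's own `api_base` parameter rather than the environment, so if the variable does not take effect you can point the LLM at the proxy explicitly. A sketch, assuming the `llama-index-llms-openai` package; the model name is only an example:

```python
from llama_index.core import Settings
from llama_index.llms.openai import OpenAI

# Point LlamaIndex's OpenAI LLM at the proxy explicitly. Keep whichever
# model your pipeline already uses; "gpt-4o-mini" is a placeholder.
Settings.llm = OpenAI(
    model="gpt-4o-mini",
    api_base="https://proxy.meshai.dev/v1/openai/k/msh_YOUR_PROXY_KEY",
)
```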