Installation

pip install meshai-sdk[llamaindex]

Usage

from meshai import MeshAI
from meshai.integrations.llamaindex import MeshAILlamaHandler
from llama_index.core import Settings, VectorStoreIndex
from llama_index.core.callbacks import CallbackManager

client = MeshAI(api_key="msh_...", agent_name="my-index")
client.register(framework="llamaindex")

# Add to the global callback manager
handler = MeshAILlamaHandler(client)
Settings.callback_manager = CallbackManager([handler])

# Use LlamaIndex as normal — all LLM calls tracked
index = VectorStoreIndex.from_documents(documents)
query_engine = index.as_query_engine()
response = query_engine.query("What is MeshAI?")
# Model and tokens captured automatically

How It Works

MeshAILlamaHandler implements LlamaIndex’s callback interface and listens for LLM events. On each LLM completion, it:
  1. Extracts the model name from the LLM event payload
  2. Extracts input and output token counts from the event callback data
  3. Infers the provider from the model name
  4. Sends the usage event to MeshAI (buffered, non-blocking)
Works with any LlamaIndex-compatible LLM: OpenAI, Anthropic, Gemini, HuggingFace, and others.
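
The provider-inference step (3) can be sketched as simple prefix matching on the model name. This is a minimal illustration, not MeshAI’s actual implementation — the prefix table and the infer_provider helper below are assumptions for the sake of the example:

```python
# Hypothetical sketch of step 3: infer the provider from a model name.
# The prefix table is illustrative, not MeshAI's actual mapping.
PROVIDER_PREFIXES = {
    "gpt-": "openai",
    "claude-": "anthropic",
    "gemini-": "google",
}

def infer_provider(model_name: str) -> str:
    """Return the provider for a model name, or 'unknown' if unmatched."""
    name = model_name.lower()
    for prefix, provider in PROVIDER_PREFIXES.items():
        if name.startswith(prefix):
            return provider
    return "unknown"

print(infer_provider("gpt-4o"))         # -> openai
print(infer_provider("claude-3-opus"))  # -> anthropic
```

Because the handler only reads event payloads and infers metadata from the model name, it needs no provider-specific configuration — which is why it works across OpenAI, Anthropic, Gemini, HuggingFace, and other LlamaIndex-compatible LLMs.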

Alternative: Proxy (Zero-Code)

If your LlamaIndex pipeline uses OpenAI or Anthropic, you can route through the proxy instead:
export OPENAI_BASE_URL=https://proxy.meshai.dev/v1/openai/k/msh_YOUR_PROXY_KEY