Installation

pip install "meshai-sdk[semantic-kernel]"

Usage

Option 1: Global Tracking

from meshai import MeshAI
from meshai.integrations.semantic_kernel import track_semantic_kernel
from semantic_kernel import Kernel

client = MeshAI(api_key="msh_...", agent_name="my-sk-agent")
client.register(framework="semantic-kernel")

kernel = Kernel()
# ... add AI services to kernel ...

# Enable tracking on the kernel
track_semantic_kernel(client, kernel)

# Use as normal — all LLM calls tracked
result = await kernel.invoke(my_function, input="Hello")
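
Note that kernel.invoke is a coroutine, so outside an already-async context it must run inside an event loop. A minimal self-contained sketch of the calling pattern, using a stand-in kernel for illustration (the real Kernel requires a configured AI service):

```python
import asyncio

# Stand-in for the configured kernel above (illustration only): like the real
# Kernel.invoke, this is a coroutine and must be awaited in an event loop.
class StubKernel:
    async def invoke(self, fn, **kwargs):
        return fn(**kwargs)

async def main() -> str:
    kernel = StubKernel()
    return await kernel.invoke(lambda input: f"echo: {input}", input="Hello")

print(asyncio.run(main()))  # echo: Hello
```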

Option 2: Prompt Filter

from meshai import MeshAI
from meshai.integrations.semantic_kernel import MeshAIPromptFilter
from semantic_kernel import Kernel

client = MeshAI(api_key="msh_...", agent_name="my-sk-agent")
client.register(framework="semantic-kernel")

kernel = Kernel()
# ... add AI services to kernel ...

# Add as a prompt filter (Semantic Kernel's prompt-rendering filter type)
kernel.add_filter("prompt_rendering", MeshAIPromptFilter(client))

result = await kernel.invoke(my_function, input="Hello")
# Model and tokens tracked automatically

How It Works

MeshAI integrates through Semantic Kernel's filter system. The MeshAIPromptFilter intercepts prompt-rendering and completion events and performs four steps:
  1. Extracts the model name from the AI service configuration
  2. Extracts token counts from the completion result metadata
  3. Infers the provider from the model or service type
  4. Sends the usage event to MeshAI (buffered, non-blocking)
Works with all Semantic Kernel AI connectors: OpenAI, Azure OpenAI, Google, HuggingFace, and others.
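
The four steps above can be sketched in plain Python. This is an illustrative sketch with stand-in types, not the actual MeshAIPromptFilter or Semantic Kernel context classes, which differ in detail:

```python
from dataclasses import dataclass, field

@dataclass
class CompletionResult:
    model: str      # stand-in for the AI service configuration's model name
    metadata: dict  # stand-in for the completion result metadata

@dataclass
class UsageBuffer:
    events: list = field(default_factory=list)

    def send(self, event: dict) -> None:
        # Step 4: in the real SDK this is buffered and non-blocking.
        self.events.append(event)

def infer_provider(model: str) -> str:
    # Step 3: infer the provider from the model name (simplified heuristic).
    if model.startswith("gpt-"):
        return "openai"
    if model.startswith("gemini-"):
        return "google"
    return "unknown"

def on_completion(result: CompletionResult, buffer: UsageBuffer) -> dict:
    usage = result.metadata.get("usage", {})       # Step 2: token counts
    event = {
        "model": result.model,                     # Step 1: model name
        "provider": infer_provider(result.model),  # Step 3: provider
        "input_tokens": usage.get("prompt_tokens", 0),
        "output_tokens": usage.get("completion_tokens", 0),
    }
    buffer.send(event)                             # Step 4: send usage event
    return event

buf = UsageBuffer()
event = on_completion(
    CompletionResult(
        model="gpt-4o",
        metadata={"usage": {"prompt_tokens": 12, "completion_tokens": 5}},
    ),
    buf,
)
print(event["provider"], event["input_tokens"])  # openai 12
```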

Alternative: Proxy (Zero-Code)

If your Semantic Kernel setup uses OpenAI or Azure OpenAI, you can route requests through the MeshAI proxy instead of instrumenting code:
export OPENAI_BASE_URL=https://proxy.meshai.dev/v1/openai/k/msh_YOUR_PROXY_KEY
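
The variable can also be set from Python before the kernel's AI service is constructed. A sketch assuming the connector is built on the official openai-python client, which reads OPENAI_BASE_URL automatically; the key segment below is the placeholder from the export line above, not a real key:

```python
import os

# Point any OPENAI_BASE_URL-aware client at the MeshAI proxy. Set this
# before constructing the OpenAI client or the kernel's AI service.
os.environ["OPENAI_BASE_URL"] = (
    "https://proxy.meshai.dev/v1/openai/k/msh_YOUR_PROXY_KEY"
)

# No Semantic Kernel code changes are required: the underlying client
# picks up the base URL from the environment at construction time.
print(os.environ["OPENAI_BASE_URL"].startswith("https://proxy.meshai.dev/"))
```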