
Overview

LiveKit Tracing provides deep observability into your LiveKit agent’s performance by integrating the Cekura Python SDK directly into your agent code. This integration significantly enhances the information available in the Cekura platform, giving you end-to-end visibility into agent execution. What you get:
  • Complete conversation transcripts with full message history
  • Tool/function calls with inputs and outputs
  • Detailed performance metrics (STT, TTS, LLM, End-of-Utterance)
  • Session logs captured automatically from your application
  • Mock tools support for testing with predictable tool responses
  • Dual-channel audio recording for monitoring production calls
  • LiveKit job and room metadata

Video Tutorial

Prerequisites

  • A Cekura account with an API key
  • A LiveKit agent project

Setup

Use this setup in your test agents while running simulation calls from the Cekura platform.
Step 1: Install the Cekura Python SDK

pip install cekura==1.1.0rc1
Step 2: Integrate the SDK in your LiveKit agent

Add the Cekura tracer to your LiveKit agent’s entrypoint:
import os
from livekit import agents
from cekura.livekit import LiveKitTracer

# Initialize Cekura tracer
cekura = LiveKitTracer(
    api_key=os.getenv("CEKURA_API_KEY"),
    agent_id=123  # Your agent ID from Cekura dashboard
)

# `server` and `YourAssistant` are assumed to be defined elsewhere in your project
@server.rtc_session(agent_name="my_agent")
async def entrypoint(ctx: agents.JobContext):
    assistant = YourAssistant()
    session = agents.AgentSession(...)

    # Track session with automatic tool injection and export
    await cekura.track_session(ctx, session, assistant)

    await session.start(room=ctx.room, agent=assistant)
What this does:
  • Captures transcripts, tool calls, metrics, and session logs
  • Automatically injects mock tools configured in Cekura
  • Automatically configures chat/text mode when running text-based tests
  • Exports data to Cekura for test analysis
Step 3: Configure the LiveKit provider and enable tracing

Navigate to your agent settings in the Cekura dashboard, select LiveKit as the provider, and enable tracing.

Required configuration:
  • LiveKit API Key: Your LiveKit API key
  • LiveKit API Secret: Your LiveKit API secret
  • LiveKit URL: Your LiveKit server URL (e.g., wss://your-server.livekit.cloud)
  • Agent Name: The specific agent name to dispatch in LiveKit
  • LiveKit Config (JSON) (Optional): Additional room configuration parameters (accessible in agent code via get_simulation_data())
Testing connection types: Configure at least one connection type for voice-based testing:
  • WebRTC: Direct LiveKit room connection using the credentials configured above
  • Telephony: Phone-based testing if your LiveKit agent is connected to a phone system (requires Contact Number)
For text-based testing:
  • Chat: Select LiveKit to enable chat-based testing (uses the same WebRTC configuration, no additional setup required)
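The optional LiveKit Config (JSON) field accepts arbitrary key/value pairs that your agent can later read from additional_config via get_simulation_data(). A minimal illustrative payload — sample_key mirrors the example in the SDK reference, and environment_label is a made-up key:

```json
{
  "sample_key": "sample_value",
  "environment_label": "uat"
}
```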
Step 4: Run tests

Run tests using your preferred connection type:
  • Run with LiveKit: For WebRTC-based testing
  • Run with Voice: For telephony-based testing
  • Run with Text: For chat-based testing
Step 5: Analyze the call

Navigate to the Runs section to view your test results with enhanced data including transcripts, tool calls, session logs, and detailed performance metrics.

Enhanced Data in Cekura UI

With tracing enabled, you’ll see enriched information in the Cekura platform. Each run now displays:
  • Room Session ID: Visible in the call provider ID field, allowing you to correlate Cekura test runs with specific LiveKit sessions
  • Complete Transcript: Full conversation history from the LiveKit agent, including tool/function call requests and responses
  • Provider Call Data: Detailed metadata accessible in the run details, including job information, room configuration, session logs, and raw performance metrics
Provider Call Data contains the following information:
  • Job Information: Job ID, room name, participant details, and agent dispatch metadata
  • Room Information: Room configuration, participant count, session duration, and connection details
  • Session Logs: Captured agent session logs with timestamps, log levels, and messages for debugging
  • Raw Metrics:
    • STT (Speech-to-Text): Latency, duration, and transcription timing
    • TTS (Text-to-Speech): Generation time and audio synthesis metrics
    • LLM: Token usage, response time, and inference latency
    • EOU (End-of-Utterance): Detection timing and accuracy
  • Custom Metadata: Additional metadata passed to the SDK via **metadata parameters
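As a rough sketch of how the **metadata parameters flow through: keyword arguments you pass to track_session() or observe_session() arrive as an ordinary dict and surface in the run details. The key names below are hypothetical examples, not fields the SDK requires:

```python
def collect_metadata(**metadata):
    # Keyword arguments are gathered into a plain dict, which is how
    # custom metadata ultimately reaches the Cekura run details.
    return metadata

# Hypothetical keys -- use whatever labels are meaningful for your runs
meta = collect_metadata(environment="staging", deploy_version="1.2.0")
print(meta)  # {'environment': 'staging', 'deploy_version': '1.2.0'}
```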

Automatic Chat Mode Support

The SDK automatically handles chat/text mode configuration when you run scenarios using “Run with Text” — no code changes required. When you run a text-based test, the SDK automatically patches your session to disable audio processing, enabling pure text-based interactions with your agent. This provides:
  • Targeted testing of your agent’s conversational logic without audio overhead
  • Cost savings by eliminating STT/TTS provider costs and reducing Cekura credit usage
  • Faster simulations compared to voice-based tests
Your track_session() integration works seamlessly for both voice and text modes.

Using Mock Tools with LiveKit Tracing

The SDK supports mock tools, allowing you to test your agent with predictable tool responses. This is useful for creating reproducible test scenarios without relying on live external services. To use mock tools:
  1. Create mock tools in Cekura: Set up your mock tool configurations in the Cekura dashboard. See the Mock Tools guide for detailed instructions.
  2. SDK handles the rest: Once mock tools are configured, the SDK automatically routes tool calls to Cekura’s mock endpoints during testing - no additional code changes needed.
  3. Test with predictable data: Your agent will receive the mock responses you configured, making it easy to test specific scenarios and edge cases.
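Conceptually, mock routing works like a lookup that takes precedence over the live tool implementation. The sketch below is only an illustration of that idea, not the SDK’s actual mechanism; MOCK_RESPONSES and lookup_account are made-up names:

```python
# Illustrative only: scenario-configured responses keyed by tool name
MOCK_RESPONSES = {
    "lookup_account": {"balance": 250.0, "status": "active"},
}

def call_tool(name, live_fn, mocks=MOCK_RESPONSES):
    """Return the configured mock response if one exists, else call the live tool."""
    if name in mocks:
        return mocks[name]
    return live_fn()

# During a test run, the configured mock wins over the live call
result = call_tool("lookup_account", live_fn=lambda: {"error": "network"})
print(result)  # {'balance': 250.0, 'status': 'active'}
```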

Best Practices

  1. Use the right method for your environment: Use track_session() in your test/UAT environments for simulation testing with mock tools. Use observe_session() in your production environment for monitoring live calls with audio recording.
  2. Use environment variables for credentials: Don’t hardcode API keys in your code
  3. Keep the SDK updated: Run pip install --upgrade cekura periodically for the latest features
  4. Review tool calls regularly: Add the predefined metric Tool Call Success to your evaluators
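One way to follow practice 1 is to branch on a deployment-environment variable at startup. A minimal sketch of the selection logic, assuming a hypothetical APP_ENV variable of your own:

```python
import os

def select_cekura_method(env=None):
    """A possible pattern: pick the Cekura method by deployment environment.

    track_session() suits simulation/test runs; observe_session() suits
    production monitoring with audio recording.
    """
    env = env or os.getenv("APP_ENV", "test")
    return "observe_session" if env == "production" else "track_session"

print(select_cekura_method("production"))  # observe_session
print(select_cekura_method("test"))        # track_session
```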

SDK Reference

LiveKitTracer Initialization

from cekura.livekit import LiveKitTracer

cekura = LiveKitTracer(
    api_key="your_api_key",         # Required: Your Cekura API key
    agent_id=123,                   # Required: Agent ID from dashboard
    host="https://api.cekura.ai",   # Optional: Custom API host
    enabled=True                    # Optional: Enable/disable tracer
)

track_session()

Tracks simulation/test calls with automatic mock tool injection and chat mode support. Collects transcripts, tool calls, session logs, and metrics.
await cekura.track_session(
    ctx,               # Required: LiveKit JobContext
    session,           # Required: LiveKit AgentSession
    agent,             # Optional: Agent instance for mock tool injection
    capture_logs=True, # Optional: Capture session logs (default: True)
    **metadata         # Optional: Custom metadata
)
Environment variables:
  • CEKURA_TRACING_ENABLED="false": Disable tracking entirely
  • CEKURA_MOCK_TOOLS_ENABLED="false": Disable only mock tool injection
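For example, you could disable tracking for a local run by exporting the flag before starting your agent. A minimal sketch of how such a boolean flag is typically read — check the SDK for its exact parsing:

```python
import os

# Simulate the environment for this example; normally you would export
# CEKURA_TRACING_ENABLED="false" in your shell or deployment config.
os.environ["CEKURA_TRACING_ENABLED"] = "false"

# A common pattern for boolean env flags: anything except "false" keeps the default
tracing_enabled = os.getenv("CEKURA_TRACING_ENABLED", "true").strip().lower() != "false"
print(tracing_enabled)  # False
```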

observe_session()

Monitors production calls with dual-channel audio recording. Collects transcripts, tool calls, session logs, and metrics. Requires LiveKit credentials configured in Cekura.
await cekura.observe_session(
    ctx,               # Required: LiveKit JobContext
    session,           # Required: LiveKit AgentSession
    capture_logs=True, # Optional: Capture session logs (default: True)
    **metadata         # Optional: Custom metadata
)
Environment variables:
  • CEKURA_OBSERVABILITY_ENABLED="false": Disable observability entirely

get_simulation_data()

Extracts simulation data populated by Cekura when running simulation calls from the platform. Returns empty dict for phone-based calls.
await ctx.connect()  # Must be called first

simulation_data = cekura.get_simulation_data(
    ctx    # Required: LiveKit JobContext
)
Returns: Dictionary with simulation metadata:
{
    "scenario_id": 123,              # Scenario being tested
    "run_id": 456,                   # Current run ID
    "test_profile_data": {           # Test profile data
        "customer_name": "John Doe",
        "account_number": "ACC-12345"
    },
    "additional_config": {           # LiveKit config from agent settings
        "sample_key": "sample_value"
    }
}
This data is only available when using Option 2 (Automated LiveKit Testing), i.e., running tests via “Run with LiveKit”. Phone-based calls (Option 1) return an empty dictionary.
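One common use is seeding the agent’s instructions from the test profile. A minimal sketch using the sample payload above (get_simulation_data() itself requires a connected JobContext, so the dict here is hard-coded for illustration):

```python
# Sample payload in the shape returned by get_simulation_data()
simulation_data = {
    "scenario_id": 123,
    "run_id": 456,
    "test_profile_data": {"customer_name": "John Doe", "account_number": "ACC-12345"},
    "additional_config": {"sample_key": "sample_value"},
}

# Fall back to an empty profile for phone-based calls, where the dict is empty
profile = simulation_data.get("test_profile_data") or {}
instructions = (
    f"You are assisting {profile.get('customer_name', 'the caller')} "
    f"(account {profile.get('account_number', 'unknown')})."
)
print(instructions)  # You are assisting John Doe (account ACC-12345).
```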

Next Steps