LangGraph

Deepchecks integrates seamlessly with LangGraph, letting you capture and evaluate your LangGraph workflows. With this integration, you can collect traces from LangGraph interactions and automatically send them to Deepchecks for observability, evaluation, and monitoring of your agents across the graph.

How it works

Data upload and evaluation
Capture traces from your LangGraph interactions and send them to Deepchecks for evaluation.

Instrumentation
We use OTEL + OpenInference to automatically instrument LangGraph. This gives you rich traces, including LLM calls, tool invocations, and agent-level spans within the graph.

Registering with Deepchecks
Traces are uploaded through a simple register_dc_exporter call, where you provide your Deepchecks API key, application, version, and environment.

Viewing results
Once uploaded, you'll see your traces in the Deepchecks UI, complete with spans, properties, and auto-annotations. See the documentation on multi-agentic use-case properties for more information.


Package installation

pip install "deepchecks-llm-client[otel]"
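After installing, you can quickly confirm the package is importable from Python. This is a minimal sanity-check sketch, not part of the Deepchecks API:

```python
import importlib.util

# True once `pip install "deepchecks-llm-client[otel]"` has run in this environment
available = importlib.util.find_spec("deepchecks_llm_client") is not None
print("deepchecks-llm-client installed:", available)
```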

Instrumenting LangGraph

from deepchecks_llm_client.data_types import EnvType
from deepchecks_llm_client.otel import LanggraphIntegration

# Register the Deepchecks exporter
LanggraphIntegration().register_dc_exporter(
    host="https://app.llm.deepchecks.com/",   # Deepchecks endpoint
    api_key="Your Deepchecks API Key",        # API key from your Deepchecks workspace
    app_name="Your App Name",                 # Application name in Deepchecks
    version_name="Your Version Name",         # Version name for this run
    env_type=EnvType.EVAL,                    # Environment: EVAL, PROD, etc.
    log_to_console=True,                      # Optional: also log spans to console
)

Example

This is a simple LangGraph workflow for a weather assistant: it retrieves current weather information through a tool-based search, optionally refines the answer with a summarizer LLM step, and executes through a multi-node agentic graph. The tracing and Deepchecks registration code is included:

import os
from deepchecks_llm_client.data_types import EnvType
from deepchecks_llm_client.otel import LanggraphIntegration
from typing import Literal
from langchain_core.messages import HumanMessage
from langchain_core.tools import tool
from langgraph.graph import StateGraph, MessagesState
from langgraph.prebuilt import ToolNode
from langchain_openai import AzureChatOpenAI

# Configure the Azure OpenAI model used by the agent nodes
os.environ["AZURE_OPENAI_API_KEY"] = "Your API Key"
model = AzureChatOpenAI(
    openai_api_version="The API Version",
    azure_endpoint="The LLM Endpoint",
    azure_deployment="The Deployment",
    model="The Model",
    validate_base_url=False,
)


# Register the Deepchecks exporter before running the graph
LanggraphIntegration().register_dc_exporter(
    host="https://app.llm.deepchecks.com/",   # Deepchecks endpoint
    api_key="Your Deepchecks API Key",        # API key from your Deepchecks workspace
    app_name="Your App Name",                 # Application name in Deepchecks
    version_name="Your Version Name",         # Version name for this run
    env_type=EnvType.EVAL,                    # Environment: EVAL, PROD, etc.
    log_to_console=True,                      # Optional: also log spans to console
)

# Define tools
@tool
def search(query: str):
    """Call to surf the web."""
    if "sf" in query.lower() or "san francisco" in query.lower():
        return "It's 60 degrees and foggy."
    if "ny" in query.lower() or "new york" in query.lower():
        return "It's 75 degrees and sunny."
    return "I couldn't find weather info."

tools = [search]
tool_node = ToolNode(tools)

# --- Helper branching ---
def should_continue(state: MessagesState) -> Literal["tools", "summarizer", "__end__"]:
    """Decide what to do next."""
    last_message = state["messages"][-1]
    if last_message.tool_calls:
        return "tools"
    if "Weather in" in last_message.content:
        return "summarizer"   # extra LLM step to refine
    return "__end__"

# --- Nodes ---
def call_agent(state: MessagesState):
    """First agent call (plans or tool call)."""
    response = model.invoke(state["messages"])
    return {"messages": [response]}

def call_summarizer(state: MessagesState):
    """Second agent call to refine the answer (multi-call chain)."""
    last_message = state["messages"][-1]
    prompt = f"Summarize the weather report in one friendly sentence: {last_message.content}"
    response = model.invoke([HumanMessage(content=prompt)])
    return {"messages": [response]}

# --- Build graph ---
workflow = StateGraph(MessagesState)
workflow.add_node("agent", call_agent)
workflow.add_node("tools", tool_node)
workflow.add_node("summarizer", call_summarizer)

workflow.add_edge("__start__", "agent")
workflow.add_conditional_edges("agent", should_continue)
workflow.add_edge("tools", "agent")
workflow.add_edge("summarizer", "__end__")

app = workflow.compile()

if __name__ == "__main__":
    # --- Run ---
    final_state = app.invoke(
        {"messages": [HumanMessage(content="Can you tell me the weather in SF?")]},
    )
    print(final_state["messages"][-1].content)
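The conditional routing in should_continue can also be exercised in isolation. The sketch below mirrors that branching logic with a hypothetical StubMessage stand-in (not part of the LangGraph or Deepchecks APIs), which is handy for unit-testing routing decisions without invoking an LLM:

```python
from dataclasses import dataclass, field

@dataclass
class StubMessage:
    """Hypothetical stand-in for a chat message, for illustration only."""
    content: str
    tool_calls: list = field(default_factory=list)

def route(last_message) -> str:
    """Mirrors the branching logic of should_continue above."""
    if last_message.tool_calls:
        return "tools"          # the model asked to call a tool
    if "Weather in" in last_message.content:
        return "summarizer"     # refine the draft answer with an extra LLM step
    return "__end__"            # nothing left to do

print(route(StubMessage("", tool_calls=[{"name": "search"}])))   # tools
print(route(StubMessage("Weather in SF: 60 and foggy")))         # summarizer
print(route(StubMessage("All done.")))                           # __end__
```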