
SDK Quickstart

Use the Deepchecks LLM Evaluation Python SDK to send data to the system

Preface

The Deepchecks LLM Evaluation SDK is a Python package built on top of the Deepchecks LLM Evaluation REST API. To install it, simply run pip install deepchecks-llm-client

The SDK allows you to upload data to the system. This can be done automatically, using code instrumentation for OpenAI calls, or manually, using explicit SDK function calls. For more info, check out the SDK Reference section.
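As a minimal sketch of the automatic route, the snippet below assumes that initializing with auto_collect=True instruments OpenAI calls made in the process, as described above; the model name, prompt and API keys are placeholders, and the exact capture behavior is documented in the SDK Reference.

import openai

from deepchecks_llm_client.client import dc_client
from deepchecks_llm_client.data_types import EnvType

# Assumption: with auto_collect=True the SDK hooks into OpenAI calls made in this process
# and logs each one as an interaction - see the SDK Reference for the exact behavior
dc_client.init(host="https://app.llm.deepchecks.com",
               api_token="Fill Key Here",
               app_name="DemoApp", version_name="0.0.1",
               env_type=EnvType.EVAL, auto_collect=True, verbose=True)

# A regular OpenAI call (openai<1.0 style); the SDK is expected to capture it automatically
openai.api_key = "Fill OpenAI Key Here"
response = openai.ChatCompletion.create(
    model="gpt-3.5-turbo",
    messages=[{"role": "user", "content": "my user input"}],
)
print(response["choices"][0]["message"]["content"])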

In addition, the SDK can be used to annotate the logged interactions and to download the interactions enriched with various deepchecks-computed enrichments, such as topics, properties and estimated annotations.
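As an illustration only, annotating an interaction that was already logged might look like the sketch below. The annotate method name and its parameters are assumptions here, not the confirmed API; check the SDK Reference for the exact call.

from deepchecks_llm_client.client import dc_client
from deepchecks_llm_client.data_types import AnnotationType

# Hypothetical call - the method name and signature are assumptions, see the SDK Reference.
# The idea: attach a Good/Bad annotation to an interaction identified by its user_interaction_id
dc_client.annotate(user_interaction_id="ExternalSystemUniqueIdIfExist",
                   annotation=AnnotationType.BAD)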

An interaction is a single call of the LLM pipeline, consisting of the following fields:

  • user_interaction_id: Must be unique within a single version. Used for identifying interactions when updating annotations, and for identifying the same interaction across different versions.
  • input: (mandatory) The input to the LLM pipeline.
  • information_retrieval: Data retrieved as context for the LLM in this interaction.
  • full_prompt: The full prompt to the LLM used in this interaction.
  • output: (mandatory) The pipeline output returned to the user.
  • annotation: Was the pipeline response good enough? (Good/Bad/Unknown)
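Put together, a single interaction is just a record like the one below (illustrative values only; these are the same fields that the SDK's log_interaction call, shown later, takes as keyword arguments):

interaction = {
    "user_interaction_id": "ExternalSystemUniqueIdIfExist",  # unique within a version
    "input": "my user input",                                # mandatory
    "information_retrieval": "system part: my information retrieval",
    "full_prompt": "system part: my user input",
    "output": "my model response",                           # mandatory
    "annotation": "Good",                                     # Good / Bad / Unknown
}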

📌

Uploading data directly from the UI

Notice: you can also upload data to the system in CSV format directly from the UI.

Generating an API Key

Creating a new Application

Uploading data using the SDK

# In this code snippet we demonstrate how to upload evaluation data (Golden Set)
# to Deepchecks LLM Evaluation using the Python SDK

# Please note - Deepchecks' SDK is non-intrusive, hence it does not throw exceptions; in case
# of failures, expect to see only log prints (when verbose=True)

from datetime import datetime

from deepchecks_llm_client.client import dc_client
from deepchecks_llm_client.data_types import EnvType, AnnotationType

# This is deepchecks' service URL
DEEPCHECKS_LLM_HOST = "https://app.llm.deepchecks.com"

# Log in to deepchecks' service, generate a new API key (Configuration -> API Key) and place it here
DEEPCHECKS_LLM_API_KEY = "Fill Key Here"

# Use "Update Data" in deepchecks' service, to create a new application name and place it here
# This application must be exist, deepchecks' SDK cannot function without pre-defined application
# to work with
DEEPCHECKS_APP_NAME = "DemoApp"

# Init SDK's client
dc_client.init(host=DEEPCHECKS_LLM_HOST, api_token=DEEPCHECKS_LLM_API_KEY,
               app_name=DEEPCHECKS_APP_NAME, version_name="0.0.1",
               env_type=EnvType.EVAL, auto_collect=False, verbose=True)

# Log an LLM call to the Deepchecks server (run this in a loop over your real data)
new_id = dc_client.log_interaction(input="my user input",
                                   output="my model response",
                                   full_prompt="system part: my user input",
                                   information_retrieval="system part: my information retrieval",
                                   annotation=AnnotationType.GOOD,
                                   user_interaction_id="ExternalSystemUniqueIdIfExist",
                                   started_at=datetime(2023, 10, 31, 15, 1, 0, 200).astimezone(),
                                   finished_at=datetime(2023, 10, 31, 15, 1, 0, 550).astimezone())

print(f"Created new interaction in deepchecks server with id: {new_id}")

📘

User Interaction ID

In the SDK/API you might see user_interaction_id. This is your way to set a unique identifier for your "inputs", so that the same "input" across versions gets the same user_interaction_id.

If you maintain such an id in your system, please add it when you upload data. You will be able to search by the id from the UI / REST API. This is very helpful in cases where you have feedback on a particular interaction and want to observe that interaction in Deepchecks LLM Eval.

Notice that user_interaction_id must be unique in the context of a single version!
If you do not set it, Deepchecks will generate a globally unique UUID and set it for you.
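If your system does not already carry such an id, one simple approach (an illustration, not a requirement of the SDK) is to derive it deterministically from the input text, so the same input logged against a new version maps to the same user_interaction_id:

import hashlib

def stable_interaction_id(user_input: str) -> str:
    # Same input text -> same id, across versions and runs
    return hashlib.sha256(user_input.encode("utf-8")).hexdigest()

user_input = "my user input"
dc_client.log_interaction(input=user_input,
                          output="my model response",
                          user_interaction_id=stable_interaction_id(user_input))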

SDK Reference

For a comprehensive list of available functionality - such as "auto instrumenting" OpenAI calls, uploading custom properties, tracing steps in an LLM chain, downloading data including deepchecks' calculated enrichments, and more - see the full SDK reference linked below.

To install:

pip install deepchecks-llm-client

For the full reference:

Python SDK Reference

📌

A few notes

  • dc_client (from deepchecks_llm_client.client import dc_client) is your handle for any interaction with the SDK.
  • dc_client is a singleton; there is exactly one instance per process.
  • For now, we do not support multiprocessing or async execution.
  • Since the SDK can be integrated into production environments, we decided to adopt non-intrusive behavior. Hence, we try not to throw exceptions and instead print error/warning logs. Log printing can be controlled using init_verbose, verbose and log_level in the init function (dc_client.init()), as shown in the sketch below.
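For example, here is a minimal sketch of controlling log output at init time. The parameter names come from the note above; passing a standard logging level to log_level is an assumption, so check the SDK Reference for the accepted values.

import logging

from deepchecks_llm_client.client import dc_client
from deepchecks_llm_client.data_types import EnvType

# verbose and log_level are named in the note above; using a value from the standard
# logging module here is an assumption - consult the SDK Reference for accepted values
dc_client.init(host="https://app.llm.deepchecks.com",
               api_token="Fill Key Here",
               app_name="DemoApp", version_name="0.0.1",
               env_type=EnvType.EVAL, auto_collect=False,
               verbose=True, log_level=logging.WARNING)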

What's Next

Now that you have data in the system, head over to the dashboard to observe the insights deepchecks has to offer.