Release Notes

0.28.0 Release Notes

by Yaron Friedman

This version includes **support for agent use-cases, experiment management components, and annotations on the session level**, along with more features and stability and performance improvements that are part of our 0.28.0 release.

Deepchecks LLM Evaluation 0.28.0 Release

  • 🕵️‍♂️ Support of Agent Use-Cases
  • 🥼 Experiment Management
  • ⏱️ Introducing Tracing Metrics (Latency and Tokens)
  • 👍 Session Annotation
  • 🗃️ Additional Retrieval Use-Case Properties

What’s New and Improved?

  • Support of Agent Use-Cases

    • We've introduced a new interaction type: Tool Use, designed to evaluate agentic workflows where LLMs invoke external tools (e.g., calculators, web search, APIs) during multi-step reasoning. This structure captures each step's observation, action, and response, enabling detailed analysis of the agent's decision-making process.
    • To support this, we've added specialized properties such as Tool Appropriateness, Tool Efficiency, and Action Relevance, allowing for nuanced evaluation of tool-based interactions. These properties help assess whether the chosen tools are suitable, efficiently used, and relevant to the task at hand. For more details, click here.
Example of an Agent Use-Case Session with Tool-Use Unique Properties
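For illustration, here is a hedged sketch of logging one agent step as a Tool Use interaction via the SDK. The type name follows this release note; mapping the step's observation/action/response onto the input/output fields, and the exact type string, are assumptions:

from deepchecks_llm_client.data_types import LogInteraction

# One step of an agent run: the model decides to invoke a calculator tool.
# Grouping steps under one session_id lets the session view show the full trace.
tool_step = LogInteraction(
    user_interaction_id="agent-run-1-step-2",
    input="Observation: the user asked for 17% of 2,340. Action: calculator(0.17 * 2340)",
    output="Tool response: 397.8",
    interaction_type="Tool Use",  # assumed to match the new type's display name
    session_id="agent-run-1",
)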


  • Experiment Management

    • We've enhanced our experiment management by introducing interaction-type-level configuration. Beyond version-level metadata, you can now define experiment-specific details—such as model identifiers, prompt templates, and custom tags—directly within each interaction type. This granularity enables more precise comparisons across experiments and a clearer understanding of how specific configurations impact performance. For more details, click here.
Experiment Configuration Data on the Interaction Type Level


  • Introducing Tracing Metrics

    • We've added support for tracing metrics, enabling you to analyze interaction latency and token usage across your LLM workflows. These metrics are aggregated at the session level, providing a comprehensive view of performance over multi-step interactions. This enhancement facilitates deeper analysis of version behavior and more effective comparisons between different configurations. For more details, click here.
    Sorting and Filtering by tracing data on the Data Screen
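Latency is derived from the timestamps logged with each interaction; below is a minimal sketch, assuming the started_at/finished_at parameters shown in the SDK examples later in these notes are what the tracing metrics read from (how token counts are logged is not shown here):

from datetime import datetime, timezone

from deepchecks_llm_client.data_types import LogInteraction

started = datetime(2024, 9, 1, 23, 59, 59, tzinfo=timezone.utc)
finished = datetime(2024, 9, 2, 0, 0, 3, tzinfo=timezone.utc)

# Logging both timestamps yields a 4-second latency for this step;
# session-level latency aggregates across all steps sharing the session_id.
step = LogInteraction(
    user_interaction_id="id-1",
    input="my user input",
    output="my model output",
    started_at=started,
    finished_at=finished,
    session_id="session-1",
)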


  • Session Annotations

    • We've expanded our annotation capabilities by introducing session-level annotations. Previously, annotations were available only at the interaction level. Now, Deepchecks aggregates these into a single session annotation using a configurable logic. This enhancement is particularly beneficial for evaluating multi-step workflows, such as agentic or conversational sessions, where understanding the overall session quality is crucial. For more details, click here.
A session that was annotated "bad" due to a bad interaction annotation on a flagged interaction type (Q&A)

0.27.0 Release Notes

by Yaron Friedman

This version includes new categorical prompt properties, document classification and retrieval properties for RAG use-cases, and flexibility in model choice, along with more features and stability and performance improvements that are part of our 0.27.0 release.

Deepchecks LLM Evaluation 0.27.0 Release

  • 🏷️ New Categorical Prompt Properties
  • 🗃️ Document Classification and Retrieval Properties for RAG Use-Cases
  • 🤖 Support of Claude-Sonnet-3.7 as an Optional Model for Prompt Properties
  • 🌐 Customize Translation Settings per App

What’s New and Improved?

  • New Categorical Prompt Properties

    • We've introduced a new type of prompt property: Categorical. Previously, only numerical properties were available, providing scores of 1-5. Now, you can categorize interactions based on user-defined categories and guidelines, with options to allow the LLM to create new categories and classify an interaction into multiple categories. For more details, click here.
Add Categorical Property Screen

  • Document Classification and Retrieval Properties for RAG Use-Cases

    • We now offer enhanced support for RAG use-cases by introducing document classification into Platinum, Gold, and Irrelevant classes, along with dedicated retrieval-use-case properties derived from these classifications. To enable classification and retrieval property calculations, go to "Edit Application" on the "Manage Applications" screen.
    Example of Document Classification for a Single Interaction

    Example of the MRR Retrieval Property calculation
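For reference, the MRR property presumably follows the standard mean-reciprocal-rank definition; a minimal sketch under that assumption:

# Reciprocal rank per query: 1 / (1-based rank of the first relevant document
# among the retrieved documents). MRR averages this across queries.
def mean_reciprocal_rank(first_relevant_ranks):
    return sum(1.0 / r for r in first_relevant_ranks) / len(first_relevant_ranks)

# e.g. first relevant documents at ranks 1, 3, and 2 -> (1 + 1/3 + 1/2) / 3
print(mean_reciprocal_rank([1, 3, 2]))  # ~0.611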


  • Support of Claude-Sonnet-3.7 as an Optional Model for Prompt Properties

    • In this version, we introduce support for the Claude-Sonnet-3.7 model for custom prompt properties. To view usage info and switch your model to Sonnet-3.7, go to "Preferences" on the "Workspace Settings" tab at the organization level, or "Edit Application" on the "Manage Applications" screen at the application level.
  • Customize Translation Settings per App

    • Customers with translation capabilities can now toggle translation on or off at the application level. When translation is off, newly uploaded data will not be translated. This can be configured in the "Edit Application" window.

0.26.0 Release Notes

by Yaron Friedman

This version includes improved property flows, an updated usage-tracking method, and flexibility in model choice, along with more features and stability and performance improvements that are part of our 0.26.0 release.

Deepchecks LLM Evaluation 0.26.0 Release

  • 📄 New Properties Screen
    • 📕 Note: Properties Naming Update
  • 🦸‍♀️ LLM Model Choice for Prompt Properties
  • 🪙 Usage Tracking Updated to Deepchecks Processing Units (DPUs)
  • 🧮 New Property Recalculation Options
  • 💽 Download All Interactions in a Session

What’s New and Improved?

  • New Properties Screen

    • In the main properties screen, LLM, built-in and custom properties have been consolidated into one unified list, with icon differentiation for each property type.
    Illustration of the Properties Screen
    • Properties on the main screen will be automatically calculated for all interactions within the relevant interaction type. Additional properties can be incorporated by selecting them from the “property bank.”
    • A centralized “hub” is now available for adding and customizing new properties.
    • For more information on the new properties structure and flows, click here.
    • 📘 Property Naming Updates

      • To improve property naming and understanding, Deepchecks no longer requires the property types "out", "in", "llm", and "custom". Instead, names of active properties must be unique. Accordingly, the "type" field is now redundant in YAML (see the sketch after this list), and some properties were renamed for clarity and uniqueness.
      • Renamed properties:
        • All properties that had an "_INPUT" suffix, e.g. FLUENCY_INPUT and TOXICITY_INPUT, are now INPUT_FLUENCY and INPUT_TOXICITY
        • All properties that had an "_OUTPUT" or an "_LLM" suffix have dropped that suffix (e.g. LEXICAL_DENSITY_OUTPUT is now LEXICAL_DENSITY, and COMPLETENESS_LLM is now COMPLETENESS)
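A minimal before/after sketch of the YAML change, assuming a properties list of this shape (the exact schema may differ):

# Before 0.26.0 (hypothetical shape): a "type" field disambiguated properties
properties:
  - name: TOXICITY_INPUT
    type: in
  - name: COMPLETENESS_LLM
    type: llm
---
# From 0.26.0 on: active property names are unique, so no "type" field
properties:
  - name: INPUT_TOXICITY
  - name: COMPLETENESS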
  • LLM Model Choice for Prompt Properties

    • You can now select which models process your Prompt properties in the Deepchecks app, providing greater flexibility. Usage is calculated based on the selected LLM model.

    • This configuration can be managed on two levels:

      • Organization-wide default settings (accessible via "Workspace Settings").
      • Application-specific settings (override the default for specific applications in the "Application" screen).
  • Usage Tracking Method Shift — from Tokens to DPUs

    • We've updated our usage tracking method from tokens to DPUs (Deepchecks Processing Units) to accommodate our new flexible model choices. In addition to being a more accurate and transparent usage tracking method, this change provides you with a unified pool of processing units which you can allocate as needed.
    • The usage screen displays your plan in DPUs and shows your monthly usage. Click the small arrow to see a detailed breakdown of your monthly DPU usage.
    • Where applicable, you'll see how 1M LLM token usage converts to DPUs for different models.
  • Property Recalculation Options

    • You can now recalculate properties based on interaction upload dates (time range), in addition to recalculating across all interactions in selected versions.

  • Download All Interactions in a Session (Available in UI & SDK)

    • In the interaction download flow, we've added the option to download all other interactions in a given session. Checking this option when downloading multiple interactions will download all of the interactions from all the relevant sessions.

  • You can also download all session-related interactions with the SDK:

# Import paths are assumed here; adjust to your installed Deepchecks SDK package
from deepchecks_llm_client.client import DeepchecksLLMClient
from deepchecks_llm_client.data_types import EnvType

dc_client = DeepchecksLLMClient(
    host="HOST",
    api_token="API_KEY",
)

# return_session_related=True also fetches every other interaction
# from the sessions the listed interactions belong to
df = dc_client.get_data(
    app_name="APP_NAME",
    version_name="APP_VERSION",
    env_type=EnvType.EVAL,
    user_interaction_ids=['46eaf233-5825-4bad-ad02-0d8dbd94994e', '80ab45da-53f4-45b1-a3b5-94c7afe05bec'],
    return_session_related=True,
)

0.25.0 Release Notes

by Shir Chorev

This version includes new user roles, an updated design for the expected output data, and metadata information for the automatic annotation pipeline, along with more features and stability and performance improvements that are part of our 0.25.0 release.

Deepchecks LLM Evaluation 0.25.0 Release

  • 🎨 New Expected Output Design
  • ⏳ Estimated Annotations Configuration - Metadata & Download
  • 🫵 User Roles

What’s New and Improved?

  • Expected Output Design

    • When an "expected_output" is logged for an interaction, it is now conveniently available alongside the original output, allowing easy comparison and highlighting, and evaluation via the Expected Output Similarity property.

  • Estimated Annotations Configuration Updates

    • The interaction type auto annotation configuration now allows:

      • Seeing when the auto-annotation YAML was last uploaded and by whom.
      • Downloading the current or default (preset) configuration for that interaction type.

  • User roles

    • Deepchecks now supports different user roles. The following are the three preset roles:
      • Viewers - can view the applications and data inside the Deepchecks system
      • Members - can upload data, update the properties and evaluation configurations
      • Admins - full control, including inviting and removing users from the organization, and organization deletion

0.24.0 Release Notes

by Shir Chorev

This version includes support for expected outputs (comparison to ground truth) and customization of interaction types for evaluation, along with more features and stability and performance improvements that are part of our 0.24.0 release.

Deepchecks LLM Evaluation 0.24.0 Release

  • ✅ Support for Expected Outputs for Evaluation Data Comparison
  • 🥗 Custom Interaction Types & Configuration

What’s New and Improved?

  • Support for Expected Outputs for Evaluation Data Comparison

    • You can now send an expected_output field, allowing you to log your ground truths alongside your outputs (see the sketch below).

    • Expected Output Similarity Property - Deepchecks' built-in property for assessing the accuracy of your output in comparison to the ground truth; 5 is highly accurate and 1 is the opposite. It is used for identifying wrong outputs in the auto-annotation configuration. Read more about this property here.
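A minimal logging sketch, assuming expected_output is passed as a LogInteraction field (its exact placement in the SDK may differ):

from deepchecks_llm_client.data_types import LogInteraction

# expected_output (assumed field name) logs the ground truth
# next to the model's actual output for comparison
sample = LogInteraction(
    user_interaction_id="id-1",
    input="What is the capital of France?",
    output="Paris is the capital of France.",
    expected_output="Paris",
)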

  • Custom Interaction Types & Configuration

    • Update to the Interaction Types screen, including the auto-annotation configuration which is now available here.

    • You can now define your custom interaction types alongside the Deepchecks preset ones. Choose an icon, name, the desired properties and your auto-annotation configuration and you’re ready to go.
    • When defining a new interaction type you can either start from scratch, or use as a template any of the interaction types that you already have defined in your current app.

🚧

Note: SDK Breaking Changes

All calls to log_batch_interactions are now made using the LogInteraction object, which is a renaming of the previous LogInteractionType object.

Previously:

dc_client.log_batch_interactions(
    app_name="app", version_name="version", env_type=EnvType.EVAL,
    # interactions takes the objects to log in one batch
    interactions=[LogInteractionType(
        input="input",
        output="output",
        user_interaction_id="id",
        interaction_type="Q&A",
        session_id="session-id",
    )],
)

now:

dc_client.log_batch_interactions(
    app_name="app", version_name="version", env_type=EnvType.EVAL,
    # same call, with the renamed LogInteraction object
    interactions=[LogInteraction(
        input="input",
        output="output",
        user_interaction_id="id",
        interaction_type="Q&A",
        session_id="session-id",
    )],
)

0.23.0 Release Notes

by Shir Chorev

This version introduces the concept of Sessions, enabling better organization and analysis of interactions across complex workflows such as agents. This capability is now fully integrated across the platform, including SDK support for managing and interacting with session-level data. The Sessions concept, along with additional improvements, stability updates, and performance enhancements, is part of our 0.23.0 release.

Deepchecks LLM Evaluation 0.23.0 Release

  • 🧮 New Sessions Layer, and SDK Enhancements Supporting it
  • 🔡 Data Screen Content Search
  • ⛏️ Feature Extraction Interaction Type

What’s New and Improved?

  • New Sessions Layer for evaluating and viewing multi-phase and agentic workflows

    • Sessions introduce a new hierarchy for organizing interactions, allowing users to logically group related activities, such as conversations or tasks split into multiple steps.

    • When opening an interaction on the data screen, you can see all interactions associated with the same session ID.

    • More info about the SDK adaptations is available below.

  • Data Screen Content Search

    • Interactions can now be searched for in the Data screen based on interaction content, not only IDs.

    • This is selectable using the search filters in the Data screen.

  • Feature Extraction Interaction Type

    • Feature Extraction is an interaction type dedicated to cases where information is extracted from a text into a predefined format (e.g. a JSON schema). This interaction type also introduces three new properties that excel at evaluating an LLM's performance on an extraction task (see the sketch below).
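A hedged sketch of logging such an interaction; the type string and the choice to put the extracted JSON in the output field are assumptions:

import json

from deepchecks_llm_client.data_types import LogInteraction

# The extracted, schema-shaped result is logged as the interaction's output
sample = LogInteraction(
    user_interaction_id="extract-1",
    input="Invoice #123 from Acme Corp, total $450, due 2024-10-01.",
    output=json.dumps({
        "invoice_number": "123",
        "vendor": "Acme Corp",
        "total": 450,
        "due_date": "2024-10-01",
    }),
    interaction_type="Feature Extraction",  # assumed to match the type's name
)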

Session SDK/API Enhancements

  • Session Support in LogInteraction

    • Introduced the optional session_id parameter in the LogInteraction class, enabling developers to assign custom session identifiers to group related interactions.

    • If session_id is omitted, the system generates a unique session ID automatically.

      from deepchecks_llm_client.data_types import LogInteraction
      from datetime import datetime
      
      single_sample = LogInteraction(
          user_interaction_id="id-1",
          input="my user input1",
          output="my model output1",
          started_at="2024-09-01T23:59:59",
          finished_at=datetime.now().astimezone(),
          annotation="Good",  # Either Good, Bad, Unknown, or None
          interaction_type="Generation",  # Optional. Defaults to the application's default type if not provided.
          session_id="session-1",  # Optional. Groups related interactions; auto-generated if not provided.
      )
  • Session Support in Stream Upload

    • Added support for session_id in stream upload via the log_interaction method, facilitating real-time tracking of interactions within sessions.

      dc_client.log_interaction(
          app_name="DemoApp",
          version_name="v1",
          env_type=EnvType.EVAL,
          user_interaction_id="id-1",
          input="My Input",
          session_id="session-1",
          is_completed=False,
      )
  • Session-Based Filtering in get_data

    • Enhanced the get_data method to include filtering by session_ids, providing greater flexibility in retrieving session-specific data.

      dc_client.get_data(
          app_name="MyAppName",
          version_name="MyVersionName",
          environment=EnvType.EVAL,
          session_ids=["session-1", "session-2"],
      )

0.22.0 Release Notes

by Shir Chorev

This version adds support for multi-step workflows by allowing different types of interactions within a single application. Properties and annotations now run at the Interaction Type level. This, alongside additional improvements such as an improved Grounded in Context property, UI simplifications, stability updates, and performance enhancements, is part of our 0.22.0 release.

Deepchecks LLM Evaluation 0.22.0 Release

  • 🚀 Enhanced Support for Complex Applications
    • 🧩 New Interaction Types Layer
    • 🔄 SDK Updates
  • ☝️ Improved Grounded in Context Property
  • 🟣 Simplified Versions and Auto-annotation Screen

What’s New and Improved?

  • Enhanced Support for Complex Applications - Interaction Types

    • Applications now natively support multi-phase workflows.
    • Interaction types allow specifying a distinct type for each phase in the application, allowing you to adapt the properties and evaluation to that logical phase. Supported predefined types include Q&A, Summarization, Generation, Classification, and Other.
    • For more details about configuring the Properties and annotation on the Interaction Type level, see Properties and Auto-Annotation YAML Configuration.
  • SDK/API Updates

    • The app_type parameter now determines the default interaction type for all interactions within an application. This provides a more intuitive setup and ensures consistent property evaluation.

      # Example usage
      dc_client.create_application(APP_NAME,
                                   app_type=ApplicationType.QA)
      
    • The new LogInteraction class introduces support for the optional interaction_type parameter, allowing you to specify the type of interaction directly when logging.
      Note: While LogInteractionType is still supported for backward compatibility, we recommend transitioning to LogInteraction, as LogInteractionType will be deprecated in future versions.

      from datetime import datetime

      from deepchecks_llm_client.data_types import LogInteraction
      
      single_sample = LogInteraction(
          user_interaction_id="id-1",
          input="my user input1",
          output="my model output1",
          started_at="2024-09-01T23:59:59",
          finished_at=datetime.now().astimezone(),
          annotation="Good",  # Either Good, Bad, Unknown, or None
          interaction_type="Generation"  # Optional. Defaults to the application's default type if not provided.
      )
      
    • Interaction types can now be specified in SDK methods designed for creating or retrieving interactions. Methods for logging interactions, such as log_interaction and log_batch_interactions, now allow assigning interaction types during creation. Similarly, data retrieval methods like get_data and data_iterator support an interaction_types array, enabling filtering and retrieval based on specific interaction types (see the sketch below). For more, see Deepchecks' SDK.
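A minimal filtering sketch, assuming interaction_types takes the types' display names (other parameters follow the get_data example in the 0.26.0 notes above):

# Retrieve only Q&A and Summarization interactions for this version
df = dc_client.get_data(
    app_name="APP_NAME",
    version_name="APP_VERSION",
    env_type=EnvType.EVAL,
    interaction_types=["Q&A", "Summarization"],
)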

0.21.0 Release Notes

by Shir Chorev

This version aligns capabilities and versions across Deepchecks Multi-tenant SaaS alongside SageMaker Partner AI Apps, towards the AWS re:Invent launch.

0.20.0 Release Notes

by Shir Chorev

This version includes a new History field, enhancements to LLM properties, and improved explainability highlighting, along with more features, demos, and stability and performance improvements that are part of our 0.20.0 release.

Deepchecks LLM Evaluation 0.20.0 Release

  • 💬 New “History” field
  • 🏦 LLM properties bank enhancements
  • 🟣 Multiple line highlighting for property explainability
  • 🍿 Use case demos: Classification and Guardrails
  • 📩 Data logging: partial interaction logging, steps download and upload

What’s New and Improved?

  • New special field: History

    • The History field supplies previous historical context, such as chat history. Relevant properties will use it as additional context when computing property values (see the sketch below).
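A minimal sketch of supplying chat history at logging time, shown with the LogInteraction class used in later examples; the exact field name and shape are assumptions:

from deepchecks_llm_client.data_types import LogInteraction

# "history" (assumed parameter name) carries prior turns as extra context
# for history-aware properties
sample = LogInteraction(
    user_interaction_id="chat-7",
    input="And what about Italy?",
    output="The capital of Italy is Rome.",
    history="User: What is the capital of France?\nAssistant: Paris.",
)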

  • LLM properties bank enhancements

    • Added new prompts and improved prompt performance. This includes unifying the "Completeness" prompt templates into one (non-Q&A use cases have the "Coverage" built-in property for uncovering issues such as an incomplete summary).

  • Multiple line highlighting for explainability

    • Properties such as "Grounded in Context" and "PII" can now display more than one area contributing to the highest/lowest scores, allowing efficient root-cause analysis (RCA).

  • New demos

    • New use-case demos: Classification and Guardrails.

  • Data logging

    • An interaction can now be logged gradually, in separate parts, which is useful for example in production flows: Stream Upload Documentation.
    • Interaction steps can now be downloaded and uploaded via CSV and the SDK.

0.19.0 Release Notes

by Shir Chorev

This version includes expanded explainability for properties, multi-category property support, multi-label classification support, and enhancements to the documentation, along with more features and stability and performance improvements that are part of our 0.19.0 release.

Deepchecks LLM Evaluation 0.19.0 Release

  • 🌈 Highlighting of properties for explainability
  • 🎡 Multi-label support for properties and classification use cases
  • 🗒️ Docs Additions: Data Hierarchy and SDK Guide
  • ➕ Updates to Auto-annotation flow and to Steps upload

What’s New and Improved?

  • Highlighting of properties for explainability

    • Explainability highlighting for more properties: PII, Information Density, Coverage
  • Multi-label support for properties and classification use cases

  • Docs Additions

    • New docs: Data Hierarchy and an SDK Guide.

  • Updates to Auto-annotation flow and to Steps upload

    • While a new recalculation is in progress, previously estimated annotations will be changed to the "Pending" state, and then overridden by the new estimates.
    • ⚠️ Going forward, information retrieval is supported only as a designated field, and not as an information retrieval "step".