
Azure OpenAI

This guide outlines how to integrate Deepchecks LLM Evaluation with your Azure OpenAI models to monitor and analyze their performance.

Prerequisites

Before you begin, ensure you have the following:

  • A Deepchecks LLM Evaluation account.
  • An Azure OpenAI resource with an API key and endpoint.
  • A Python environment with the deepchecks-llm-client and openai packages installed (pip install deepchecks-llm-client openai).

Integration Steps

  1. Initialize Deepchecks Client:
from deepchecks_llm_client.client import DeepchecksLLMClient

# Create a Deepchecks client authenticated with your API token
dc_client = DeepchecksLLMClient(
    api_token="YOUR_API_KEY"
)

Replace the placeholder with your actual Deepchecks API token. The application name and version name are supplied later, in each log_interaction call.
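
In practice, avoid hard-coding the token. Here is a minimal sketch that reads it from an environment variable instead (the DEEPCHECKS_API_TOKEN variable name is illustrative, not a client default):

import os
from deepchecks_llm_client.client import DeepchecksLLMClient

# Read the Deepchecks API token from the environment rather than hard-coding it.
# DEEPCHECKS_API_TOKEN is an illustrative variable name, not a library default.
dc_client = DeepchecksLLMClient(api_token=os.environ["DEEPCHECKS_API_TOKEN"])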

  2. Log Interactions with Azure OpenAI:
from deepchecks_llm_client.data_types import AnnotationType, EnvType
from openai import AzureOpenAI
import os

# Configure Azure OpenAI client
client = AzureOpenAI(
    api_key=os.getenv("AZURE_OPENAI_API_KEY"),  
    api_version="2023-12-01-preview",
    azure_endpoint=os.getenv("AZURE_OPENAI_ENDPOINT")
)

def log_azure_openai_interaction(user_input):
    # Make prediction using Azure OpenAI
    response = client.completions.create(
        model="YOUR_ENGINE_NAME",
        prompt=user_input,
        # ... other parameters
    )
    prediction = response.choices[0].text

    # Log interaction to Deepchecks
    dc_client.log_interaction(
        app_name="YOUR APP NAME",
        version_name="YOUR VERSION NAME",
        env_type=EnvType.EVAL,
        input=user_input,
        output=prediction,
        annotation=AnnotationType.UNKNOWN  # Add annotation if available
    )

# Example usage
user_input = "Write a story about a robot who wants to become human."
log_azure_openai_interaction(user_input)

This code snippet demonstrates how to:

  • Use the openai library to call your Azure OpenAI deployment.
  • Read the model's completion from the response as the prediction.
  • Log the interaction data (input and output) to Deepchecks using the log_interaction method.
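
Many Azure OpenAI deployments host chat models rather than legacy completion models. The sketch below shows a chat-based variant of the same flow; it assumes a chat-capable deployment and reuses the client and dc_client objects defined above:

def log_azure_openai_chat_interaction(user_input):
    # Call a chat deployment (assumes the deployment hosts a chat model)
    response = client.chat.completions.create(
        model="YOUR_DEPLOYMENT_NAME",
        messages=[{"role": "user", "content": user_input}],
    )
    prediction = response.choices[0].message.content

    # Log the chat interaction to Deepchecks exactly as before
    dc_client.log_interaction(
        app_name="YOUR APP NAME",
        version_name="YOUR VERSION NAME",
        env_type=EnvType.EVAL,
        input=user_input,
        output=prediction,
        annotation=AnnotationType.UNKNOWN
    )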
  3. View Insights in Deepchecks Dashboard:
    Once you've logged interactions, head over to the Deepchecks LLM Evaluation dashboard to analyze your model's performance. You can explore various insights, compare versions, and monitor production data.
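
To monitor production traffic rather than evaluation runs, the same logging call can target the production environment. A minimal sketch, assuming EnvType exposes a PROD member alongside EVAL (verify against your installed deepchecks-llm-client version):

# Log a live-traffic interaction to the production environment.
# EnvType.PROD is assumed to exist alongside EnvType.EVAL.
dc_client.log_interaction(
    app_name="YOUR APP NAME",
    version_name="YOUR VERSION NAME",
    env_type=EnvType.PROD,
    input="Write a story about a robot who wants to become human.",  # example user input
    output="Unit R-7 had catalogued every human gesture it had ever seen...",  # illustrative model output
    annotation=AnnotationType.UNKNOWN
)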