
Oracle Cloud (OCI)

This guide outlines how to integrate Deepchecks LLM Evaluation with your Oracle Cloud Infrastructure (OCI) Generative AI models to monitor and analyze their performance.

Prerequisites

Before you begin, ensure you have the following:

  • A Deepchecks LLM Evaluation account.
  • An Oracle Cloud Infrastructure account with Generative AI service enabled.
  • Python environment with the deepchecks-llm-client, oci, and langchain_community packages installed (pip install deepchecks-llm-client oci langchain_community).
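The `oci` SDK, which `OCIGenAI` uses for authentication, typically reads credentials from a config file at `~/.oci/config`. A representative config file is sketched below; every value shown is a placeholder you must replace with your own tenancy details:

```ini
[DEFAULT]
user=ocid1.user.oc1..<unique_ID>
fingerprint=<your_key_fingerprint>
key_file=~/.oci/oci_api_key.pem
tenancy=ocid1.tenancy.oc1..<unique_ID>
region=us-ashburn-1
```

See the OCI documentation for generating the API signing key and fingerprint referenced above.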

Integration Steps

  1. Initialize Deepchecks Client:
from deepchecks_llm_client.client import DeepchecksLLMClient  

dc_client = DeepchecksLLMClient(
  api_token="YOUR_API_KEY"
)

Replace the placeholder with your actual Deepchecks API key. The application name and version name are supplied later, when logging interactions.
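Rather than hardcoding the key in source, you may prefer to read it from an environment variable. A minimal sketch using only the standard library; the variable name `DEEPCHECKS_API_KEY` is an assumption for illustration, not an SDK convention:

```python
import os

def get_deepchecks_api_key(env_var: str = "DEEPCHECKS_API_KEY") -> str:
    """Fetch the Deepchecks API key from the environment.

    Raises a clear error rather than passing an empty token to the client.
    """
    api_key = os.environ.get(env_var, "").strip()
    if not api_key:
        raise RuntimeError(f"Missing API key: set the {env_var} environment variable")
    return api_key

# dc_client = DeepchecksLLMClient(api_token=get_deepchecks_api_key())
```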

  2. Log Interactions with OCI Models:

Here's an example of how to log interactions with an OCI Generative AI model using LangChain:

from deepchecks_llm_client.data_types import AnnotationType, EnvType
from langchain_community.llms import OCIGenAI

# Configure OCI GenAI LLM
llm = OCIGenAI(
    model_id="YOUR_MODEL_ID",
    service_endpoint="YOUR_SERVICE_ENDPOINT",
    compartment_id="YOUR_COMPARTMENT_ID",
    # ... other authentication and model parameters
)

def log_oci_interaction(user_input):
    # Make prediction using OCI model
    response = llm.invoke(user_input)

    # Log interaction to Deepchecks
    dc_client.log_interaction(
      app_name="YOUR_APP_NAME",
      version_name="YOUR_VERSION_NAME",
      env_type=EnvType.EVAL,
      input=user_input,
      output=response,
      annotation=AnnotationType.UNKNOWN  # Add annotation if available
    )

# Example usage
user_input = "Summarize the main points of the article about quantum computing."
log_oci_interaction(user_input)

This code snippet demonstrates how to:

  • Use the langchain_community.llms.OCIGenAI class to interact with your OCI model.
  • Make predictions using the model.
  • Log the interaction data (input, output) to Deepchecks using the log_interaction method.
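The snippet above logs AnnotationType.UNKNOWN for every interaction. If your application collects explicit user feedback (for example, thumbs-up/thumbs-down), you can translate it into an annotation before logging. A minimal sketch — the feedback strings and helper name are illustrative, and it assumes the SDK's AnnotationType enum exposes GOOD and BAD members (check your installed version):

```python
# Map raw user-feedback strings to AnnotationType member names.
# The feedback keys here are application-specific placeholders.
FEEDBACK_TO_ANNOTATION = {
    "thumbs_up": "GOOD",
    "thumbs_down": "BAD",
}

def feedback_to_annotation(feedback):
    """Return the AnnotationType member name for a raw feedback value.

    Anything unrecognized falls back to "UNKNOWN".
    """
    return FEEDBACK_TO_ANNOTATION.get(feedback, "UNKNOWN")
```

At log time you could then pass `annotation=getattr(AnnotationType, feedback_to_annotation(feedback))` in place of the hardcoded `AnnotationType.UNKNOWN`.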
  3. View Insights in the Deepchecks Dashboard:

Once you've logged interactions, head over to the Deepchecks LLM Evaluation dashboard to analyze your model's performance. You can explore various insights, compare versions, and monitor production data.