
Model Configuration for Self-Hosted Deployments

In self-hosted deployments, Deepchecks does not provide or host LLMs. All model inference uses models you own and operate - either through managed cloud providers (Azure OpenAI, AWS Bedrock, OpenAI) or self-hosted endpoints.

Why Model Configuration Is Required

Unlike Deepchecks SaaS, self-hosted deployments don't have access to Deepchecks-managed LLMs. You must explicitly configure connections to your models to run evaluations, scoring, and analysis.

Once configured, models become available across the platform:

  • Basic and Advanced model selection in Preferences
  • Application configuration
  • Evaluation workflows

At initial deployment, no models are configured - all model dropdowns will be empty until you add at least one model.

Permissions

Model management is done at the organization level and is therefore restricted to Owners.

Owners can add, edit, test, and delete models. Other roles can use configured models but cannot manage them.

Accessing Model Management

  1. Navigate to Preferences
  2. Click Manage Models

This opens the Manage Models modal where you can view, add, edit, and delete models.

Supported Providers

Provider       Description
OpenAI         OpenAI API
Azure          Azure OpenAI Service
Bedrock        AWS Bedrock
Self-Hosted    Custom endpoints via LiteLLM

Adding a Model

Click Add Model to expand the form.

OpenAI

Field            Required   Description
Model            Yes        Model identifier (e.g., gpt-4, gpt-4o)
API Key          Yes        Your OpenAI API key
Max Tokens       Yes        Maximum tokens per request
Display Name     Yes        Name shown in Deepchecks (max 40 characters, must be unique)
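
To sanity-check a key and model identifier before saving, you can reproduce the connectivity test by hand. A minimal sketch in Python using the requests library; the key and model name are placeholders:

    import requests

    API_KEY = "sk-..."  # your OpenAI API key

    resp = requests.post(
        "https://api.openai.com/v1/chat/completions",
        headers={"Authorization": f"Bearer {API_KEY}"},
        json={
            "model": "gpt-4o",  # the Model field value
            "messages": [{"role": "user", "content": "ping"}],
            "max_tokens": 5,
        },
        timeout=30,
    )
    resp.raise_for_status()  # 401 suggests a bad key; 404 an unknown model
    print(resp.json()["choices"][0]["message"]["content"])

A successful completion is roughly what the built-in connection test verifies on Save.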

Azure OpenAI

Field            Required   Description
Model            Yes        Model identifier
API Key          Yes        Your Azure OpenAI API key
API Base         Yes        Azure endpoint URL
API Version      Yes        API version (e.g., 2023-05-15)
Deployment ID    Yes        Your Azure deployment ID
Max Tokens       Yes        Maximum tokens per request
Display Name     Yes        Name shown in Deepchecks (max 40 characters)
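
The API Base, API Version, and Deployment ID fields combine into a single request URL. A minimal sketch showing how, with all values as placeholders for your own Azure resource:

    import requests

    api_base = "https://my-resource.openai.azure.com"  # API Base
    deployment_id = "my-gpt4-deployment"               # Deployment ID
    api_version = "2023-05-15"                         # API Version
    api_key = "..."                                    # API Key

    resp = requests.post(
        f"{api_base}/openai/deployments/{deployment_id}/chat/completions",
        params={"api-version": api_version},
        headers={"api-key": api_key},  # Azure expects an api-key header, not a Bearer token
        json={"messages": [{"role": "user", "content": "ping"}], "max_tokens": 5},
        timeout=30,
    )
    resp.raise_for_status()
    print(resp.json()["choices"][0]["message"]["content"])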

AWS Bedrock

Field            Required   Description
Model            Yes        Model ARN or identifier
API Key          No         Bearer token (optional; otherwise uses AWS credentials in Hadron)
Max Tokens       Yes        Maximum tokens per request
Display Name     Yes        Name shown in Deepchecks (max 40 characters)
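
When API Key is left empty, calls rely on the AWS credentials available to the deployment, so it is worth confirming those credentials can actually invoke the model. A minimal sketch using boto3's Converse API; the model ID and region are placeholders:

    import boto3

    client = boto3.client("bedrock-runtime", region_name="us-east-1")

    resp = client.converse(
        modelId="anthropic.claude-3-haiku-20240307-v1:0",  # example; use your Model value
        messages=[{"role": "user", "content": [{"text": "ping"}]}],
        inferenceConfig={"maxTokens": 5},
    )
    print(resp["output"]["message"]["content"][0]["text"])

An AccessDeniedException here usually means the credentials lack bedrock:InvokeModel permission for that model.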

Self-Hosted (LiteLLM-Compatible Endpoints Only)

Deepchecks supports self-hosted model endpoints compatible with LiteLLM.

Field            Required   Description
Model            Yes        Model identifier (e.g., ollama/mistral, llama-3)
API Base         Yes        Endpoint URL (e.g., http://localhost:8000/v1)
API Key          No         Only if your endpoint requires authentication
Max Tokens       Yes        Maximum tokens per request
Display Name     Yes        Name shown in Deepchecks (max 40 characters)
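
Since Deepchecks routes these calls through LiteLLM, the most direct way to validate an endpoint is to call it with LiteLLM yourself. A minimal sketch; the model name, endpoint URL, and the openai/ prefix (LiteLLM's convention for generic OpenAI-compatible servers) are assumptions to adapt to your setup:

    import litellm

    resp = litellm.completion(
        model="openai/llama-3",               # Model (openai/ prefix for OpenAI-compatible endpoints)
        api_base="http://localhost:8000/v1",  # API Base
        api_key="optional-token",             # API Key, only if the endpoint requires it
        messages=[{"role": "user", "content": "ping"}],
        max_tokens=5,
    )
    print(resp.choices[0].message.content)

If this call succeeds, the same Model and API Base values should work in the form above.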

Saving a Model

When you click Save, Deepchecks:

  1. Validates all required fields
  2. Tests connectivity to the model

If successful, the model is saved and appears immediately in all model dropdowns. If the connection fails, the model is not saved and the form remains open for corrections.

Editing a Model

Hover over any model in the list and click the Edit button. You can modify all fields except the provider type.

Testing Connection

Click Test Connection on any existing model to verify connectivity. The test sends a simple prompt and validates the response.

Deleting a Model

  1. Hover over the model and click Delete
  2. Confirm in the dialog

If the model is currently in use, the confirmation will warn you. When deleted:

  • The model is removed from the database
  • The model is removed from all application dropdowns
  • If it was in use, the system selects another available model