Creating a custom endpoint
Open Models & Providers
Navigate to Settings → Agent → Models & Providers and scroll to the Models section.
Open Custom Providers Settings
Click Custom Providers in the top-right corner of the Models section.
Set provider type
Under Provider Type, select the API spec that your provider uses.
Unless the provider specifically documents a particular API spec, you most likely want OpenAI (Chat Completions), as it is the de facto standard for now.
Set custom model mapping
This may be required for some providers. See Model ID Mapping.
Supported API specifications
| Spec | Description | Example services |
|---|---|---|
| anthropic | Anthropic Messages API | Self-hosted Claude, any Anthropic-compatible proxy |
| openai-chat-completions | OpenAI Chat Completions | vLLM, Ollama, LiteLLM, Together AI |
| openai-responses | OpenAI Responses API | Direct OpenAI-compatible responses endpoints |
| google | Google Generative AI | Self-hosted Gemini-compatible services |
| azure | Azure OpenAI | Azure OpenAI Service |
| amazon-bedrock | AWS Bedrock | Amazon Bedrock (requires region + secret key) |
| google-vertex | Google Vertex AI | Vertex AI (requires project ID + location + credentials) |
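Most of the services listed under openai-chat-completions interoperate because they accept the same request body. As a minimal sketch (the endpoint URL and model name below are placeholders, not values from any specific provider), a Chat Completions request looks like this:

```python
import json

# Placeholder endpoint; vLLM, Ollama, and LiteLLM each expose their own
# OpenAI-compatible URL, typically ending in /v1/chat/completions.
ENDPOINT = "http://localhost:8000/v1/chat/completions"  # assumption

payload = {
    "model": "my-local-model",  # the model ID your endpoint expects
    "messages": [
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "Hello!"},
    ],
    "stream": False,
}

# Serialize exactly as it would be sent in the POST body.
body = json.dumps(payload)
```

Any server that understands this shape can be configured with the openai-chat-completions spec.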
Cloud provider configuration
Azure OpenAI
Azure endpoints require additional fields:
- Resource name — Your Azure OpenAI resource name
- API version — The API version string (e.g. 2024-02-01)
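To see how these fields are used, here is a sketch of the request URL Azure OpenAI expects. The resource and deployment names are placeholders; the deployment name is whatever you called the model deployment in the Azure portal and is separate from the fields above.

```python
resource_name = "my-resource"      # the Resource name field (placeholder)
api_version = "2024-02-01"         # the API version field
deployment = "my-gpt4-deployment"  # hypothetical deployment name

# Azure routes requests by resource, deployment, and API version:
url = (
    f"https://{resource_name}.openai.azure.com"
    f"/openai/deployments/{deployment}/chat/completions"
    f"?api-version={api_version}"
)
```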
Amazon Bedrock
Bedrock endpoints require:
- Region — AWS region (e.g. us-east-1)
- API key — AWS Access Key ID
- Secret key — AWS Secret Access Key
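The three fields map onto a standard AWS client configuration: the region selects the service endpoint, and the key pair is used for SigV4 request signing. A sketch with placeholder values:

```python
region = "us-east-1"
access_key_id = "AKIA..."   # the API key field (placeholder)
secret_access_key = "..."   # the Secret key field (placeholder)

# Bedrock's runtime endpoint is region-scoped; the keys never appear
# in the URL, only in the request signature.
endpoint = f"https://bedrock-runtime.{region}.amazonaws.com"
```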
Google Vertex AI
Vertex endpoints require:
- Project ID — Your GCP project ID
- Location — GCP region (e.g. us-central1)
- Google credentials — Service account JSON credentials
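As a sketch of how the project ID and location are used, Vertex AI scopes its model URLs by both. The project, location, and model name below are placeholders; the service account JSON is exchanged for an OAuth token and never appears in the URL.

```python
project_id = "my-project"   # the Project ID field (placeholder)
location = "us-central1"    # the Location field
model = "gemini-1.5-pro"    # hypothetical model name

# The location appears twice: once in the hostname, once in the path.
url = (
    f"https://{location}-aiplatform.googleapis.com/v1"
    f"/projects/{project_id}/locations/{location}"
    f"/publishers/google/models/{model}:generateContent"
)
```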
Model ID mapping
Some custom endpoints use different model identifiers than the official APIs. Use the Model ID Mapping field to remap built-in model IDs to the IDs your endpoint expects. For example, if your Ollama instance names Claude as claude-v2:
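The remapping behaves like a lookup with a pass-through default. This is a sketch of the logic, not the exact settings format; the built-in ID claude-3-5-sonnet is a placeholder for whichever model you are remapping:

```python
# Hypothetical mapping: built-in model ID on the left, the ID your
# Ollama instance actually serves on the right.
model_id_mapping = {
    "claude-3-5-sonnet": "claude-v2",
}

def resolve(model_id: str) -> str:
    """Return the endpoint-side ID; unmapped IDs pass through unchanged."""
    return model_id_mapping.get(model_id, model_id)
```

Only the models you list are remapped; requests for any other model ID are sent to the endpoint as-is.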