# Creating a custom endpoint
## Open Models & Providers
Navigate to Settings → Agent → Models & Providers and scroll to the Custom Endpoints section.
## Supported API specifications
| Spec | Description | Example services |
|---|---|---|
| `anthropic` | Anthropic Messages API | Self-hosted Claude, any Anthropic-compatible proxy |
| `openai-chat-completions` | OpenAI Chat Completions | vLLM, Ollama, LiteLLM, Together AI |
| `openai-responses` | OpenAI Responses API | Direct OpenAI-compatible responses endpoints |
| `google` | Google Generative AI | Self-hosted Gemini-compatible services |
| `azure` | Azure OpenAI | Azure OpenAI Service |
| `amazon-bedrock` | AWS Bedrock | Amazon Bedrock (requires region + secret key) |
| `google-vertex` | Google Vertex AI | Vertex AI (requires project ID + location + credentials) |
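The main practical difference between these specs is the shape of the request body each one expects. As a rough illustration, here is a sketch contrasting the two most common specs; field names follow the public OpenAI and Anthropic APIs, while the model name and prompt are placeholders:

```python
import json


def chat_completions_payload(model: str, prompt: str) -> dict:
    # OpenAI Chat Completions style: the system prompt travels
    # inside the `messages` array alongside user turns.
    return {
        "model": model,
        "messages": [
            {"role": "system", "content": "You are a helpful assistant."},
            {"role": "user", "content": prompt},
        ],
    }


def anthropic_messages_payload(model: str, prompt: str) -> dict:
    # Anthropic Messages style: `system` is a top-level field and
    # `max_tokens` is a required parameter.
    return {
        "model": model,
        "system": "You are a helpful assistant.",
        "max_tokens": 1024,
        "messages": [{"role": "user", "content": prompt}],
    }


print(json.dumps(chat_completions_payload("my-model", "Hi"), indent=2))
print(json.dumps(anthropic_messages_payload("my-model", "Hi"), indent=2))
```

A proxy such as LiteLLM translates between these shapes, which is why a single spec setting can front many different backends.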
## Cloud provider configuration

### Azure OpenAI

Azure endpoints require additional fields:

- Resource name — Your Azure OpenAI resource name
- API version — The API version string (e.g. 2024-02-01)
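To see why both fields are needed: Azure OpenAI builds its request URL from the resource name and passes the API version as a query parameter, routing by deployment name rather than a `model` field in the body. A minimal sketch (the resource and deployment names here are hypothetical):

```python
def azure_chat_url(resource: str, deployment: str, api_version: str) -> str:
    # Azure OpenAI URL pattern: the resource name becomes the hostname,
    # the deployment name selects the model, and the API version is a
    # query parameter rather than part of the path.
    return (
        f"https://{resource}.openai.azure.com/openai/deployments/"
        f"{deployment}/chat/completions?api-version={api_version}"
    )


print(azure_chat_url("my-resource", "my-deployment", "2024-02-01"))
```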
### Amazon Bedrock

Bedrock endpoints require:

- Region — AWS region (e.g. us-east-1)
- API key — AWS Access Key ID
- Secret key — AWS Secret Access Key
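The region is not just metadata: Bedrock's runtime API is hosted per-region, and the access key pair is used to SigV4-sign each request. A small sketch of how the region selects the host (signing itself is omitted):

```python
def bedrock_runtime_host(region: str) -> str:
    # Bedrock exposes a separate runtime endpoint in each AWS region;
    # requests to it must be SigV4-signed with the access key ID and
    # secret access key configured above.
    return f"bedrock-runtime.{region}.amazonaws.com"


print(bedrock_runtime_host("us-east-1"))  # bedrock-runtime.us-east-1.amazonaws.com
```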
### Google Vertex AI

Vertex endpoints require:

- Project ID — Your GCP project ID
- Location — GCP region (e.g. us-central1)
- Google credentials — Service account JSON credentials
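Vertex needs all three values because models are addressed under a project and location in the request path, while the service-account credentials supply the OAuth token for authentication. A sketch of the URL pattern for a Google-published model (the project ID and model name are placeholders; auth is not shown):

```python
def vertex_model_url(project_id: str, location: str, model: str) -> str:
    # Vertex AI URL pattern: the location appears both in the hostname
    # and in the resource path, and the model is nested under the
    # project. The bearer token from the service-account credentials
    # goes in the Authorization header (omitted here).
    return (
        f"https://{location}-aiplatform.googleapis.com/v1/"
        f"projects/{project_id}/locations/{location}/"
        f"publishers/google/models/{model}:generateContent"
    )


print(vertex_model_url("my-project", "us-central1", "gemini-1.5-pro"))
```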
## Model ID mapping
Some custom endpoints use different model identifiers than the official APIs. Use the Model ID Mapping field to remap built-in model IDs to the IDs your endpoint expects. For example, if your Ollama instance serves Claude as claude-v2, map the built-in Claude model ID to claude-v2.
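Conceptually, the mapping is a simple lookup table consulted before each request, falling back to the built-in ID when no remapping is configured. A minimal sketch (the concrete IDs below are illustrative, not the product's actual identifiers):

```python
# Hypothetical mapping from built-in model IDs to the IDs a custom
# endpoint expects; both keys and values here are examples only.
MODEL_ID_MAP = {
    "claude-3-5-sonnet": "claude-v2",
}


def resolve_model_id(builtin_id: str) -> str:
    # Unmapped IDs pass through unchanged, so endpoints that already
    # use the official identifiers need no entries at all.
    return MODEL_ID_MAP.get(builtin_id, builtin_id)


print(resolve_model_id("claude-3-5-sonnet"))  # claude-v2
print(resolve_model_id("gpt-4o"))             # gpt-4o
```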