Adding a custom model
Open Models & Providers
Navigate to Settings → Agent → Models & Providers and scroll to the Custom Models section.
Configure the model
Fill in:
- Model ID — The identifier your endpoint expects (e.g. llama-3.3-70b)
- Display name — A friendly name shown in the model selector
- Description — Optional description of the model’s strengths
- Context window size — Maximum token context (default: 128,000)
- Endpoint — Which endpoint to route this model to
- Extended thinking — Whether the model supports chain-of-thought reasoning
- Capabilities — Toggle supported input/output modalities
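To make the fields above concrete, here is a sketch of a filled-in custom model entry, expressed as a Python dict. The field names mirror the form labels, not any actual settings schema, and the values are illustrative.

```python
# Hypothetical custom model entry; field names mirror the form labels
# above, not a real settings schema.
custom_model = {
    "model_id": "llama-3.3-70b",      # identifier the endpoint expects
    "display_name": "Llama 3.3 70B",  # friendly name in the model selector
    "description": "General-purpose open-weights model",  # optional
    "context_window": 128_000,        # maximum token context (the default)
    "endpoint": "local-ollama",       # which endpoint to route this model to
    "extended_thinking": False,       # chain-of-thought reasoning support
}

print(custom_model["context_window"])
```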
Capabilities
Each custom model can declare what it supports:
Input modalities:
- Text (always on)
- Image
- Video
- Audio
- File (PDF)
Output modalities:
- Text (always on)
- Image
- Audio
- Video
- File
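One way to picture these toggles is as two sets of boolean flags, with text forced on in both directions. The structure below is an illustration, not the actual settings format; the validation check simply encodes the "Text (always on)" rule.

```python
# Hypothetical capability flags mirroring the modality toggles above.
capabilities = {
    "input":  {"text": True, "image": True, "video": False,
               "audio": False, "file": True},
    "output": {"text": True, "image": False, "audio": False,
               "video": False, "file": False},
}

def validate(caps):
    # Text is always on for both input and output, so it cannot be disabled.
    if not (caps["input"]["text"] and caps["output"]["text"]):
        raise ValueError("text modality is always on")
    return True

print(validate(capabilities))  # → True
```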
Custom model with a custom endpoint
A typical workflow for self-hosted models:
- Create a custom endpoint pointing to your local server (e.g. http://localhost:11434/v1 for Ollama)
- Create a custom model with the correct model ID for that server
- Select the custom model from the model picker in the chat sidebar
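Once routed, requests to the custom model go to the local server's OpenAI-compatible API. The sketch below builds (but does not send) such a request with the standard library, assuming the Ollama default port and the example model ID from above; your model ID must match a model actually available on the server.

```python
import json
import urllib.request

# Build the chat request a client would send to the local endpoint.
# The URL and model ID are the examples from this guide; substitute your own.
payload = {
    "model": "llama-3.3-70b",
    "messages": [{"role": "user", "content": "Hello"}],
}
req = urllib.request.Request(
    "http://localhost:11434/v1/chat/completions",
    data=json.dumps(payload).encode("utf-8"),
    headers={"Content-Type": "application/json"},
    method="POST",
)
# Constructed only; sending it requires the local server to be running.
print(req.full_url)
```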