stagewise supports multiple LLM providers out of the box. You can route requests through the stagewise proxy (default), connect directly to official APIs, or use fully custom endpoints. Open the configuration page at Settings → Agent → Models & Providers.

Built-in model providers and models

| Provider | Models |
| --- | --- |
| Anthropic | Claude Opus 4.7, Opus 4.6, Sonnet 4.6, Haiku 4.5 |
| OpenAI | GPT-5.4, GPT-5.3 Codex, GPT-5.3 Instant |
| Google | Gemini 3.1 Pro, Gemini 3 Flash, Gemini 3.1 Flash Lite |
| Moonshot AI | Kimi K2.5 |
| Alibaba | Qwen 3-32B, Qwen 3-Coder 30B-A3B |

Endpoint modes

Each of the built-in providers can be configured with one of three endpoint modes:

stagewise (default)

Requests are routed through the stagewise API. This is the simplest setup — no API key required. Usage is billed through your stagewise account.

Official API

Connect directly to the provider’s official API endpoint. You need to supply your own API key. This is the standard BYOK (Bring Your Own Key) setup. See the BYOK setup guide for step-by-step instructions.

Custom endpoint

Route requests to a custom endpoint that implements one of the supported API specifications. This can be used not only to register a custom endpoint for a known model (e.g. hosted via Azure, AWS Bedrock, or Google Vertex AI), but also to use self-hosted models via Ollama or LM Studio. See the Custom providers guide for details.
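As an illustration, a custom endpoint entry pointing at a locally running Ollama server could look roughly like the sketch below. The field names here are hypothetical and stand in for whatever the configuration page asks for; the base URL is real, though — Ollama exposes an OpenAI-compatible API at `http://localhost:11434/v1` by default.

```json
{
  "provider": "custom",
  "apiSpec": "openai-compatible",
  "baseUrl": "http://localhost:11434/v1",
  "model": "llama3.1:8b",
  "apiKey": "ollama"
}
```

Ollama does not validate the API key, so any placeholder value works; LM Studio's local server follows the same OpenAI-compatible pattern on its own port (`http://localhost:1234/v1` by default).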

Switching models

You can switch the active model at any time from the model selector in the chat sidebar. The model choice applies per agent — each agent instance can use a different model.