These are the most commonly used model providers, offering a wide range of capabilities:
| Provider | Description | Capabilities |
|---|---|---|
| Anthropic | Providers of Claude models, known for long context windows and strong reasoning | Chat, Edit, Apply, Embeddings |
| OpenAI | Creators of GPT models with strong coding capabilities | Chat, Edit, Apply, Embeddings |
| Azure | Microsoft's cloud platform offering OpenAI models | Chat, Edit, Apply, Embeddings |
| Amazon Bedrock | AWS service offering access to various foundation models | Chat, Edit, Apply, Embeddings |
| Ollama | Run open-source models locally with a simple interface | Chat, Edit, Apply, Embeddings, Autocomplete |
| Google Gemini | Google's multimodal AI models | Chat, Edit, Apply, Embeddings |
| DeepSeek | Specialized code models with strong performance | Chat, Edit, Apply |
| Mistral | High-performance open models with commercial offerings | Chat, Edit, Apply, Embeddings |
| xAI | Grok models from xAI | Chat, Edit, Apply |
| Vertex AI | Google Cloud's machine learning platform | Chat, Edit, Apply, Embeddings |
| Inception | On-premises open-source model runners | Chat, Edit, Apply |

Additional Model Providers

Beyond the top-level providers, Continue supports many other options:

Hosted Services

| Provider | Description |
|---|---|
| Groq | Ultra-fast inference for various open models |
| Together AI | Platform for running a variety of open models |
| DeepInfra | Hosting for various open source models |
| OpenRouter | Gateway to multiple model providers |
| Cohere | Models specialized for semantic search and text generation |
| NVIDIA | GPU-accelerated model hosting |
| Cloudflare | Edge-based AI inference services |
| HuggingFace | Platform for open source models |

Local Model Options

| Provider | Description |
|---|---|
| LM Studio | Desktop app for running models locally |
| llama.cpp | Optimized C++ implementation for running LLMs |
| LlamaStack | Stack for running Llama models locally |
| llamafile | Self-contained executable model files |

Enterprise Solutions

| Provider | Description |
|---|---|
| SambaNova | Enterprise AI platform |
| watsonx | IBM's enterprise AI platform |
| SageMaker | AWS machine learning platform |
| Nebius | Cloud-based machine learning platform |

How to Choose a Model Provider

When selecting a model provider, consider:
  1. Hosting preference: Do you need local models for offline use or privacy, or are you comfortable with cloud services?
  2. Performance requirements: Different providers offer varying levels of speed, quality, and context length.
  3. Specific capabilities: Some models excel at code generation, others at embeddings or reasoning tasks.
  4. Pricing: Costs vary significantly between providers, from free local options to premium cloud services.
  5. API key requirements: Most cloud providers require API keys that you’ll need to configure.
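As a concrete illustration of the hosting trade-off, a config.yaml could pair a cloud model (requiring an API key) with a local Ollama model for offline autocomplete. The model identifiers below are examples, not recommendations:

```yaml
models:
  # Cloud model: strong reasoning, requires an API key
  - name: Claude
    provider: anthropic
    model: claude-3-5-sonnet-latest
    apiKey: ${{ secrets.ANTHROPIC_API_KEY }}
    roles:
      - chat
      - edit
  # Local model: runs offline via Ollama, no API key needed
  - name: Local Autocomplete
    provider: ollama
    model: qwen2.5-coder:1.5b
    roles:
      - autocomplete
```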

Configuration Format

You can add models to your config.yaml file like this:
```yaml
models:
  - name: My Model
    provider: openai # Choose provider from the lists above
    model: gpt-4o # Specific model name
    apiKey: ${{ secrets.OPENAI_API_KEY }}
    roles:
      - chat
      - edit
      - apply
```
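An embeddings model is configured the same way by assigning it the embed role (the provider and model name here are illustrative):

```yaml
models:
  - name: Embedder
    provider: openai
    model: text-embedding-3-small
    apiKey: ${{ secrets.OPENAI_API_KEY }}
    roles:
      - embed
```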
For more detailed configuration, visit the specific provider pages linked above.