Using custom models
You can add custom models that aren't available in the app by default.
How to add custom models
Custom models are `.yaml` files in `~/.intellibar/models`.
Here's an example of a model file:
```yaml
# ~/.intellibar/models/deepseek-r1-distill-qwen-32b.yaml
url: https://openrouter.ai/api/v1
id: deepseek/deepseek-r1-distill-qwen-32b
name: DeepSeek R1 32B
apiKey: sk-or-v1-xxxx
```
Model files support the following fields:
- `url` — The URL of the endpoint. Can be either a remote URL or localhost.
- `id` — The model id/name that will be passed to the endpoint.
- `apiKey` — The API key that will be passed to the endpoint. Optional. Defaults to `n/a`.
- `apiType` — The type of API. Possible values: `openai`, `anthropic`, `google`. Optional. Defaults to `openai`.
- `name` — A friendly name that will be displayed inside the app. Optional. Defaults to the `id`.
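
For instance, a model that talks to Anthropic's API directly would set `apiType` explicitly rather than rely on the `openai` default. This is a sketch; the file name, model id, and key below are illustrative placeholders, not values from the app:

```yaml
# ~/.intellibar/models/claude-sonnet.yaml (illustrative values)
url: https://api.anthropic.com/v1   # Anthropic's API base URL
id: claude-3-5-sonnet-latest        # model id passed to the endpoint
name: Claude 3.5 Sonnet             # friendly name shown in the app
apiKey: sk-ant-xxxx                 # placeholder; use your own key
apiType: anthropic                  # overrides the openai default
```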
Common use cases
- Models on OpenRouter
- Models on Azure / AWS / HuggingFace
- Local models with llama.cpp (sketched below)
- Local models with Ollama running on a custom port (sketched below)
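
As a sketch of the last two cases: llama.cpp's `llama-server` and Ollama both expose OpenAI-compatible endpoints under `/v1`, so `apiType` can stay at its default. The ports and model ids below are assumptions; match them to your own local setup.

```yaml
# ~/.intellibar/models/qwen-local-llamacpp.yaml (illustrative values)
url: http://localhost:8080/v1   # llama-server's default port; adjust to yours
id: qwen2.5-32b-instruct        # whatever model the server is running
name: Qwen 2.5 32B (local)
# apiKey omitted: defaults to n/a, which local servers typically ignore
```

```yaml
# ~/.intellibar/models/llama-local-ollama.yaml (illustrative values)
url: http://localhost:11500/v1  # assumes Ollama moved off its default port 11434
id: llama3.1                    # must match a model pulled into Ollama
name: Llama 3.1 (Ollama)
```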