I’ve recently configured my two favorite tools for LLM-assisted development (aider and Continue) to use o3-mini and DeepSeek-R1, with both models deployed in Azure AI Foundry. Here’s what I did:
Deploying o3-mini and DeepSeek-R1 in Azure AI
First of all, model deployment – I’ve created a new instance of o3-mini in swedencentral using the Azure OpenAI Service, and a DeepSeek-R1 instance in francecentral using Azure AI Foundry (it was either that or eastus 🥶).
For simplicity’s sake (and to make sure aider supports them), I’ve named the model deployments o3-mini and DeepSeek-R1.
If you don’t have access to o3-mini in Azure, you can request it here.
Also, note that when deploying DeepSeek (and other models) using Azure AI Foundry, make sure to filter by Deployment options: Serverless API, so that you’re paying per token used rather than allocating a machine that’ll cost you thousands of euros per month. Just fyi.
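If you prefer the CLI over the portal, the o3-mini deployment can also be sketched with the Azure CLI. Everything below is a placeholder-heavy sketch: the resource and group names are made up, and you should double-check the current --model-version and SKU against the model catalog before running it.

```shell
# Sketch only: creating an o3-mini deployment on an existing Azure OpenAI
# resource (assumed to live in swedencentral). All names are placeholders.
az cognitiveservices account deployment create \
  --resource-group my-rg \
  --name my-openai-resource \
  --deployment-name o3-mini \
  --model-name o3-mini \
  --model-format OpenAI \
  --model-version "2025-01-31" \
  --sku-name GlobalStandard \
  --sku-capacity 1
```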
Configuring Aider
Aider lets you pair program with LLMs, to edit code in your local git repository. Start a new project or work with an existing code base. Aider works best with Claude 3.5 Sonnet, DeepSeek V3, o1 & GPT-4o and can connect to almost any LLM.*
- But not to o3-mini running in Azure, at least not without a little hack in the latest version (0.73).
Now, I’ve returned to aider after trying it briefly sometime last year, so some of my configuration can probably be improved. That being said, I’ve created the following files:
.aider.conf.yml
Created in my home directory; see here for options. The only thing it does is specify the default model, so I don’t have to type it again and again.
model: azure/o3-mini
.aider.model.settings.yml
For some reason this setting didn’t make it into v0.73, and it will most likely be (mostly) obsolete in upcoming versions. See the docs for details.
Also note the reasoning_effort key, which will come in handy when you want o3-mini to think more or less. People seem to like o3-mini-high.
Placed in the home directory as well.
- name: azure/o3-mini
  edit_format: diff
  weak_model_name: azure/gpt-4o-mini
  use_repo_map: true
  use_temperature: false
  editor_model_name: azure/gpt-4o
  editor_edit_format: editor-diff
  extra_params:
    extra_body:
      reasoning_effort: high
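To make the extra_params bit less magical: as I understand it, aider (through litellm) merges everything under extra_body into the chat-completions request body. Here's a minimal sketch of the resulting payload — the exact merging logic is an assumption on my part, not aider's actual code:

```python
import json

# Mirrors the .aider.model.settings.yml entry above
model_settings = {
    "use_temperature": False,
    "extra_params": {"extra_body": {"reasoning_effort": "high"}},
}

payload = {
    "model": "o3-mini",
    "messages": [{"role": "user", "content": "Refactor this function."}],
    # o3-mini rejects the temperature parameter, hence use_temperature: false
    **model_settings["extra_params"]["extra_body"],
}

print(json.dumps(payload, indent=2))
```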
aider.env
Doesn’t matter where you place it, but a man’s gotta have some principles, doesn’t he?
I just have these keys in there. Note that in some cases aider will pick up other keys, such as AZURE_OPENAI_API_KEY and AZURE_OPENAI_API_VERSION, which may or may not be what you want. This is why I strongly recommend being as explicit as possible about the .env file it should use.
AZURE_API_BASE=https://<OPENAI_RESOURCE>.openai.azure.com/
AZURE_API_VERSION=2024-12-01-preview
AZURE_API_KEY=<WELL_YOU_KNOW>
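For reference, here's roughly how those three variables end up composing the request URL. This is a sketch with placeholder values — the actual assembly happens inside litellm — but the Azure OpenAI endpoint shape itself is deployment-scoped and versioned via the query string:

```python
# Placeholder values standing in for the real keys in aider.env
env = {
    "AZURE_API_BASE": "https://myresource.openai.azure.com/",
    "AZURE_API_VERSION": "2024-12-01-preview",
}

base = env["AZURE_API_BASE"].rstrip("/")
deployment = "o3-mini"  # the deployment name chosen earlier

# Azure OpenAI URLs embed the deployment name in the path and the
# api-version in the query string
url = (
    f"{base}/openai/deployments/{deployment}"
    f"/chat/completions?api-version={env['AZURE_API_VERSION']}"
)
print(url)
```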
Running
I generally run aider as follows:
aider --architect --no-auto-commits --env-file ~/aider.env
Configuring Continue
The leading open-source AI code assistant. You can connect any models and any context to create custom autocomplete and chat experiences inside the IDE.
Continue is a different beast, I’ve been using it for quite some time now, despite its shortcomings (just take a look at its JetBrains extension reviews; or try to send a repository map as context 😉). It’s that useful, when it works.
o3-mini
Adding support for o3-mini is rather straightforward. Just make sure you’re running the latest pre-release version: 0.9.261 for Visual Studio Code, or 0.0.87 for JetBrains.
Then, all you need to do is add a new entry to the models array in Continue’s config.json.
{
  "models": [
    // .....
    {
      "title": "O3-mini",
      "model": "o3-mini",
      "deployment": "<MODEL_DEPLOYMENT>",
      "apiBase": "https://<OPENAI_RESOURCE>.openai.azure.com/",
      "apiKey": "<API_KEY>",
      "apiVersion": "2024-12-01-preview",
      "systemMessage": "<SYSTEM_MESSAGE>",
      "apiType": "azure",
      "provider": "azure",
      "contextLength": 128000,
      "completionOptions": {
        "stream": true
      }
    }
  ]
}
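One gotcha worth knowing: config.json allows // comments (it's JSONC, not strict JSON), so plain json.loads will choke on the file as-is. Here's a naive sketch for sanity-checking an entry — the stand-in config below deliberately contains no URLs, because this toy regex would also eat the "//" inside "https://...":

```python
import json
import re

# Trimmed-down stand-in for the config.json entry above (no URLs on purpose)
raw = """
{
  "models": [
    // .....
    {
      "title": "O3-mini",
      "model": "o3-mini",
      "provider": "azure",
      "apiType": "azure"
    }
  ]
}
"""

cleaned = re.sub(r"//[^\n]*", "", raw)  # strip line comments
entry = json.loads(cleaned)["models"][0]
print(entry["title"])
```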
DeepSeek-R1
DeepSeek is similar, but with some subtle, not-so-obvious changes. The most important ones: apiType is set to openai, and apiBase requires /models to be appended to the deployment endpoint.
{
  "models": [
    // .....
    {
      "title": "DeepSeek-R1",
      "apiBase": "https://<DEPLOYMENT>.services.ai.azure.com/models",
      "model": "<MODEL_DEPLOYMENT>",
      "apiKey": "<API_KEY>",
      "provider": "azure",
      "apiType": "openai",
      "systemMessage": "<SYSTEM_PROMPT>",
      "contextLength": 128000,
      "apiVersion": "2024-05-01-preview"
    }
  ]
}
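To see why the two entries differ, compare the URLs each apiType ends up targeting. The names below are placeholders, and the path-building is my reading of how Continue composes requests, not its actual source:

```python
# apiType "azure": Continue builds a deployment-scoped Azure OpenAI path
# from apiBase + deployment (the o3-mini entry)
o3_url = (
    "https://myresource.openai.azure.com"
    "/openai/deployments/o3-mini/chat/completions"
    "?api-version=2024-12-01-preview"
)

# apiType "openai": the OpenAI-style path is appended directly to apiBase,
# which is why /models has to already be part of it (the DeepSeek-R1 entry)
r1_url = (
    "https://mydeployment.services.ai.azure.com/models"
    "/chat/completions?api-version=2024-05-01-preview"
)

print(o3_url)
print(r1_url)
```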
It looks like this:

That’s it, now go try this out yourself!