In this tutorial, we'll look at how to set up FastAPI with Postgres, Uvicorn, and Docker. For production environments, we'll add on Gunicorn, Traefik, and Let's Encrypt.
Project Setup
Start by creating a project directory:
$ mkdir fastapi-docker-traefik && cd fastapi-docker-traefik
$ python3.11 -m venv venv
$ source venv/bin/activate
Feel free to swap out virtualenv and Pip for Poetry or Pipenv. For more, review Modern Python Environments.
Then, create the following files and folders:
├── app
│   ├── __init__.py
│   └── main.py
└── requirements.txt
The following command will create the project structure:
$ mkdir app && \
    touch app/__init__.py app/main.py requirements.txt
Add FastAPI and Uvicorn, an ASGI server, to requirements.txt:
fastapi==0.89.1
uvicorn==0.20.0
Install them:
(venv)$ pip install -r requirements.txt
Next, let's create a simple FastAPI application in app/main.py:
# app/main.py

from fastapi import FastAPI

app = FastAPI(title="FastAPI, Docker, and Traefik")


@app.get("/")
def read_root():
    return {"hello": "world"}
Run the application:
(venv)$ uvicorn app.main:app
Navigate to 127.0.0.1:8000. You should see:
{
    "hello": "world"
}
Kill the server once done. Then exit and remove the virtual environment.
Docker
Install Docker, if you don't already have it, then add a Dockerfile to the project root:
# Dockerfile
# pull the official docker image
FROM python:3.11.1-slim
# set work directory
WORKDIR /app
# set env variables
ENV PYTHONDONTWRITEBYTECODE 1
ENV PYTHONUNBUFFERED 1
# install dependencies
COPY requirements.txt .
RUN pip install -r requirements.txt
# copy project
COPY . .
So, we started with a slim Docker image for Python 3.11.1. We then set up a working directory along with two environment variables:

- PYTHONDONTWRITEBYTECODE: Prevents Python from writing pyc files to disc (equivalent to the python -B option)
- PYTHONUNBUFFERED: Prevents Python from buffering stdout and stderr (equivalent to the python -u option)
Finally, we copied over the requirements.txt file, installed the dependencies, and copied over the project.
Review Docker Best Practices for Python Developers for more on structuring Dockerfiles as well as some best practices for configuring Docker for Python-based development.
Next, add a docker-compose.yml file to the project root:
# docker-compose.yml

version: '3.8'

services:
  web:
    build: .
    command: uvicorn app.main:app --host 0.0.0.0
    volumes:
      - .:/app
    ports:
      - 8008:8000
Review the Compose file reference for info on how this file works.
Build the image:
$ docker-compose build
Once the image is built, run the container:
$ docker-compose up -d
Navigate to http://localhost:8008 to again view the hello world sanity check.
If this doesn't work, check for errors in the logs via docker-compose logs -f.
Postgres
To configure Postgres, we need to add a new service to the docker-compose.yml file, set up an ORM, and install asyncpg.
First, add a new service called db to docker-compose.yml:
# docker-compose.yml

version: '3.8'

services:
  web:
    build: .
    command: bash -c 'while !</dev/tcp/db/5432; do sleep 1; done; uvicorn app.main:app --host 0.0.0.0'
    volumes:
      - .:/app
    ports:
      - 8008:8000
    environment:
      - DATABASE_URL=postgresql://fastapi_traefik:fastapi_traefik@db:5432/fastapi_traefik
    depends_on:
      - db

  db:
    image: postgres:15-alpine
    volumes:
      - postgres_data:/var/lib/postgresql/data/
    expose:
      - 5432
    environment:
      - POSTGRES_USER=fastapi_traefik
      - POSTGRES_PASSWORD=fastapi_traefik
      - POSTGRES_DB=fastapi_traefik

volumes:
  postgres_data:
To persist the data beyond the life of the container, we configured a volume. This config will bind postgres_data to the "/var/lib/postgresql/data/" directory in the container.
We also added an environment key to define a name for the default database and set a username and password.
Review the "Environment Variables" section of the Postgres Docker Hub page for more info.
Take note of the new command in the web service:

bash -c 'while !</dev/tcp/db/5432; do sleep 1; done; uvicorn app.main:app --host 0.0.0.0'

while !</dev/tcp/db/5432; do sleep 1; done will loop until Postgres is up. Once Postgres is accepting connections, uvicorn app.main:app --host 0.0.0.0 runs.
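If you'd rather not depend on bash's /dev/tcp trick, the same wait loop can be sketched in pure Python with the standard library. This is a minimal sketch, not part of the tutorial's code; the host and port arguments mirror the db service above:

```python
import socket
import time


def wait_for_port(host: str, port: int, timeout: float = 30.0) -> bool:
    """Poll until a TCP connection to host:port succeeds, mirroring the bash loop."""
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        try:
            # Postgres is up once the socket connects
            with socket.create_connection((host, port), timeout=1):
                return True
        except OSError:
            # not accepting connections yet; retry shortly
            time.sleep(1)
    return False
```

You could call wait_for_port("db", 5432) at startup before connecting, instead of wrapping the uvicorn command in a shell loop.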
Next, add a new file called config.py to the "app" directory, where we'll define environment-specific configuration variables:
# app/config.py

from pydantic import BaseSettings, Field


class Settings(BaseSettings):
    db_url: str = Field(..., env='DATABASE_URL')


settings = Settings()
Here, we defined a Settings class with a db_url attribute. BaseSettings, from pydantic, validates the data so that when we create an instance of Settings, db_url will be automatically loaded from the environment variable.
We could have used os.getenv(), but as the number of environment variables increases, this becomes very repetitive. By using BaseSettings, you can specify the environment variable name and it will automatically be loaded. You can learn more about pydantic settings management here.
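For comparison, the manual os.getenv() approach that the note describes might look like this. It's a sketch for illustration only (the setdefault call just simulates the variable Docker Compose would set); the Settings class above replaces it:

```python
import os

# Simulate the variable that Docker Compose sets in the container (illustration only)
os.environ.setdefault(
    "DATABASE_URL",
    "postgresql://fastapi_traefik:fastapi_traefik@db:5432/fastapi_traefik",
)

# One manual lookup per variable -- fine for one setting, repetitive for many
db_url = os.getenv("DATABASE_URL")
if db_url is None:
    raise RuntimeError("DATABASE_URL is not set")
```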
We'll use ormar for communicating with the database.
Add ormar, an async mini ORM for Python, to requirements.txt along with asyncpg and psycopg2:
asyncpg==0.27.0
fastapi==0.89.1
ormar==0.12.1
psycopg2-binary==2.9.5
uvicorn==0.20.0
Feel free to swap ormar for the ORM of your choice. Looking for some async options? Check out the Awesome FastAPI repo and this Twitter thread.
Next, create an app/db.py file to set up a model:
# app/db.py

import databases
import ormar
import sqlalchemy

from .config import settings

database = databases.Database(settings.db_url)
metadata = sqlalchemy.MetaData()


class BaseMeta(ormar.ModelMeta):
    metadata = metadata
    database = database


class User(ormar.Model):
    class Meta(BaseMeta):
        tablename = "users"

    id: int = ormar.Integer(primary_key=True)
    email: str = ormar.String(max_length=128, unique=True, nullable=False)
    active: bool = ormar.Boolean(default=True, nullable=False)


engine = sqlalchemy.create_engine(settings.db_url)
metadata.create_all(engine)
This will create a pydantic model and a SQLAlchemy table.

ormar uses SQLAlchemy for creating databases/tables and constructing database queries, databases for executing the queries asynchronously, and pydantic for data validation. Note that each ormar.Model is also a pydantic.BaseModel, so all pydantic methods are also available on a model. Since the tables are created using SQLAlchemy (under the hood), database migration is possible via Alembic.
Check out Alembic usage, from the official ormar documentation, for more on using Alembic with ormar.
Next, update app/main.py to connect to the database and add a dummy user:
# app/main.py

from fastapi import FastAPI

from app.db import database, User

app = FastAPI(title="FastAPI, Docker, and Traefik")


@app.get("/")
async def read_root():
    return await User.objects.all()


@app.on_event("startup")
async def startup():
    if not database.is_connected:
        await database.connect()
    # create a dummy entry
    await User.objects.get_or_create(email="[email protected]")


@app.on_event("shutdown")
async def shutdown():
    if database.is_connected:
        await database.disconnect()
Here, we used FastAPI's event handlers to create a database connection. @app.on_event("startup") creates a database connection pool before the app starts up.

await User.objects.get_or_create(email="[email protected]")

The above line in the startup event adds a dummy entry to our table once the connection has been established. get_or_create makes sure that the entry is created only if it doesn't already exist.

The shutdown event closes all connections to the database. We also added a route to display all the entries in the users table.
Build the new image and spin up the two containers:
$ docker-compose up -d --build
Ensure the users table was created:
$ docker-compose exec db psql --username=fastapi_traefik --dbname=fastapi_traefik
psql (15.1)
Type "help" for help.
fastapi_traefik=# \l
List of databases
Name | Owner | Encoding | Collate | Ctype | Access privileges
-----------------+-----------------+----------+------------+------------+-------------------------------------
fastapi_traefik | fastapi_traefik | UTF8 | en_US.utf8 | en_US.utf8 |
postgres | fastapi_traefik | UTF8 | en_US.utf8 | en_US.utf8 |
template0 | fastapi_traefik | UTF8 | en_US.utf8 | en_US.utf8 | =c/fastapi_traefik +
| | | | | fastapi_traefik=CTc/fastapi_traefik
template1 | fastapi_traefik | UTF8 | en_US.utf8 | en_US.utf8 | =c/fastapi_traefik +
| | | | | fastapi_traefik=CTc/fastapi_traefik
(4 rows)
fastapi_traefik=# \c fastapi_traefik
You are now connected to database "fastapi_traefik" as user "fastapi_traefik".
fastapi_traefik=# \dt
List of relations
Schema | Name | Type | Owner
--------+-------+-------+-----------------
public | users | table | fastapi_traefik
(1 row)
fastapi_traefik=# \q
You can check that the volume was created as well by running:
$ docker volume inspect fastapi-docker-traefik_postgres_data
You should see something similar to:
[
    {
        "CreatedAt": "2023-01-31T15:59:10Z",
        "Driver": "local",
        "Labels": {
            "com.docker.compose.project": "fastapi-docker-traefik",
            "com.docker.compose.version": "2.12.2",
            "com.docker.compose.volume": "postgres_data"
        },
        "Mountpoint": "/var/lib/docker/volumes/fastapi-docker-traefik_postgres_data/_data",
        "Name": "fastapi-docker-traefik_postgres_data",
        "Options": null,
        "Scope": "local"
    }
]
Navigate to 127.0.0.1:8008. You should see:
[
    {
        "id": 1,
        "email": "[email protected]",
        "active": true
    }
]
Production Dockerfile
For deployment of our application, we need to add Gunicorn, a WSGI server, to spawn instances of Uvicorn. Rather than writing our own production Dockerfile, we can leverage uvicorn-gunicorn, a pre-built Docker image with Uvicorn and Gunicorn for high-performance web applications maintained by the core FastAPI author.
Create a new Dockerfile called Dockerfile.prod for use with production builds:
# Dockerfile.prod
FROM tiangolo/uvicorn-gunicorn:python3.11-slim
COPY requirements.txt .
RUN pip install -r requirements.txt
COPY . .
That's it. The tiangolo/uvicorn-gunicorn:python3.11-slim image does much of the work for us. We just copied over the requirements.txt file, installed the dependencies, and then copied over all the project files.
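The base image also reads a handful of environment variables for tuning Gunicorn. For example, in the web service of docker-compose.prod.yml you could set something like the following (the variable names come from the uvicorn-gunicorn image docs; verify them against the tag you pin):

```yaml
services:
  web:
    environment:
      - WORKERS_PER_CORE=1  # Gunicorn workers spawned per CPU core
      - MAX_WORKERS=4       # hard upper bound on the worker count
      - LOG_LEVEL=info      # Gunicorn log level
```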
Next, create a new compose file called docker-compose.prod.yml for production:
# docker-compose.prod.yml

version: '3.8'

services:
  web:
    build:
      context: .
      dockerfile: Dockerfile.prod
    ports:
      - 8009:80
    environment:
      - DATABASE_URL=postgresql://fastapi_traefik_prod:fastapi_traefik_prod@db:5432/fastapi_traefik_prod
    depends_on:
      - db

  db:
    image: postgres:15-alpine
    volumes:
      - postgres_data_prod:/var/lib/postgresql/data/
    expose:
      - 5432
    environment:
      - POSTGRES_USER=fastapi_traefik_prod
      - POSTGRES_PASSWORD=fastapi_traefik_prod
      - POSTGRES_DB=fastapi_traefik_prod

volumes:
  postgres_data_prod:
Compare this file to docker-compose.yml. What's different?
The uvicorn-gunicorn Docker image uses a prestart.sh script to run commands before the app starts. We can use this to wait for Postgres.
Modify Dockerfile.prod like so:
# Dockerfile.prod
FROM tiangolo/uvicorn-gunicorn:python3.11-slim
RUN apt-get update && apt-get install -y netcat
COPY requirements.txt .
RUN pip install -r requirements.txt
COPY . .
Then, add a prestart.sh file to the root of the project:
#!/bin/bash

# prestart.sh

echo "Waiting for postgres connection"

while ! nc -z db 5432; do
    sleep 0.1
done

echo "PostgreSQL started"

exec "$@"
Update the file permissions locally:
$ chmod +x prestart.sh
Bring down the development containers (and the associated volumes with the -v flag):
$ docker-compose down -v
Then, build the production images and spin up the containers:
$ docker-compose -f docker-compose.prod.yml up -d --build
Test that 127.0.0.1:8009 works.
Traefik
Next, let's add Traefik, a reverse proxy, into the mix.
New to Traefik? Check out the official Getting Started guide.
Traefik vs Nginx: Traefik is a modern, HTTP reverse proxy and load balancer. It's often compared to Nginx, a web server and reverse proxy. Since Nginx is primarily a web server, it can be used to serve up a webpage as well as serve as a reverse proxy and load balancer. In general, Traefik is simpler to get up and running while Nginx is more versatile.
Traefik:
- Reverse proxy and load balancer
- Automatically issues and renews SSL certificates, via Let's Encrypt, out-of-the-box
- Use Traefik for simple, Docker-based microservices
Nginx:
- Web server, reverse proxy, and load balancer
- Slightly faster than Traefik
- Use Nginx for complex services
Add a new file called traefik.dev.toml:
# traefik.dev.toml

# listen on port 80
[entryPoints]
  [entryPoints.web]
    address = ":80"

# Traefik dashboard over http
[api]
  insecure = true

[log]
  level = "DEBUG"

[accessLog]

# containers are not discovered automatically
[providers]
  [providers.docker]
    exposedByDefault = false
Here, since we don't want to expose the db service, we set exposedByDefault to false. To manually expose a service, we can add the "traefik.enable=true" label to the Docker Compose file.
Next, update the docker-compose.yml file so that our web service is discovered by Traefik, and add a new traefik service:
# docker-compose.yml

version: '3.8'

services:
  web:
    build: .
    command: bash -c 'while !</dev/tcp/db/5432; do sleep 1; done; uvicorn app.main:app --host 0.0.0.0'
    volumes:
      - .:/app
    expose:  # new
      - 8000
    environment:
      - DATABASE_URL=postgresql://fastapi_traefik:fastapi_traefik@db:5432/fastapi_traefik
    depends_on:
      - db
    labels:  # new
      - "traefik.enable=true"
      - "traefik.http.routers.fastapi.rule=Host(`fastapi.localhost`)"

  db:
    image: postgres:15-alpine
    volumes:
      - postgres_data:/var/lib/postgresql/data/
    expose:
      - 5432
    environment:
      - POSTGRES_USER=fastapi_traefik
      - POSTGRES_PASSWORD=fastapi_traefik
      - POSTGRES_DB=fastapi_traefik

  traefik:  # new
    image: traefik:v2.9.6
    ports:
      - 8008:80
      - 8081:8080
    volumes:
      - "./traefik.dev.toml:/etc/traefik/traefik.toml"
      - "/var/run/docker.sock:/var/run/docker.sock:ro"

volumes:
  postgres_data:
First, the web service is only exposed to other containers on port 8000. We also added the following labels to the web service:

- traefik.enable=true enables Traefik to discover the service
- traefik.http.routers.fastapi.rule=Host(`fastapi.localhost`): when the request has Host=fastapi.localhost, the request is redirected to this service

Take note of the volumes within the traefik service:

- ./traefik.dev.toml:/etc/traefik/traefik.toml maps the local config file to the config file in the container so that the settings are kept in sync
- /var/run/docker.sock:/var/run/docker.sock:ro enables Traefik to discover other containers
To test, first bring down any existing containers:
$ docker-compose down -v
$ docker-compose -f docker-compose.prod.yml down -v
Build the new development images and spin up the containers:
$ docker-compose up -d --build
Navigate to http://fastapi.localhost:8008/. You should see:
[
    {
        "id": 1,
        "email": "[email protected]",
        "active": true
    }
]
You can test via cURL as well:
$ curl -H Host:fastapi.localhost http://0.0.0.0:8008
Next, check out the dashboard at fastapi.localhost:8081:
Bring the containers and volumes down once done:
$ docker-compose down -v
Let's Encrypt
We've successfully created a working example of FastAPI, Docker, and Traefik in development mode. For production, you'll want to configure Traefik to manage TLS certificates via Let's Encrypt. In short, Traefik will automatically contact the certificate authority to issue and renew certificates.
Since Let's Encrypt won't issue certificates for localhost, you'll need to spin up your production containers on a cloud compute instance (like a DigitalOcean droplet or an AWS EC2 instance). You'll also need a valid domain name. If you don't have one, you can create a free domain at Freenom.
We used a DigitalOcean droplet to provision a compute instance with Docker and deployed the production containers to test out the Traefik config.
Assuming you configured a compute instance and set up a free domain, you're now ready to set up Traefik in production mode.
Start by adding a production version of the Traefik config to a file called traefik.prod.toml:
# traefik.prod.toml

[entryPoints]
  [entryPoints.web]
    address = ":80"
    [entryPoints.web.http]
      [entryPoints.web.http.redirections]
        [entryPoints.web.http.redirections.entryPoint]
          to = "websecure"
          scheme = "https"

  [entryPoints.websecure]
    address = ":443"

[accessLog]

[api]
  dashboard = true

[providers]
  [providers.docker]
    exposedByDefault = false

[certificatesResolvers.letsencrypt.acme]
  email = "[email protected]"
  storage = "/certificates/acme.json"
  [certificatesResolvers.letsencrypt.acme.httpChallenge]
    entryPoint = "web"
Make sure to replace [email protected] with your actual email address.
What's happening here:

- entryPoints.web sets the entry point for our insecure HTTP application to port 80
- entryPoints.websecure sets the entry point for our secure HTTPS application to port 443
- entryPoints.web.http.redirections.entryPoint redirects all insecure requests to the secure port
- exposedByDefault = false unexposes all services
- dashboard = true enables the monitoring dashboard
Finally, take note of:
[certificatesResolvers.letsencrypt.acme]
  email = "[email protected]"
  storage = "/certificates/acme.json"
  [certificatesResolvers.letsencrypt.acme.httpChallenge]
    entryPoint = "web"
This is where the Let's Encrypt config lives. We defined where the certificates will be stored along with the verification type, which is an HTTP Challenge.
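As an aside, if you can't serve the challenge over port 80, Traefik also supports a TLS-ALPN challenge that runs over port 443. Swapping the resolver section would look something like this (a sketch; check the ACME docs for your Traefik version before relying on it):

```toml
[certificatesResolvers.letsencrypt.acme]
  email = "[email protected]"
  storage = "/certificates/acme.json"
  # TLS-ALPN challenge handled on the websecure entry point (port 443)
  [certificatesResolvers.letsencrypt.acme.tlsChallenge]
```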
Next, within your DNS provider, create two new A records that both point at your compute instance's public IP:

- fastapi-traefik.your-domain.com - for the web service
- dashboard-fastapi-traefik.your-domain.com - for the Traefik dashboard
Make sure to replace your-domain.com with your actual domain.
Next, update docker-compose.prod.yml like so:
# docker-compose.prod.yml

version: '3.8'

services:
  web:
    build:
      context: .
      dockerfile: Dockerfile.prod
    expose:  # new
      - 80
    environment:
      - DATABASE_URL=postgresql://fastapi_traefik_prod:fastapi_traefik_prod@db:5432/fastapi_traefik_prod
    depends_on:
      - db
    labels:  # new
      - "traefik.enable=true"
      - "traefik.http.routers.fastapi.rule=Host(`fastapi-traefik.your-domain.com`)"
      - "traefik.http.routers.fastapi.tls=true"
      - "traefik.http.routers.fastapi.tls.certresolver=letsencrypt"

  db:
    image: postgres:15-alpine
    volumes:
      - postgres_data_prod:/var/lib/postgresql/data/
    expose:
      - 5432
    environment:
      - POSTGRES_USER=fastapi_traefik_prod
      - POSTGRES_PASSWORD=fastapi_traefik_prod
      - POSTGRES_DB=fastapi_traefik_prod

  traefik:  # new
    build:
      context: .
      dockerfile: Dockerfile.traefik
    ports:
      - 80:80
      - 443:443
    volumes:
      - "/var/run/docker.sock:/var/run/docker.sock:ro"
      - "./traefik-public-certificates:/certificates"
    labels:
      - "traefik.enable=true"
      - "traefik.http.routers.dashboard.rule=Host(`dashboard-fastapi-traefik.your-domain.com`)"
      - "traefik.http.routers.dashboard.tls=true"
      - "traefik.http.routers.dashboard.tls.certresolver=letsencrypt"
      - "traefik.http.routers.dashboard.service=api@internal"
      - "traefik.http.routers.dashboard.middlewares=auth"
      - "traefik.http.middlewares.auth.basicauth.users=testuser:$$apr1$$jIKW.bdS$$eKXe4Lxjgy/rH65wP1iQe1"

volumes:
  postgres_data_prod:
  traefik-public-certificates:
Again, make sure to replace your-domain.com with your actual domain.
What's new here?
In the web service, we added the following labels:

- traefik.http.routers.fastapi.rule=Host(`fastapi-traefik.your-domain.com`) changes the host to the actual domain
- traefik.http.routers.fastapi.tls=true enables HTTPS
- traefik.http.routers.fastapi.tls.certresolver=letsencrypt sets the certificate issuer as Let's Encrypt
Next, for the traefik service, we added the appropriate ports and a volume for the certificates directory. The volume ensures that the certificates persist even if the container is brought down.
As for the labels:

- traefik.http.routers.dashboard.rule=Host(`dashboard-fastapi-traefik.your-domain.com`) defines the dashboard host, so it can be accessed at $Host/dashboard/
- traefik.http.routers.dashboard.tls=true enables HTTPS
- traefik.http.routers.dashboard.tls.certresolver=letsencrypt sets the certificate resolver to Let's Encrypt
- traefik.http.routers.dashboard.middlewares=auth enables HTTP BasicAuth middleware
- traefik.http.middlewares.auth.basicauth.users defines the username and hashed password for logging in
You can create a new password hash using the htpasswd utility:
# username: testuser
# password: password
$ echo $(htpasswd -nb testuser password) | sed -e s/\\$/\\$\\$/g
testuser:$$apr1$$jIKW.bdS$$eKXe4Lxjgy/rH65wP1iQe1
Feel free to use an env_file to store the username and password as environment variables:

USERNAME=testuser
HASHED_PASSWORD=$$apr1$$jIKW.bdS$$eKXe4Lxjgy/rH65wP1iQe1
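With those variables in a .env file next to docker-compose.prod.yml (Compose reads it automatically for ${...} interpolation), the label could reference them like this. This is a sketch only — double-check how your Compose version handles $ escaping during interpolation, since the hash's $$ sequences must survive into the final label value:

```yaml
services:
  traefik:
    labels:
      # values substituted from the .env file at compose time
      - "traefik.http.middlewares.auth.basicauth.users=${USERNAME}:${HASHED_PASSWORD}"
```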
Finally, add a new Dockerfile called Dockerfile.traefik:
# Dockerfile.traefik
FROM traefik:v2.9.6
COPY ./traefik.prod.toml /etc/traefik/traefik.toml
Next, spin up the new container:
$ docker-compose -f docker-compose.prod.yml up -d --build
Ensure the two URLs work:

- https://fastapi-traefik.your-domain.com
- https://dashboard-fastapi-traefik.your-domain.com/dashboard/
Also, make sure that when you access the HTTP versions of the above URLs, you're redirected to the HTTPS versions.
Finally, Let's Encrypt certificates have a validity of 90 days. Traefik will automatically handle renewing the certificates for you behind the scenes, so that's one less thing you'll have to worry about!
Conclusion
In this tutorial, we walked through how to containerize a FastAPI application with Postgres for development. We also created a production-ready Docker Compose file, set up Traefik and Let's Encrypt to serve the application via HTTPS, and enabled a secure dashboard to monitor our services.
In terms of actual deployment to a production environment, you'll probably want to use a:
- Fully-managed database service -- like RDS or Cloud SQL -- rather than managing your own Postgres instance within a container.
- Non-root user for the services.
You can find the code in the fastapi-docker-traefik repo.