Dockerizing Celery and Django

Part 1, Chapter 4


Objectives

By the end of this chapter, you will be able to:

  1. Explain what Docker Compose is used for and why you may want to use it
  2. Use Docker Compose to create and manage Django, Postgres, Redis, and Celery
  3. Speed up the development of an application using Docker and Docker Compose

Docker Compose

Docker Compose is a tool used for defining and running multi-container Docker applications. It uses YAML files to configure the application's services and performs the creation and start-up processes for all of the containers with a single command.

We've already looked at how to serve up an instance of Redis with Docker using a single command:

$ docker run -p 6379:6379 --name some-redis -d redis

Well, in this chapter, we'll take this a step further and containerize our entire infrastructure to simplify development. Before we do that, though, let's look at the why: Why should we serve up our development environment in Docker containers with Docker Compose?

  1. Instead of having to run each process (e.g., Django, Celery worker, Celery beat, Flower, Redis, Postgres, etc.) manually, each from a different terminal window, after we containerize each service, Docker Compose enables us to manage and run the containers using a single command.
  2. Docker Compose will also simplify configuration. The Celery config is currently tied to our Django app's config. This is not ideal. With Docker Compose, we can easily create different configurations for both Django and Celery all from a single YAML file.
  3. Docker, in general, allows us to create isolated, reproducible, and portable development environments. So, you won't have to mess around with a virtual environment or install tools like Postgres and Redis on your local OS.

Install Docker Compose

Start by downloading and installing Docker if you haven't already done so.

If you're on a Mac or Windows machine, Docker Desktop will install both Docker and Docker Compose. Linux users will have to download and install them separately.

$ docker --version
Docker version 20.10.16, build aa7e414

$ docker compose version
Docker Compose version v2.6.0

Note: If docker compose version doesn't work for you, try docker-compose version and review the Migrate to Compose V2 guide.

Config File Structure

Let's start with our config file structure, which should help you better understand the entire workflow:

├── compose
│   ├── local
│   │   └── django
│   │       ├── Dockerfile
│   │       ├── celery
│   │       │   ├── beat
│   │       │   │   └── start
│   │       │   ├── flower
│   │       │   │   └── start
│   │       │   └── worker
│   │       │       └── start
│   │       ├── entrypoint
│   │       └── start
│   └── production
│       ├── django
│       │   ├── Dockerfile
│       │   ├── celery
│       │   │   ├── beat
│       │   │   │   └── start
│       │   │   ├── flower
│       │   │   │   └── start
│       │   │   └── worker
│       │   │       └── start
│       │   ├── entrypoint
│       │   └── start
│       └── nginx
│           ├── Dockerfile
│           └── nginx.conf
├── django_celery_example
│     # files omitted for brevity
├── docker-compose.prod.yml
├── docker-compose.yml
├── manage.py
├── polls
│     # files omitted for brevity
└── requirements.txt

Don't create the new files and folders just yet; we'll create them throughout the remainder of the chapter.

With Docker Compose, you describe the desired end state of your environment using a declarative syntax in a docker-compose.yml file. The above file structure uses two such files: One for development and the other for production -- docker-compose.yml and docker-compose.prod.yml, respectively.

The "compose" folder holds configuration files, shell scripts, and the associated Dockerfiles for each environment.

The above config structure is based on the config found in the cookiecutter-django project, which is clean and easy to maintain.

Application Services

Docker Setup Diagram

Start by adding a docker-compose.yml file to the project root:

version: '3.8'

services:
  web:
    build:
      context: .
      dockerfile: ./compose/local/django/Dockerfile
    image: django_celery_example_web
    # '/start' is the shell script used to run the service
    command: /start
    # this volume is used to map the files and folders on the host to the container
    # so if we change code on the host, code in the docker container will also be changed
    volumes:
      - .:/app
    ports:
      - 8010:8000
    # env_file is used to manage the env variables of our project
    env_file:
      - ./.env/.dev-sample
    depends_on:
      - redis
      - db

  db:
    image: postgres:16-alpine
    volumes:
      - postgres_data:/var/lib/postgresql/data/
    environment:
      - POSTGRES_DB=hello_django
      - POSTGRES_USER=hello_django
      - POSTGRES_PASSWORD=hello_django

  redis:
    image: redis:7-alpine

  celery_worker:
    build:
      context: .
      dockerfile: ./compose/local/django/Dockerfile
    image: django_celery_example_celery_worker
    command: /start-celeryworker
    volumes:
      - .:/app
    env_file:
      - ./.env/.dev-sample
    depends_on:
      - redis
      - db

  celery_beat:
    build:
      context: .
      dockerfile: ./compose/local/django/Dockerfile
    image: django_celery_example_celery_beat
    command: /start-celerybeat
    volumes:
      - .:/app
    env_file:
      - ./.env/.dev-sample
    depends_on:
      - redis
      - db

  flower:
    build:
      context: .
      dockerfile: ./compose/local/django/Dockerfile
    image: django_celery_example_celery_flower
    command: /start-flower
    volumes:
      - .:/app
    env_file:
      - ./.env/.dev-sample
    ports:
      - 5557:5555
    depends_on:
      - redis
      - db

volumes:
  postgres_data:

Here, we defined six services:

  1. web is the Django dev server
  2. db is the Postgres server
  3. redis is the Redis service, which will be used as the Celery message broker and result backend
  4. celery_worker is the Celery worker process
  5. celery_beat is the Celery beat process for scheduled tasks
  6. flower is the Celery dashboard

Review the web, db, and redis services on your own, taking note of the comments. To simplify things, the web, celery_worker, celery_beat, and flower services will all use the same Dockerfile.

Environment Variables

Create a new folder called ".env" in the project root to store environment variables. Then, add a new file to that folder called .dev-sample:

DEBUG=1
SECRET_KEY=dbaa1_i7%*3r9-=z-+_mz4r-!qeed@(-a_r(g@k8jo8y3r27%m
DJANGO_ALLOWED_HOSTS=*

SQL_ENGINE=django.db.backends.postgresql
SQL_DATABASE=hello_django
SQL_USER=hello_django
SQL_PASSWORD=hello_django
SQL_HOST=db
SQL_PORT=5432

CELERY_BROKER=redis://redis:6379/0
CELERY_BACKEND=redis://redis:6379/0

The database login credentials must match the db service's environment variables:

Django Variable   Postgres Variable   Value
SQL_DATABASE      POSTGRES_DB         hello_django
SQL_USER          POSTGRES_USER       hello_django
SQL_PASSWORD      POSTGRES_PASSWORD   hello_django

Next, update the DATABASES, CELERY_BROKER_URL, and CELERY_RESULT_BACKEND settings in settings.py:

DATABASES = {
    "default": {
        "ENGINE": os.environ.get("SQL_ENGINE", "django.db.backends.sqlite3"),
        "NAME": os.environ.get("SQL_DATABASE", os.path.join(BASE_DIR, "db.sqlite3")),
        "USER": os.environ.get("SQL_USER", "user"),
        "PASSWORD": os.environ.get("SQL_PASSWORD", "password"),
        "HOST": os.environ.get("SQL_HOST", "localhost"),
        "PORT": os.environ.get("SQL_PORT", "5432"),
    }
}

CELERY_BROKER_URL = os.environ.get("CELERY_BROKER", "redis://127.0.0.1:6379/0")
CELERY_RESULT_BACKEND = os.environ.get("CELERY_BACKEND", "redis://127.0.0.1:6379/0")

Make sure to import os at the top:

import os
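
Once the containers are up later in this chapter, you can verify that these environment variables are being picked up. Here's a quick sanity check -- a sketch, assuming you run it from the Django shell inside the web container:

from django.conf import settings

# inside the containers, the database host should resolve to the "db" service
print(settings.DATABASES["default"]["HOST"])    # expected: db

# Celery should point at the "redis" service
print(settings.CELERY_BROKER_URL)               # expected: redis://redis:6379/0
print(settings.CELERY_RESULT_BACKEND)           # expected: redis://redis:6379/0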

Dockerfile

Next, create the following files and folders in the project root:

└── compose
    └── local
        └── django
            └── Dockerfile

Next, update the Dockerfile:

FROM python:3.11-slim-buster

ENV PYTHONUNBUFFERED 1
ENV PYTHONDONTWRITEBYTECODE 1

RUN apt-get update \
  # dependencies for building Python packages
  && apt-get install -y build-essential \
  # psycopg2 dependencies
  && apt-get install -y libpq-dev \
  # Translations dependencies
  && apt-get install -y gettext \
  # Additional dependencies
  && apt-get install -y git \
  # cleaning up unused files
  && apt-get purge -y --auto-remove -o APT::AutoRemove::RecommendsImportant=false \
  && rm -rf /var/lib/apt/lists/*

# Requirements are installed here to ensure they will be cached.
COPY ./requirements.txt /requirements.txt
RUN pip install -r /requirements.txt

COPY ./compose/local/django/entrypoint /entrypoint
RUN sed -i 's/\r$//g' /entrypoint
RUN chmod +x /entrypoint

COPY ./compose/local/django/start /start
RUN sed -i 's/\r$//g' /start
RUN chmod +x /start

COPY ./compose/local/django/celery/worker/start /start-celeryworker
RUN sed -i 's/\r$//g' /start-celeryworker
RUN chmod +x /start-celeryworker

COPY ./compose/local/django/celery/beat/start /start-celerybeat
RUN sed -i 's/\r$//g' /start-celerybeat
RUN chmod +x /start-celerybeat

COPY ./compose/local/django/celery/flower/start /start-flower
RUN sed -i 's/\r$//g' /start-flower
RUN chmod +x /start-flower

WORKDIR /app

ENTRYPOINT ["/entrypoint"]

A Dockerfile is a text file that contains the commands required to build an image.

Notes:

  1. RUN sed -i 's/\r$//g' /entrypoint converts Windows-style line endings (CRLF) in the shell scripts to UNIX-style line endings (LF).
  2. We copied the different service start shell scripts to the root directory of the final image.
  3. Since the source code will be housed under the "/app" directory of the container (from the .:/app volume in the Docker Compose file), we set the working directory to /app.

Entrypoint

We used a depends_on key for the web service to ensure that it does not start until both the redis and the db services are up. However, just because the db container is up does not mean the database is up and ready to handle connections. So, we can use a shell script called entrypoint to ensure that we can actually connect to the database before we spin up the web service.

compose/local/django/entrypoint:

#!/bin/bash

# exit immediately if any command fails
set -o errexit
# fail if any command in a pipeline fails
set -o pipefail
# exit if an unset variable is referenced
set -o nounset

postgres_ready() {
python << END
import sys

import psycopg2

try:
    psycopg2.connect(
        dbname="${SQL_DATABASE}",
        user="${SQL_USER}",
        password="${SQL_PASSWORD}",
        host="${SQL_HOST}",
        port="${SQL_PORT}",
    )
except psycopg2.OperationalError:
    sys.exit(-1)
sys.exit(0)

END
}
until postgres_ready; do
  >&2 echo 'Waiting for PostgreSQL to become available...'
  sleep 1
done
>&2 echo 'PostgreSQL is available'

exec "$@"

Notes:

  1. We defined a postgres_ready function that gets called in a loop, which keeps running until the Postgres server is available.
  2. exec "$@" is used to make the entrypoint a pass through to ensure that Docker runs the command the user passes in (command: /start, in our case). For more, review this Stack Overflow answer.

Again, this entrypoint script and the above Dockerfile will be used with the web, celery_worker, celery_beat, and flower services to ensure they don't run their respective start scripts until Postgres is up and running.

Why didn't we wait for Redis to be up in the entrypoint script? Postgres usually starts much slower than Redis, so we can assume that Redis will be up once Postgres is up.
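
If you'd rather not rely on that assumption, a similar check could be added for Redis. Here's a minimal sketch using the redis package from requirements.txt (the redis_ready function and its use here are illustrative, not part of the project files):

import os
import sys

import redis


def redis_ready():
    # ping the broker defined by the CELERY_BROKER env variable
    try:
        return redis.Redis.from_url(os.environ["CELERY_BROKER"]).ping()
    except redis.exceptions.ConnectionError:
        return False


if __name__ == "__main__":
    sys.exit(0 if redis_ready() else 1)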

Start Scripts

Let's add the start scripts.

Start by adding the files and folders to the "compose/local/django" folder so it looks like this:

└── django
    ├── Dockerfile
    ├── celery
    │   ├── beat
    │   │   └── start
    │   ├── flower
    │   │   └── start
    │   └── worker
    │       └── start
    ├── entrypoint
    └── start

Now, update each of the four start scripts.

compose/local/django/start:

#!/bin/bash

set -o errexit
set -o pipefail
set -o nounset

python manage.py migrate
python manage.py runserver 0.0.0.0:8000

compose/local/django/celery/beat/start:

#!/bin/bash

set -o errexit
set -o nounset

rm -f './celerybeat.pid'
celery -A django_celery_example beat -l INFO

compose/local/django/celery/worker/start:

#!/bin/bash

set -o errexit
set -o nounset

celery -A django_celery_example worker -l INFO

compose/local/django/celery/flower/start:

#!/bin/bash

set -o errexit
set -o nounset

worker_ready() {
    celery -A django_celery_example inspect ping
}

until worker_ready; do
  >&2 echo 'Celery workers not available'
  sleep 1
done
>&2 echo 'Celery workers are available'

celery -A django_celery_example  \
    --broker="${CELERY_BROKER}" \
    flower

In this final script, we used the same logic from our entrypoint to ensure that Flower doesn't start until the workers are ready.
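
For reference, the same readiness check could be done in Python instead of via the CLI, using Celery's inspection API. This is just a sketch, not part of the project files; it assumes your celery.py defines the app instance as app, as in the standard setup:

from django_celery_example.celery import app


def workers_ready():
    # ping() returns a dict of replies from live workers, or None if none respond
    return app.control.inspect().ping() is not None


if workers_ready():
    print("Celery workers are available")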

Basic Workflow

Docker Build Diagram

With the config done, let's look at how everything works together in order to better understand the whole workflow.

Make sure you have a requirements.txt file in the project root:

django==5.0
celery==5.3.6
redis==5.0.1
flower==2.0.1
psycopg2-binary==2.9.9         # new

Start by building the images:

$ docker compose build

Once the images are built, spin up the containers in detached mode:

$ docker compose up -d

This will spin up each of the containers based on the order defined in the depends_on option:

  1. redis and db containers first
  2. Then the web, celery_worker, celery_beat, and flower containers

Once the containers are up, the entrypoint scripts will execute and then, once Postgres is up, the respective start scripts will execute. The Django migrations will be applied and the development server will run. The Django app should then be available.

Make sure you can view the Django welcome screen at http://localhost:8010/. You should be able to view the Flower dashboard at http://localhost:5557/ as well.

Troubleshooting

If you run into problems, you can view the logs with:

$ docker compose logs -f

Try to fix the issue, and then re-build the images and spin up the containers again.

For Apple Silicon Mac users, if you get some weird errors, you might need to run export DOCKER_DEFAULT_PLATFORM=linux/amd64 before running the Docker Compose commands. For more details, please check out this GitHub issue.

Useful Commands

To enter the shell of a specific container that's up and running, run the following command:

$ docker compose exec <service-name> bash

# for example:
# docker compose exec web bash

If you want to run a command against a new container that's not currently running, run:

$ docker compose run --rm web bash

The --rm option tells Docker to delete the container after you exit the bash shell.

Simple Test

Let's test things out by entering the Django shell of the running web service:

$ docker compose exec web python manage.py shell

Then, run the following code:

>>> from django_celery_example.celery import divide
>>>
>>> divide.delay(1, 2)
<AsyncResult: 2225dcc3-2299-4b9b-96aa-5cf2bd4a73ca>

Take note of the task ID (2225dcc3-2299-4b9b-96aa-5cf2bd4a73ca in the above case).
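
While you're still in the Django shell, you can also inspect the task state and result programmatically via the AsyncResult API (substitute your own task ID, and keep in mind the task takes a few seconds to finish):

>>> result = divide.AsyncResult("2225dcc3-2299-4b9b-96aa-5cf2bd4a73ca")
>>> result.status   # "PENDING" until the worker finishes
'SUCCESS'
>>> result.result
0.5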

Open a new terminal window, navigate to the project directory, and view the logs of the Celery worker:

$ docker compose logs celery_worker

You should see something similar to:

django-celery-project-celery_worker-1  | [2024-01-02 02:30:28,994: INFO/MainProcess] Task django_celery_example.celery.divide[2225dcc3-2299-4b9b-96aa-5cf2bd4a73ca] received
django-celery-project-celery_worker-1  | [2024-01-02 02:30:34,003: INFO/ForkPoolWorker-16] Task django_celery_example.celery.divide[2225dcc3-2299-4b9b-96aa-5cf2bd4a73ca] succeeded in 5.006151466994197s: 0.5

In the first window, exit from the shell.

Now, let's enter the shell of the redis service:

$ docker compose exec redis sh

We used sh since bash is not available in this container.

Next, using the task ID from above, let's see the task result directly from Redis:

$ redis-cli
127.0.0.1:6379> MGET celery-task-meta-2225dcc3-2299-4b9b-96aa-5cf2bd4a73ca
1) "{\"status\": \"SUCCESS\", \"result\": 0.5, \"traceback\": null, \"children\": [], \"date_done\": \"2024-01-02T02:30:33.998373\", \"task_id\": \"2225dcc3-2299-4b9b-96aa-5cf2bd4a73ca\"}"

Make sure you can see the result in the Flower Dashboard as well.

Conclusion

In this chapter, we looked at how to use Docker and Docker Compose to run Django, Postgres, Redis, and Celery. You should be able to spin up each service from a single terminal window with Docker Compose.

Your final project structure after this chapter should now look like this:

├── .env
│   └── .dev-sample
├── celerybeat-schedule
├── compose
│   └── local
│       └── django
│           ├── Dockerfile
│           ├── celery
│           │   ├── beat
│           │   │   └── start
│           │   ├── flower
│           │   │   └── start
│           │   └── worker
│           │       └── start
│           ├── entrypoint
│           └── start
├── django_celery_example
│   ├── __init__.py
│   ├── asgi.py
│   ├── celery.py
│   ├── settings.py
│   ├── urls.py
│   └── wsgi.py
├── docker-compose.yml
├── manage.py
├── polls
│   ├── __init__.py
│   ├── admin.py
│   ├── apps.py
│   ├── migrations
│   │   └── __init__.py
│   ├── models.py
│   ├── tests.py
│   └── views.py
└── requirements.txt

Be sure to remove the old SQLite database file, db.sqlite3, and the "venv" folder if you haven't already.



