
Dockerizing Django with Postgres, Gunicorn, and Nginx

Posted by Michael Herman on Nov 2, 2018 | Last updated on Nov 12, 2018

This is a step-by-step tutorial that details how to configure Django to run on Docker along with Postgres, Nginx, and Gunicorn. We’ll also look at how to serve Django static and media files via Nginx.

Dependencies:

  1. Django v2.1
  2. Docker v18.06.1
  3. Python v3.7

Project Setup

Assuming you have Pipenv installed, start by creating a new Django project:

$ mkdir django-on-docker && cd django-on-docker
$ mkdir app && cd app
$ pipenv install django==2.1
$ pipenv shell
(django-on-docker)$ django-admin.py startproject hello_django .
(django-on-docker)$ python manage.py migrate
(django-on-docker)$ python manage.py runserver

Navigate to http://localhost:8000/ to view the Django welcome screen. Kill the server and exit from the Pipenv shell once done.
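
To do so, hit CTRL+C to stop the development server and then leave the shell:

(django-on-docker)$ exit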

Your project directory should look like:

└── app
    ├── Pipfile
    ├── Pipfile.lock
    ├── hello_django
    │   ├── __init__.py
    │   ├── settings.py
    │   ├── urls.py
    │   └── wsgi.py
    └── manage.py

Docker

Install Docker, if you don’t already have it, then add a Dockerfile to the “app” directory:

# pull official base image
FROM python:3.7-alpine

# set environment variables
ENV PYTHONDONTWRITEBYTECODE 1
ENV PYTHONUNBUFFERED 1

# set work directory
WORKDIR /usr/src/app

# install dependencies
RUN pip install --upgrade pip
RUN pip install pipenv
COPY ./Pipfile /usr/src/app/Pipfile
RUN pipenv install --skip-lock --system --dev

# copy project
COPY . /usr/src/app/

So, we start with an Alpine-based Docker image for Python 3.7. We then set some environment variables along with a working directory. Finally, we install Pipenv, copy over the local Pipfile, install the dependencies, and copy over the Django project itself.

Review Docker for Python Developers for more on structuring Dockerfiles as well as some best practices for configuring Docker for Python-based development.

Next, add a docker-compose.yml to the project root:

version: '3.7'

services:
  web:
    build: ./app
    command: python /usr/src/app/manage.py runserver 0.0.0.0:8000
    volumes:
      - ./app/:/usr/src/app/
    ports:
      - 8000:8000
    environment:
      - SECRET_KEY=please_change_me

Review the Compose file reference for info on how this file works.

Update the SECRET_KEY in settings.py:

SECRET_KEY = os.getenv('SECRET_KEY')

Build the image:

$ docker-compose build

Once the image is built, run the container:

$ docker-compose up -d

Navigate to http://localhost:8000/ to again view the welcome screen.
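
If the welcome screen doesn't come up, the container logs are the first place to look:

$ docker-compose logs -f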

Postgres

To configure Postgres, we’ll need to add a new service to the docker-compose.yml file, update the Django settings, and install Psycopg2.

First, add a new service called db to docker-compose.yml:

version: '3.7'

services:
  web:
    build: ./app
    command: python /usr/src/app/manage.py runserver 0.0.0.0:8000
    volumes:
      - ./app/:/usr/src/app/
    ports:
      - 8000:8000
    environment:
      - SECRET_KEY=please_change_me
      - SQL_ENGINE=django.db.backends.postgresql
      - SQL_DATABASE=postgres
      - SQL_USER=postgres
      - SQL_PASSWORD=postgres
      - SQL_HOST=db
      - SQL_PORT=5432
    depends_on:
      - db
  db:
    image: postgres:10.5-alpine
    volumes:
      - postgres_data:/var/lib/postgresql/data/

volumes:
  postgres_data:

To persist the data beyond the life of the container, we configure a volume. This config will bind postgres_data to the “/var/lib/postgresql/data/” directory in the container.

Update the DATABASES dict in settings.py:

DATABASES = {
    'default': {
        'ENGINE': os.getenv('SQL_ENGINE', 'django.db.backends.sqlite3'),
        'NAME': os.getenv('SQL_DATABASE', os.path.join(BASE_DIR, 'db.sqlite3')),
        'USER': os.getenv('SQL_USER', 'user'),
        'PASSWORD': os.getenv('SQL_PASSWORD', 'password'),
        'HOST': os.getenv('SQL_HOST', 'localhost'),
        'PORT': os.getenv('SQL_PORT', '5432'),
    }
}

Update the Dockerfile to install the appropriate packages along with Psycopg2:

# pull official base image
FROM python:3.7-alpine

# set environment variables
ENV PYTHONDONTWRITEBYTECODE 1
ENV PYTHONUNBUFFERED 1

# set work directory
WORKDIR /usr/src/app

# install psycopg2
RUN apk update \
    && apk add --virtual build-deps gcc python3-dev musl-dev \
    && apk add postgresql-dev \
    && pip install psycopg2 \
    && apk del build-deps

# install dependencies
RUN pip install --upgrade pip
RUN pip install pipenv
COPY ./Pipfile /usr/src/app/Pipfile
RUN pipenv install --skip-lock --system --dev

# copy project
COPY . /usr/src/app/

Review this GitHub Issue for more info on installing Psycopg2 in an Alpine-based Docker Image.

Build the new image and spin up the two containers:

$ docker-compose up -d --build

Run the migrations:

$ docker-compose exec web python manage.py migrate --noinput

Ensure the default Django tables were created:

$ docker-compose exec db psql -U postgres

psql (10.5)
Type "help" for help.

postgres=# \l
                                 List of databases
   Name    |  Owner   | Encoding |  Collate   |   Ctype    |   Access privileges
-----------+----------+----------+------------+------------+-----------------------
 postgres  | postgres | UTF8     | en_US.utf8 | en_US.utf8 |
 template0 | postgres | UTF8     | en_US.utf8 | en_US.utf8 | =c/postgres          +
           |          |          |            |            | postgres=CTc/postgres
 template1 | postgres | UTF8     | en_US.utf8 | en_US.utf8 | =c/postgres          +
           |          |          |            |            | postgres=CTc/postgres
(3 rows)

postgres=# \c postgres
You are now connected to database "postgres" as user "postgres".
postgres=# \dt
                   List of relations
 Schema |            Name            | Type  |  Owner
--------+----------------------------+-------+----------
 public | auth_group                 | table | postgres
 public | auth_group_permissions     | table | postgres
 public | auth_permission            | table | postgres
 public | auth_user                  | table | postgres
 public | auth_user_groups           | table | postgres
 public | auth_user_user_permissions | table | postgres
 public | django_admin_log           | table | postgres
 public | django_content_type        | table | postgres
 public | django_migrations          | table | postgres
 public | django_session             | table | postgres
(10 rows)

postgres=# \q

You can check that the volume was created as well by running:

$ docker volume inspect django-on-docker_postgres_data

You should see something similar to:

[
    {
        "CreatedAt": "2018-11-10T21:27:47Z",
        "Driver": "local",
        "Labels": {
            "com.docker.compose.project": "django-on-docker",
            "com.docker.compose.version": "1.22.0",
            "com.docker.compose.volume": "postgres_data"
        },
        "Mountpoint": "/var/lib/docker/volumes/django-on-docker_postgres_data/_data",
        "Name": "django-on-docker_postgres_data",
        "Options": null,
        "Scope": "local"
    }
]

Next, add an entrypoint.sh file to the “app” directory to verify Postgres is healthy before applying the migrations and running the Django development server:

#!/bin/sh

if [ "$DATABASE" = "postgres" ]
then
    echo "Waiting for postgres..."

    while ! nc -z $SQL_HOST $SQL_PORT; do
      sleep 0.1
    done

    echo "PostgreSQL started"
fi

python manage.py flush --no-input
python manage.py migrate

exec "$@"

Update the file permissions locally:

$ chmod +x app/entrypoint.sh

Then, update the Dockerfile to copy over the entrypoint.sh file and run it as the Docker entrypoint command:

# pull official base image
FROM python:3.7-alpine

# set environment variables
ENV PYTHONDONTWRITEBYTECODE 1
ENV PYTHONUNBUFFERED 1

# set work directory
WORKDIR /usr/src/app

# install psycopg2
RUN apk update \
    && apk add --virtual build-deps gcc python3-dev musl-dev \
    && apk add postgresql-dev \
    && pip install psycopg2 \
    && apk del build-deps

# install dependencies
RUN pip install --upgrade pip
RUN pip install pipenv
COPY ./Pipfile /usr/src/app/Pipfile
RUN pipenv install --skip-lock --system --dev

# copy entrypoint.sh
COPY ./entrypoint.sh /usr/src/app/entrypoint.sh

# copy project
COPY . /usr/src/app/

# run entrypoint.sh
ENTRYPOINT ["/usr/src/app/entrypoint.sh"]

Add the DATABASE environment variable to docker-compose.yml for the entrypoint.sh script:

version: '3.7'

services:
  web:
    build: ./app
    command: python /usr/src/app/manage.py runserver 0.0.0.0:8000
    volumes:
      - ./app/:/usr/src/app/
    ports:
      - 8000:8000
    environment:
      - SECRET_KEY=please_change_me
      - SQL_ENGINE=django.db.backends.postgresql
      - SQL_DATABASE=postgres
      - SQL_USER=postgres
      - SQL_PASSWORD=postgres
      - SQL_HOST=db
      - SQL_PORT=5432
      - DATABASE=postgres
    depends_on:
      - db
  db:
    image: postgres:10.5-alpine
    volumes:
      - postgres_data:/var/lib/postgresql/data/

volumes:
  postgres_data:

Test it out again:

  1. Re-build the images
  2. Run the containers (see the command below)
  3. Try http://localhost:8000/
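
As before, re-building the images and running the containers comes down to a single command:

$ docker-compose up -d --build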

Despite adding Postgres, we can still create an independent Docker image for Django, since the database settings fall back to SQLite when the SQL_* environment variables are not set. To test, build a new image and then run a new container:

$ docker build -f ./app/Dockerfile -t hello_django:latest ./app
$ docker run -p 8001:8000 -e "SECRET_KEY=please_change_me" \
    hello_django python /usr/src/app/manage.py runserver 0.0.0.0:8000

You should be able to view the welcome page at http://localhost:8001.
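
Since this container was started in the foreground, CTRL+C will stop it. Alternatively, from another terminal, you can look up the container ID and stop it that way:

$ docker ps
$ docker stop <CONTAINER_ID>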

Gunicorn

Moving along, add Gunicorn, a production-grade WSGI server, to the Pipfile:

[[source]]

url = "https://pypi.python.org/simple"
verify_ssl = true
name = "pypi"


[packages]

django = "==2.1"
gunicorn = "==19.9.0"


[dev-packages]



[requires]

python_version = "3.7"

Update the default command in docker-compose.yml to run Gunicorn rather than the Django development server:

version: '3.7'

services:
  web:
    build: ./app
    command: gunicorn hello_django.wsgi:application --bind 0.0.0.0:8000
    volumes:
      - ./app/:/usr/src/app/
    ports:
      - 8000:8000
    environment:
      - SECRET_KEY=please_change_me
      - SQL_ENGINE=django.db.backends.postgresql
      - SQL_DATABASE=postgres
      - SQL_USER=postgres
      - SQL_PASSWORD=postgres
      - SQL_HOST=db
      - SQL_PORT=5432
      - DATABASE=postgres
    depends_on:
      - db
  db:
    image: postgres:10.5-alpine
    volumes:
      - postgres_data:/var/lib/postgresql/data/

volumes:
  postgres_data:

Test it out:

$ docker-compose up -d --build
$ open http://localhost:8000/

Nginx

Next, let’s add Nginx into the mix to act as a reverse proxy for Gunicorn to handle client requests as well as serve up static files.

Add the service to docker-compose.yml:

nginx:
  build: ./nginx
  ports:
    - 1337:80
  depends_on:
    - web

Then, in the local project root, create the following files and folders:

└── nginx
    ├── Dockerfile
    └── nginx.conf

Dockerfile:

FROM nginx:1.15.0-alpine

RUN rm /etc/nginx/conf.d/default.conf
COPY nginx.conf /etc/nginx/conf.d

nginx.conf:

upstream hello_django {
    server web:8000;
}

server {

    listen 80;

    location / {
        proxy_pass http://hello_django;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header Host $host;
        proxy_redirect off;
    }

}

Review Using NGINX and NGINX Plus as an Application Gateway with uWSGI and Django for more info on configuring Nginx to work with Django.
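
Since a new service and image were added, re-build the images and run the containers again before testing:

$ docker-compose up -d --build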

Test it out again at http://localhost:1337. Then, update the web service so that port 8000 is only exposed internally, to other services:

web:
  build: ./app
  command: gunicorn hello_django.wsgi:application --bind 0.0.0.0:8000
  volumes:
    - ./app/:/usr/src/app/
  expose:
    - 8000
  environment:
    - SECRET_KEY=please_change_me
    - SQL_ENGINE=django.db.backends.postgresql
    - SQL_DATABASE=postgres
    - SQL_USER=postgres
    - SQL_PASSWORD=postgres
    - SQL_HOST=db
    - SQL_PORT=5432
    - DATABASE=postgres
  depends_on:
    - db

Your project structure should now look like:

├── app
│   ├── Dockerfile
│   ├── Pipfile
│   ├── Pipfile.lock
│   ├── entrypoint.sh
│   ├── hello_django
│   │   ├── __init__.py
│   │   ├── settings.py
│   │   ├── urls.py
│   │   └── wsgi.py
│   └── manage.py
├── docker-compose.yml
└── nginx
    ├── Dockerfile
    └── nginx.conf

Since Gunicorn is an application server, it will not serve up static files. So, how should both static and media files be handled in this particular configuration?

Static Files

Update settings.py:

STATIC_URL = '/staticfiles/'
STATIC_ROOT = os.path.join(BASE_DIR, 'staticfiles')

Then, collect the static files in entrypoint.sh:

#!/bin/sh

if [ "$DATABASE" = "postgres" ]
then
    echo "Waiting for postgres..."

    while ! nc -z $SQL_HOST $SQL_PORT; do
      sleep 0.1
    done

    echo "PostgreSQL started"
fi

python manage.py flush --no-input
python manage.py migrate
python manage.py collectstatic --no-input

exec "$@"
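
If you ever need to re-collect the static files without restarting the containers, the same command can also be run manually:

$ docker-compose exec web python manage.py collectstatic --no-input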

Add a volume to the web and nginx services so that each container will share a directory named “staticfiles”:

version: '3.7'

services:
  web:
    build: ./app
    command: gunicorn hello_django.wsgi:application --bind 0.0.0.0:8000
    volumes:
      - ./app/:/usr/src/app/
      - static_volume:/usr/src/app/staticfiles
    expose:
      - 8000
    environment:
      - SECRET_KEY=please_change_me
      - SQL_ENGINE=django.db.backends.postgresql
      - SQL_DATABASE=postgres
      - SQL_USER=postgres
      - SQL_PASSWORD=postgres
      - SQL_HOST=db
      - SQL_PORT=5432
      - DATABASE=postgres
    depends_on:
      - db
  db:
    image: postgres:10.5-alpine
    volumes:
      - postgres_data:/var/lib/postgresql/data/
  nginx:
    build: ./nginx
    volumes:
      - static_volume:/usr/src/app/staticfiles
    ports:
      - 1337:80
    depends_on:
      - web

volumes:
  postgres_data:
  static_volume:

Update the Nginx configuration to route static file requests to the “staticfiles” folder:

upstream hello_django {
    server web:8000;
}

server {

    listen 80;

    location / {
        proxy_pass http://hello_django;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header Host $host;
        proxy_redirect off;
    }

    location /staticfiles/ {
        alias /usr/src/app/staticfiles/;
    }

}

Now, any request to http://localhost:1337/staticfiles/* will be served from the “staticfiles” directory.

To test, first re-build the images and spin up the new containers as usual. When the collectstatic command runs, the static files are placed in the “staticfiles” directory. Then, navigate to http://localhost:1337/admin and ensure the static assets load correctly. You can also verify in the logs (via docker-compose logs -f) that requests for the static files are served up successfully by Nginx.
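
For reference, the relevant commands are:

$ docker-compose up -d --build
$ docker-compose logs -f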

Media Files

To test out the handling of media files, start by creating a new Django app:

$ docker-compose exec web python manage.py startapp upload

Add the new app to the INSTALLED_APPS list in settings.py:

INSTALLED_APPS = [
    'django.contrib.admin',
    'django.contrib.auth',
    'django.contrib.contenttypes',
    'django.contrib.sessions',
    'django.contrib.messages',
    'django.contrib.staticfiles',

    'upload',
]

app/upload/views.py:

from django.shortcuts import render
from django.core.files.storage import FileSystemStorage


def image_upload(request):
    if request.method == 'POST' and request.FILES['image_file']:
        image_file = request.FILES['image_file']
        fs = FileSystemStorage()
        filename = fs.save(image_file.name, image_file)
        image_url = fs.url(filename)
        print(image_url)
        return render(request, 'upload.html', {
            'image_url': image_url
        })
    return render(request, 'upload.html')

Add a “templates” directory to the “app/upload” directory, and then add a new template called upload.html:

{% block content %}

  <form action="{% url "upload" %}" method="post" enctype="multipart/form-data">
    {% csrf_token %}
    <input type="file" name="image_file">
    <input type="submit" value="submit" />
  </form>

  {% if image_url %}
    <p>File uploaded at: <a href="{{ image_url }}">{{ image_url }}</a></p>
  {% endif %}

{% endblock %}
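
Assuming the default app layout created by startapp (and Django's default APP_DIRS template loading), the “upload” app should now look something like this:

└── upload
    ├── __init__.py
    ├── admin.py
    ├── apps.py
    ├── migrations
    │   └── __init__.py
    ├── models.py
    ├── templates
    │   └── upload.html
    ├── tests.py
    └── views.py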

app/hello_django/urls.py:

from django.contrib import admin
from django.urls import path

from upload.views import image_upload

urlpatterns = [
    path('', image_upload, name='upload'),
    path('admin/', admin.site.urls),
]

app/hello_django/settings.py:

MEDIA_URL = '/mediafiles/'
MEDIA_ROOT = os.path.join(BASE_DIR, 'mediafiles')

Add another volume to the web and nginx services:

version: '3.7'

services:
  web:
    build: ./app
    command: gunicorn hello_django.wsgi:application --bind 0.0.0.0:8000
    volumes:
      - ./app/:/usr/src/app/
      - static_volume:/usr/src/app/staticfiles
      - media_volume:/usr/src/app/mediafiles
    expose:
      - 8000
    environment:
      - SECRET_KEY=please_change_me
      - SQL_ENGINE=django.db.backends.postgresql
      - SQL_DATABASE=postgres
      - SQL_USER=postgres
      - SQL_PASSWORD=postgres
      - SQL_HOST=db
      - SQL_PORT=5432
      - DATABASE=postgres
    depends_on:
      - db
  db:
    image: postgres:10.5-alpine
    volumes:
      - postgres_data:/var/lib/postgresql/data/
  nginx:
    build: ./nginx
    volumes:
      - static_volume:/usr/src/app/staticfiles
      - media_volume:/usr/src/app/mediafiles
    ports:
      - 1337:80
    depends_on:
      - web

volumes:
  postgres_data:
  static_volume:
  media_volume:

Update the Nginx config again:

upstream hello_django {
    server web:8000;
}

server {

    listen 80;

    location / {
        proxy_pass http://hello_django;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header Host $host;
        proxy_redirect off;
    }

    location /staticfiles/ {
        alias /usr/src/app/staticfiles/;
    }

    location /mediafiles/ {
        alias /usr/src/app/mediafiles/;
    }

}

Re-build:

$ docker-compose up -d --build

Test it out one final time. You should be able to upload an image at http://localhost:1337/, and then view the image at http://localhost:1337/mediafiles/IMAGE_FILE_NAME.
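
To double-check that the upload landed in the shared media volume, you can list the directory from the web container:

$ docker-compose exec web ls /usr/src/app/mediafiles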


Cheers! You can find the code in the django-on-docker repo.

