Continuously Deploying Django to DigitalOcean with Docker and GitLab

Last updated May 14th, 2024

In this tutorial, we'll look at how to configure GitLab CI to continuously deploy a Django and Docker application to DigitalOcean.

Dependencies:

  1. Django v5.0.6
  2. Docker v25.0.3
  3. Python v3.12.3

Contents

  1. Objectives
  2. Project Setup
  3. DigitalOcean
  4. GitLab CI
  5. Test

Objectives

By the end of this tutorial, you will be able to:

  1. Deploy Django to DigitalOcean with Docker
  2. Configure GitLab CI to continuously deploy Django to DigitalOcean
  3. Set up Passwordless SSH Login
  4. Configure DigitalOcean's Managed Databases for data persistence

Project Setup

Along with Django and Docker, the demo project that we'll be using includes Postgres, Nginx, and Gunicorn.

Curious about how this project was developed? Check out the Dockerizing Django with Postgres, Gunicorn, and Nginx blog post.

Start by cloning down the base project:

$ git clone https://gitlab.com/testdriven/django-gitlab-digitalocean.git --branch base --single-branch
$ cd django-gitlab-digitalocean

To test locally, build the images and spin up the containers:

$ docker-compose up -d --build

Navigate to http://localhost:8000/. You should see:

{
  "hello": "world"
}
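
If the JSON doesn't come up, make sure both services are running and check the logs (standard Docker Compose commands, run from the project root):

$ docker-compose ps
$ docker-compose logs -f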

DigitalOcean

Let's set up DigitalOcean to work with our application.

First, you'll need to sign up for a DigitalOcean account (if you don't already have one), and then generate an access token so you can access the DigitalOcean API.

Add the token to your environment:

$ export DIGITAL_OCEAN_ACCESS_TOKEN=[your_digital_ocean_token]
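
Before moving on, you can sanity-check the token against the account endpoint -- a valid token returns your account details, while a bad one returns a 401 error:

$ curl \
    -H 'Content-Type: application/json' \
    -H 'Authorization: Bearer '$DIGITAL_OCEAN_ACCESS_TOKEN'' \
    "https://api.digitalocean.com/v2/account"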

Droplet

Next, create a new Droplet with Docker pre-installed:

$ curl -X POST \
    -H 'Content-Type: application/json' \
    -H 'Authorization: Bearer '$DIGITAL_OCEAN_ACCESS_TOKEN'' \
    -d '{"name":"django-docker","region":"sfo3","size":"s-2vcpu-4gb","image":"docker-20-04"}' \
    "https://api.digitalocean.com/v2/droplets"

Check the status:

$ curl \
    -H 'Content-Type: application/json' \
    -H 'Authorization: Bearer '$DIGITAL_OCEAN_ACCESS_TOKEN'' \
    "https://api.digitalocean.com/v2/droplets?name=django-docker"

If you have jq installed, then you can parse the JSON response like so:

$ curl \
    -H 'Content-Type: application/json' \
    -H 'Authorization: Bearer '$DIGITAL_OCEAN_ACCESS_TOKEN'' \
    "https://api.digitalocean.com/v2/droplets?name=django-docker" \
  | jq '.droplets[0].status'

DigitalOcean will email you a root password for the Droplet (since no SSH key was provided at creation time). Once the Droplet's status is active, SSH into the instance as root with that password and update it when prompted.
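
The Droplet's public IPv4 address is included in the same droplets response. Assuming the response follows DigitalOcean's usual shape (a networks.v4 array with a type field), you can pull it out with jq:

$ curl \
    -H 'Content-Type: application/json' \
    -H 'Authorization: Bearer '$DIGITAL_OCEAN_ACCESS_TOKEN'' \
    "https://api.digitalocean.com/v2/droplets?name=django-docker" \
  | jq -r '.droplets[0].networks.v4[] | select(.type == "public") | .ip_address'

Keep this address handy -- it's the <YOUR_INSTANCE_IP> used below and the DIGITAL_OCEAN_IP_ADDRESS variable used later on.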

Next, generate a new SSH key:

$ ssh-keygen -t rsa

Save the key to /root/.ssh/id_rsa and don't set a passphrase. This will generate a private and public key -- id_rsa and id_rsa.pub, respectively. To set up passwordless SSH login, append the public key to the authorized_keys file and set the proper permissions:

$ cat ~/.ssh/id_rsa.pub
$ vi ~/.ssh/authorized_keys
$ chmod 600 ~/.ssh/authorized_keys
$ chmod 600 ~/.ssh/id_rsa

Copy the contents of the private key:

$ cat ~/.ssh/id_rsa

Set it as an environment variable on your local machine:

export PRIVATE_KEY='-----BEGIN RSA PRIVATE KEY-----
MIIEpAIBAAKCAQEA04up8hoqzS1+APIB0RhjXyObwHQnOzhAk5Bd7mhkSbPkyhP1
...
iWlX9HNavcydATJc1f0DpzF0u4zY8PY24RVoW8vk+bJANPp1o2IAkeajCaF3w9nf
q/SyqAWVmvwYuIhDiHDaV2A==
-----END RSA PRIVATE KEY-----'

Add the key to the ssh-agent:

$ ssh-add - <<< "${PRIVATE_KEY}"

To test, run:

$ ssh -o StrictHostKeyChecking=no root@<YOUR_INSTANCE_IP> whoami

root

Then, create a new directory for the app:

$ ssh -o StrictHostKeyChecking=no root@<YOUR_INSTANCE_IP> mkdir /app

Database

Moving along, let's spin up a production Postgres database via DigitalOcean's Managed Databases:

$ curl -X POST \
    -H 'Content-Type: application/json' \
    -H 'Authorization: Bearer '$DIGITAL_OCEAN_ACCESS_TOKEN'' \
    -d '{"name":"django-docker-db","region":"sfo3","engine":"pg","version":"16","size":"db-s-2vcpu-4gb","num_nodes":1}' \
    "https://api.digitalocean.com/v2/databases"

Check the status:

$ curl \
    -H 'Content-Type: application/json' \
    -H 'Authorization: Bearer '$DIGITAL_OCEAN_ACCESS_TOKEN'' \
    "https://api.digitalocean.com/v2/databases?name=django-docker-db" \
  | jq '.databases[0].status'

It should take a few minutes to spin up. Once the status is online, grab the connection information:

$ curl \
    -H 'Content-Type: application/json' \
    -H 'Authorization: Bearer '$DIGITAL_OCEAN_ACCESS_TOKEN'' \
    "https://api.digitalocean.com/v2/databases?name=django-docker-db" \
  | jq '.databases[0].connection'

Example response:

{
  "protocol": "postgresql",
  "uri": "postgresql://doadmin:na9tcfew9jw13a2m@django-docker-db-do-user-778274-0.a.db.ondigitalocean.com:25060/defaultdb?sslmode=require",
  "database": "defaultdb",
  "host": "django-docker-db-do-user-778274-0.a.db.ondigitalocean.com",
  "port": 25060,
  "user": "doadmin",
  "password": "na9tcfew9jw13a2m",
  "ssl": true
}
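
If you'd like to verify connectivity right away (optional), and assuming you have the psql client installed, you can paste the uri value straight into psql or pull it with jq:

$ psql "$(curl -s \
    -H 'Content-Type: application/json' \
    -H 'Authorization: Bearer '$DIGITAL_OCEAN_ACCESS_TOKEN'' \
    "https://api.digitalocean.com/v2/databases?name=django-docker-db" \
  | jq -r '.databases[0].connection.uri')"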

GitLab CI

Sign up for a GitLab account (if necessary), and then create a new project (again, if necessary).

Build Stage

Next, add a GitLab CI/CD config file called .gitlab-ci.yml to the project root:

image:
  name: docker/compose:1.29.1
  entrypoint: [""]

services:
  - docker:dind

stages:
  - build

variables:
  DOCKER_HOST: tcp://docker:2375
  DOCKER_DRIVER: overlay2

build:
  stage: build
  before_script:
    - export IMAGE=$CI_REGISTRY/$CI_PROJECT_NAMESPACE/$CI_PROJECT_NAME
    - export WEB_IMAGE=$IMAGE:web
    - export NGINX_IMAGE=$IMAGE:nginx
  script:
    - apk add --no-cache bash
    - chmod +x ./setup_env.sh
    - bash ./setup_env.sh
    - docker login -u $CI_REGISTRY_USER -p $CI_JOB_TOKEN $CI_REGISTRY
    - docker pull $IMAGE:web || true
    - docker pull $IMAGE:nginx || true
    - docker-compose -f docker-compose.ci.yml build
    - docker push $IMAGE:web
    - docker push $IMAGE:nginx

Here, we defined a single build stage where we:

  1. Set the IMAGE, WEB_IMAGE, and NGINX_IMAGE environment variables
  2. Install bash
  3. Set the appropriate permissions for setup_env.sh
  4. Run setup_env.sh
  5. Log in to the GitLab Container Registry
  6. Pull the images if they exist
  7. Build the images
  8. Push the images up to the registry
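
To make the image names concrete: on gitlab.com, CI_REGISTRY resolves to registry.gitlab.com, so for a project at gitlab.com/<your_namespace>/django-gitlab-digitalocean (the namespace is a placeholder), the exported variables end up looking like:

IMAGE=registry.gitlab.com/<your_namespace>/django-gitlab-digitalocean
WEB_IMAGE=registry.gitlab.com/<your_namespace>/django-gitlab-digitalocean:web
NGINX_IMAGE=registry.gitlab.com/<your_namespace>/django-gitlab-digitalocean:nginx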

Add the setup_env.sh file to the project root:

#!/bin/sh

echo DEBUG=0 >> .env
echo SQL_ENGINE=django.db.backends.postgresql >> .env
echo DATABASE=postgres >> .env

echo SECRET_KEY=$SECRET_KEY >> .env
echo SQL_DATABASE=$SQL_DATABASE >> .env
echo SQL_USER=$SQL_USER >> .env
echo SQL_PASSWORD=$SQL_PASSWORD >> .env
echo SQL_HOST=$SQL_HOST >> .env
echo SQL_PORT=$SQL_PORT >> .env

This script generates the required .env file from the environment variables defined in your GitLab project's CI/CD settings (Settings > CI/CD > Variables). Add the variables based on the database connection information from the previous step.

For example:

  1. SECRET_KEY: 9zYGEFk2mn3mWB8Bmg9SAhPy6F4s7cCuT8qaYGVEnu7huGRKW9
  2. SQL_DATABASE: defaultdb
  3. SQL_HOST: django-docker-db-do-user-778274-0.a.db.ondigitalocean.com
  4. SQL_PASSWORD: na9tcfew9jw13a2m
  5. SQL_PORT: 25060
  6. SQL_USER: doadmin
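
With the sample values above, the .env file generated by setup_env.sh during the build job would look like this (the values are just the examples from this tutorial, not real credentials):

DEBUG=0
SQL_ENGINE=django.db.backends.postgresql
DATABASE=postgres
SECRET_KEY=9zYGEFk2mn3mWB8Bmg9SAhPy6F4s7cCuT8qaYGVEnu7huGRKW9
SQL_DATABASE=defaultdb
SQL_USER=doadmin
SQL_PASSWORD=na9tcfew9jw13a2m
SQL_HOST=django-docker-db-do-user-778274-0.a.db.ondigitalocean.com
SQL_PORT=25060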

[Screenshot: GitLab CI/CD variables]

Once done, commit and push your code up to GitLab to trigger a new build. Make sure it passes. You should see the images in the GitLab Container Registry:

[Screenshot: GitLab Container Registry]

Deploy Stage

Next, add a deploy stage to .gitlab-ci.yml and create a global before_script that's used for both stages:

image:
  name: docker/compose:1.29.1
  entrypoint: [""]

services:
  - docker:dind

stages:
  - build
  - deploy

variables:
  DOCKER_HOST: tcp://docker:2375
  DOCKER_DRIVER: overlay2

before_script:
  - export IMAGE=$CI_REGISTRY/$CI_PROJECT_NAMESPACE/$CI_PROJECT_NAME
  - export WEB_IMAGE=$IMAGE:web
  - export NGINX_IMAGE=$IMAGE:nginx
  - apk add --no-cache openssh-client bash
  - chmod +x ./setup_env.sh
  - bash ./setup_env.sh
  - docker login -u $CI_REGISTRY_USER -p $CI_JOB_TOKEN $CI_REGISTRY

build:
  stage: build
  script:
    - docker pull $IMAGE:web || true
    - docker pull $IMAGE:nginx || true
    - docker-compose -f docker-compose.ci.yml build
    - docker push $IMAGE:web
    - docker push $IMAGE:nginx

deploy:
  stage: deploy
  script:
    - mkdir -p ~/.ssh
    - echo "$PRIVATE_KEY" | tr -d '\r' > ~/.ssh/id_rsa
    - cat ~/.ssh/id_rsa
    - chmod 700 ~/.ssh/id_rsa
    - eval "$(ssh-agent -s)"
    - ssh-add ~/.ssh/id_rsa
    - ssh-keyscan -H 'gitlab.com' >> ~/.ssh/known_hosts
    - chmod +x ./deploy.sh
    - scp  -o StrictHostKeyChecking=no -r ./.env ./docker-compose.prod.yml root@$DIGITAL_OCEAN_IP_ADDRESS:/app
    - bash ./deploy.sh

So, in the deploy stage we:

  1. Add the private SSH key to the ssh-agent
  2. Copy over the .env and docker-compose.prod.yml files to the remote server
  3. Set the appropriate permissions for deploy.sh
  4. Run deploy.sh

Add deploy.sh to the project root:

#!/bin/sh

ssh -o StrictHostKeyChecking=no root@$DIGITAL_OCEAN_IP_ADDRESS << 'ENDSSH'
  cd /app
  export $(cat .env | xargs)
  docker login -u $CI_REGISTRY_USER -p $CI_JOB_TOKEN $CI_REGISTRY
  docker pull $IMAGE:web
  docker pull $IMAGE:nginx
  docker compose -f docker-compose.prod.yml up -d
ENDSSH

So, after SSHing into the server, we:

  1. Navigate to the deployment directory
  2. Add the environment variables
  3. Log in to the GitLab Container Registry
  4. Pull the images
  5. Spin up the containers

Add the DIGITAL_OCEAN_IP_ADDRESS and PRIVATE_KEY environment variables to GitLab.
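
The easiest route is the UI (Settings > CI/CD > Variables), but if you prefer the command line, GitLab's project variables API works as well -- the project ID and personal access token below are placeholders, and the multi-line PRIVATE_KEY is simpler to paste in via the UI:

$ curl --request POST \
    --header "PRIVATE-TOKEN: <your_personal_access_token>" \
    --form "key=DIGITAL_OCEAN_IP_ADDRESS" \
    --form "value=<YOUR_INSTANCE_IP>" \
    "https://gitlab.com/api/v4/projects/<your_project_id>/variables"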

Update the setup_env.sh file:

#!/bin/sh

echo DEBUG=0 >> .env
echo SQL_ENGINE=django.db.backends.postgresql >> .env
echo DATABASE=postgres >> .env

echo SECRET_KEY=$SECRET_KEY >> .env
echo SQL_DATABASE=$SQL_DATABASE >> .env
echo SQL_USER=$SQL_USER >> .env
echo SQL_PASSWORD=$SQL_PASSWORD >> .env
echo SQL_HOST=$SQL_HOST >> .env
echo SQL_PORT=$SQL_PORT >> .env
echo WEB_IMAGE=$IMAGE:web  >> .env
echo NGINX_IMAGE=$IMAGE:nginx  >> .env
echo CI_REGISTRY_USER=$CI_REGISTRY_USER   >> .env
echo CI_JOB_TOKEN=$CI_JOB_TOKEN  >> .env
echo CI_REGISTRY=$CI_REGISTRY  >> .env
echo IMAGE=$CI_REGISTRY/$CI_PROJECT_NAMESPACE/$CI_PROJECT_NAME >> .env

Next, add the server's IP to the ALLOWED_HOSTS list in the Django settings.

Commit and push your code to trigger a new build. Once the build passes, navigate to the IP of your instance. You should see:

{
  "hello": "world"
}
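
If the app doesn't respond, SSH into the Droplet and inspect the production stack directly -- the deploy job copied .env and docker-compose.prod.yml to /app:

$ ssh root@<YOUR_INSTANCE_IP>
$ cd /app
$ docker compose -f docker-compose.prod.yml ps
$ docker compose -f docker-compose.prod.yml logs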

Test

Finally, update the deploy stage so that it only runs when changes are made to the main branch:

deploy:
  stage: deploy
  script:
    - mkdir -p ~/.ssh
    - echo "$PRIVATE_KEY" | tr -d '\r' > ~/.ssh/id_rsa
    - cat ~/.ssh/id_rsa
    - chmod 700 ~/.ssh/id_rsa
    - eval "$(ssh-agent -s)"
    - ssh-add ~/.ssh/id_rsa
    - ssh-keyscan -H 'gitlab.com' >> ~/.ssh/known_hosts
    - chmod +x ./deploy.sh
    - scp  -o StrictHostKeyChecking=no -r ./.env ./docker-compose.prod.yml root@$DIGITAL_OCEAN_IP_ADDRESS:/app
    - bash ./deploy.sh
  only:
    - main
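
The only keyword works fine here. GitLab also supports the newer rules syntax; if you prefer it, replacing the only block at the bottom of the deploy job with the following is equivalent for this case:

  rules:
    - if: '$CI_COMMIT_BRANCH == "main"'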

To test, create a new develop branch. Add an exclamation point after world in urls.py:

from django.http import JsonResponse

def home(request):
    return JsonResponse({'hello': 'world!'})

Commit and push your changes to GitLab. Ensure only the build stage runs. Once the build passes, open a merge request against the main branch and merge the changes. This will trigger a new pipeline with both stages -- build and deploy. Ensure the deploy works as expected:

{
  "hello": "world!"
}

--

That's it! You can find the final code in the django-gitlab-digitalocean repo.

Featured Course

Test-Driven Development with Django, Django REST Framework, and Docker

In this course, you'll learn how to set up a development environment with Docker in order to build and deploy a RESTful API powered by Python, Django, and Django REST Framework.
