Deployment

Part 1, Lesson 10



With the routes up and tested, let’s get this app deployed!


Follow the instructions here to sign up for AWS (if necessary) and create an IAM user (again, if necessary), making sure to add the credentials to a ~/.aws/credentials file.
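The credentials file uses a simple INI format; at a minimum it should contain a default profile (the values below are placeholders):

[default]
aws_access_key_id = YOUR_ACCESS_KEY_ID
aws_secret_access_key = YOUR_SECRET_ACCESS_KEY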

Need help with IAM? Review the Controlling Access to Amazon EC2 Resources article.

Then, create a new Docker host with Docker Machine:

$ docker-machine create --driver amazonec2 testdriven-prod

For more, review the Amazon Web Services (AWS) EC2 example from Docker.
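By default, the amazonec2 driver provisions a t2.micro instance in the us-east-1 region. If you need something different, the defaults can be overridden with driver-specific flags; for example (the region and instance type here are purely illustrative):

$ docker-machine create --driver amazonec2 \
    --amazonec2-region us-west-2 \
    --amazonec2-instance-type t2.small \
    testdriven-prod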

Once done, set it as the active host and point the Docker client at it:

$ docker-machine env testdriven-prod
$ eval $(docker-machine env testdriven-prod)

Learn more about the eval command here.
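If you're curious, docker-machine env simply prints the export statements that eval then runs in the current shell. The output looks something like this (the IP and paths will reflect your own machine):

export DOCKER_TLS_VERIFY="1"
export DOCKER_HOST="tcp://<EC2_PUBLIC_IP>:2376"
export DOCKER_CERT_PATH="/Users/you/.docker/machine/machines/testdriven-prod"
export DOCKER_MACHINE_NAME="testdriven-prod"
# Run this command to configure your shell:
# eval $(docker-machine env testdriven-prod)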

Run the following command to view the currently running machines:

$ docker-machine ls
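The new machine should be listed as the active host, indicated by the asterisk in the ACTIVE column. The output looks roughly like this (the URL and Docker version will differ):

NAME              ACTIVE   DRIVER      STATE     URL                          SWARM   DOCKER        ERRORS
testdriven-prod   *        amazonec2   Running   tcp://<EC2_PUBLIC_IP>:2376           v18.03.1-ce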

Create a new compose file called docker-compose-prod.yml and add the contents of docker-compose-dev.yml minus the volumes (there's a sketch of the result below). Also, update the FLASK_ENV environment variable to production.

What would happen if you left the volumes in?
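As for what the end result should look like, here's a rough sketch, assuming docker-compose-dev.yml defines the users and users-db services from the earlier lessons (the build contexts, Dockerfile names, and database port mapping may differ in your project):

version: '3.6'

services:

  users-db:
    build:
      context: ./services/users/project/db
      dockerfile: Dockerfile
    ports:
      - 5435:5432
    environment:
      - POSTGRES_USER=postgres
      - POSTGRES_PASSWORD=postgres

  users:
    build:
      context: ./services/users
      dockerfile: Dockerfile
    ports:
      - 5001:5000
    environment:
      - FLASK_ENV=production
      - APP_SETTINGS=project.config.DevelopmentConfig
      - DATABASE_URL=postgres://postgres:postgres@users-db:5432/users_dev
      - DATABASE_TEST_URL=postgres://postgres:postgres@users-db:5432/users_test
    depends_on:
      - users-db

Note that APP_SETTINGS still points at the development config here; we'll deal with that shortly.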

Spin up the containers, create the database, add the seed, and run the tests:

$ docker-compose -f docker-compose-prod.yml up -d --build

$ docker-compose -f docker-compose-prod.yml run users python manage.py recreate_db

$ docker-compose -f docker-compose-prod.yml run users python manage.py seed_db

$ docker-compose -f docker-compose-prod.yml run users python manage.py test

Add port 5001 to the AWS Security Group.
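If you prefer the AWS CLI to the console, something like this should do the trick (a sketch; by default, Docker Machine assigns instances to a security group named docker-machine):

$ aws ec2 authorize-security-group-ingress \
    --group-name docker-machine \
    --protocol tcp \
    --port 5001 \
    --cidr 0.0.0.0/0

Then, grab the IP associated with the machine: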

$ docker-machine ip testdriven-prod

Test it out in the browser, making sure to replace DOCKER_MACHINE_IP with the actual IP:

  1. http://DOCKER_MACHINE_IP:5001/users/ping
  2. http://DOCKER_MACHINE_IP:5001/users

Config

What about the app config and environment variables? Are these set up right? Are we using the production config? To check, run:

$ docker-compose -f docker-compose-prod.yml run users env

You should see the APP_SETTINGS variable assigned to project.config.DevelopmentConfig.
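Scan the output for the relevant variables; you should see something along these lines (the exact values mirror whatever you copied over from the dev file):

FLASK_ENV=production
APP_SETTINGS=project.config.DevelopmentConfig
DATABASE_URL=postgres://postgres:postgres@users-db:5432/users_dev
DATABASE_TEST_URL=postgres://postgres:postgres@users-db:5432/users_test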

To update this, change the environment variables within docker-compose-prod.yml:

environment:
  - APP_SETTINGS=project.config.ProductionConfig
  - DATABASE_URL=postgres://postgres:postgres@users-db:5432/users_prod
  - DATABASE_TEST_URL=postgres://postgres:postgres@users-db:5432/users_test

Update:

$ docker-compose -f docker-compose-prod.yml up -d

Re-create the db and apply the seed again:

$ docker-compose -f docker-compose-prod.yml run users python manage.py recreate_db

$ docker-compose -f docker-compose-prod.yml run users python manage.py seed_db

Ensure the app is still running and check the environment variables again.

Gunicorn

To use Gunicorn, first add the dependency to the requirements.txt file:

gunicorn==19.8.1

Create a new file in “users” called entrypoint-prod.sh:

#!/bin/sh

echo "Waiting for postgres..."

while ! nc -z users-db 5432; do
  sleep 0.1
done

echo "PostgreSQL started"

gunicorn -b 0.0.0.0:5000 manage:app

Add a new Dockerfile called Dockerfile-prod:

# base image
FROM python:3.6.5-alpine

# install dependencies
RUN apk update && \
    apk add --virtual build-deps gcc python-dev musl-dev && \
    apk add postgresql-dev && \
    apk add netcat-openbsd

# set working directory
WORKDIR /usr/src/app

# add and install requirements
COPY ./requirements.txt /usr/src/app/requirements.txt
RUN pip install -r requirements.txt

# add entrypoint-prod.sh
COPY ./entrypoint-prod.sh /usr/src/app/entrypoint-prod.sh
RUN chmod +x /usr/src/app/entrypoint-prod.sh

# add app
COPY . /usr/src/app

# run server
CMD ["/usr/src/app/entrypoint-prod.sh"]

Then, change the build context for the users service in docker-compose-prod.yml to reference the new Dockerfile:

build:
  context: ./services/users
  dockerfile: Dockerfile-prod

Update:

$ docker-compose -f docker-compose-prod.yml up -d --build

The --build flag is necessary since we need to install the new dependency.

Nginx

Next, let’s get Nginx up and running as a reverse proxy to the web server. Create a new folder called “nginx” in the “services” folder, and then add a Dockerfile-prod file:

FROM nginx:1.15.0-alpine

RUN rm /etc/nginx/conf.d/default.conf
COPY prod.conf /etc/nginx/conf.d

Add a new config file called prod.conf to the “nginx” folder as well:

server {

  listen 80;

  location / {
    proxy_pass        http://users:5000;
    proxy_redirect    default;
    proxy_set_header  Host $host;
    proxy_set_header  X-Real-IP $remote_addr;
    proxy_set_header  X-Forwarded-For $proxy_add_x_forwarded_for;
    proxy_set_header  X-Forwarded-Host $server_name;
  }

}

Add an nginx service to the docker-compose-prod.yml:

nginx:
  build:
    context: ./services/nginx
    dockerfile: Dockerfile-prod
  restart: always
  ports:
    - 80:80
  depends_on:
    - users

Then, remove the published ports from the users service (the ports section) so that port 5000 is only exposed internally, to other containers:

expose:
  - '5000'

It’s important to note that containers on a shared Docker network can reach each other on any port, so explicitly using expose to make port 5000 available to the other containers is technically unnecessary. However, it’s still a good practice since other developers can see at a glance which ports a service is meant to serve on. It’s a form of documentation, in other words.
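Put together, the users service in docker-compose-prod.yml should now look something like this (a sketch; the environment section matches what was configured earlier in the lesson):

users:
  build:
    context: ./services/users
    dockerfile: Dockerfile-prod
  expose:
    - '5000'
  environment:
    - FLASK_ENV=production
    - APP_SETTINGS=project.config.ProductionConfig
    - DATABASE_URL=postgres://postgres:postgres@users-db:5432/users_prod
    - DATABASE_TEST_URL=postgres://postgres:postgres@users-db:5432/users_test
  depends_on:
    - users-db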

Build the image and run the container:

$ docker-compose -f docker-compose-prod.yml up -d --build nginx

Add port 80 to the Security Group on AWS. Test the site in the browser again, this time at http://DOCKER_MACHINE_IP/users.

Let’s update this locally as well. First, add nginx to the docker-compose-dev.yml file:

nginx:
  build:
    context: ./services/nginx
    dockerfile: Dockerfile-dev
  restart: always
  ports:
    - 80:80
  depends_on:
    - users

Add Dockerfile-dev to “nginx”:

FROM nginx:1.15.0-alpine

RUN rm /etc/nginx/conf.d/default.conf
COPY dev.conf /etc/nginx/conf.d

Add dev.conf to “nginx”:

server {

  listen 80;

  location / {
    proxy_pass        http://users:5000;
    proxy_redirect    default;
    proxy_set_header  Host $host;
    proxy_set_header  X-Real-IP $remote_addr;
    proxy_set_header  X-Forwarded-For $proxy_add_x_forwarded_for;
    proxy_set_header  X-Forwarded-Host $server_name;
  }

}

Next, we need to point the Docker client back at the local machine:

$ eval $(docker-machine env -u)
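The -u flag prints the corresponding unset statements, which eval then applies:

unset DOCKER_TLS_VERIFY
unset DOCKER_HOST
unset DOCKER_CERT_PATH
unset DOCKER_MACHINE_NAME
# Run this command to configure your shell:
# eval $(docker-machine env -u)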

Run the nginx container:

$ docker-compose -f docker-compose-dev.yml up -d --build nginx

Test it out at http://localhost/users!

Did you notice that you can access the site locally with or without the port, at both http://localhost/users and http://localhost:5001/users? Why? On prod, though, you can only access the site at http://DOCKER_MACHINE_IP/users. Why is that?