Deployment

Part 1, Lesson 10



With the routes up and tested, let’s get this app deployed!


Sign up for AWS (if necessary) and create an IAM user (again, if necessary), making sure to add the credentials to a ~/.aws/credentials file.
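
If you haven’t created that file before, it’s a plain INI file; the values below are placeholders for your IAM user’s keys:

[default]
aws_access_key_id = YOUR_ACCESS_KEY_ID
aws_secret_access_key = YOUR_SECRET_ACCESS_KEY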

Need help with IAM? Review the Controlling Access to Amazon EC2 Resources article.

Then, create the new host:

$ docker-machine create --driver amazonec2 testdriven-prod

For more, review the Amazon Web Services (AWS) EC2 example from Docker.
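
The amazonec2 driver also takes flags for the region and instance type, among other things, if the defaults don’t suit you. For example (the values shown are just illustrations):

$ docker-machine create --driver amazonec2 \
  --amazonec2-region us-west-1 \
  --amazonec2-instance-type t2.micro \
  testdriven-prod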

Once done, set it as the active host and point the Docker client at it:

$ docker-machine env testdriven-prod
$ eval $(docker-machine env testdriven-prod)
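
The first command only prints the variables the Docker client needs; the eval actually exports them into your shell. The output will look something like this (your IP and cert path will differ):

export DOCKER_TLS_VERIFY="1"
export DOCKER_HOST="tcp://EC2_PUBLIC_IP:2376"
export DOCKER_CERT_PATH="/Users/you/.docker/machine/machines/testdriven-prod"
export DOCKER_MACHINE_NAME="testdriven-prod"
# Run this command to configure your shell:
# eval $(docker-machine env testdriven-prod)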

Run the following command to view the currently running Machines:

$ docker-machine ls
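
Assuming the dev machine from the earlier lessons was created with the virtualbox driver, you should see something along these lines, with the asterisk marking the active host:

NAME              ACTIVE   DRIVER       STATE     URL
testdriven-dev    -        virtualbox   Running   tcp://192.168.99.100:2376
testdriven-prod   *        amazonec2    Running   tcp://EC2_PUBLIC_IP:2376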

Create a new compose file called docker-compose-prod.yml and add the contents of the other compose file, minus the volumes and the FLASK_DEBUG environment variable (alternatively, you could set it to FLASK_DEBUG=0).

What would happen if you left the volumes in?
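
For reference, the users service in docker-compose-prod.yml should end up looking something like this (the service and database names carry over from the earlier lessons, and the dockerfile name is assumed from the dev setup; note that the environment variables still hold the development values, which we’ll fix shortly):

users:
  container_name: users
  build:
    context: ./services/users
    dockerfile: Dockerfile-dev
  ports:
    - 5001:5000
  environment:
    - APP_SETTINGS=project.config.DevelopmentConfig
    - DATABASE_URL=postgres://postgres:postgres@users-db:5432/users_dev
    - DATABASE_TEST_URL=postgres://postgres:postgres@users-db:5432/users_test
  depends_on:
    - users-db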

Spin up the containers, create the database, seed it, and run the tests:

$ docker-compose -f docker-compose-prod.yml up -d --build

$ docker-compose -f docker-compose-prod.yml \
  run users python manage.py recreate_db

$ docker-compose -f docker-compose-prod.yml \
  run users python manage.py seed_db

$ docker-compose -f docker-compose-prod.yml \
  run users python manage.py test

Add port 5001 to the AWS Security Group. Then grab the IP and test the route in the browser.
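
If you need the IP, Docker Machine will print it for you:

$ docker-machine ip testdriven-prod

With port 5001 open, http://DOCKER_MACHINE_PROD_IP:5001/users should respond.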

Config

What about the app config and environment variables? Are these set up right? Are we using the production config? To check, run:

$ docker-compose -f docker-compose-prod.yml run users env

You should see the APP_SETTINGS variable assigned to project.config.DevelopmentConfig.

To update this, change the environment variables within docker-compose-prod.yml:

environment:
  - APP_SETTINGS=project.config.ProductionConfig
  - DATABASE_URL=postgres://postgres:postgres@users-db:5432/users_prod
  - DATABASE_TEST_URL=postgres://postgres:postgres@users-db:5432/users_test

Update:

$ docker-compose -f docker-compose-prod.yml up -d

Re-create the db and apply the seed again:

$ docker-compose -f docker-compose-prod.yml \
  run users python manage.py recreate_db

$ docker-compose -f docker-compose-prod.yml \
  run users python manage.py seed_db

Ensure the app is still running and check the environment variables again.
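
In other words, dump the environment once more:

$ docker-compose -f docker-compose-prod.yml run users env

This time, APP_SETTINGS should be assigned to project.config.ProductionConfig.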

Gunicorn

To use Gunicorn, first add the dependency to the requirements.txt file:

gunicorn==19.7.1

Create a new file in “users” called entrypoint-prod.sh:

#!/bin/sh

echo "Waiting for postgres..."

# block until the users-db container accepts connections on 5432
while ! nc -z users-db 5432; do
  sleep 0.1
done

echo "PostgreSQL started"

gunicorn -b 0.0.0.0:5000 manage:app
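
Optionally, you can prefix that last line with exec so Gunicorn replaces the shell as the container’s main process and receives stop signals from Docker directly:

exec gunicorn -b 0.0.0.0:5000 manage:app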

Update the permissions:

$ chmod +x services/users/entrypoint-prod.sh

Add a new Dockerfile to “users” called Dockerfile-prod:

FROM python:3.6.4

# install environment dependencies
RUN apt-get update -yqq \
  && apt-get install -yqq --no-install-recommends \
    netcat \
  && apt-get -q clean

# set working directory
RUN mkdir -p /usr/src/app
WORKDIR /usr/src/app

# add requirements
COPY ./requirements.txt /usr/src/app/requirements.txt

# install requirements
RUN pip install -r requirements.txt

# add entrypoint-prod.sh
COPY ./entrypoint-prod.sh /usr/src/app/entrypoint-prod.sh

# add app
COPY . /usr/src/app

# run server
CMD ["./entrypoint-prod.sh"]

Then, change the build context for the users service in docker-compose-prod.yml to reference the new Dockerfile:

build:
  context: ./services/users
  dockerfile: Dockerfile-prod

Update:

$ docker-compose -f docker-compose-prod.yml up -d --build

The --build flag is necessary since we need to install the new dependency.

Nginx

Next, let’s get Nginx up and running as a reverse proxy to the web server. Create a new folder called “nginx” in the “services” folder, and then add a Dockerfile:

FROM nginx:1.13.8

RUN rm /etc/nginx/conf.d/default.conf
COPY flask.conf /etc/nginx/conf.d/

Add a new config file called flask.conf to the “nginx” folder as well:

server {

  listen 80;

  location / {
    proxy_pass        http://users:5000;
    proxy_redirect    default;
    proxy_set_header  Host $host;
    proxy_set_header  X-Real-IP $remote_addr;
    proxy_set_header  X-Forwarded-For $proxy_add_x_forwarded_for;
    proxy_set_header  X-Forwarded-Host $server_name;
  }

}

Add an nginx service to docker-compose-prod.yml:

nginx:
  container_name: nginx
  build: ./services/nginx
  restart: always
  ports:
    - 80:80
  depends_on:
    - users

Then, remove the published ports from the users service so that port 5000 is exposed only to other containers:

expose:
  - '5000'

Build the image and run the container:

$ docker-compose -f docker-compose-prod.yml up -d --build nginx

Add port 80 to the Security Group on AWS. Test the site in the browser again, this time at http://DOCKER_MACHINE_PROD_IP/users.
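
You can also check it from the command line:

$ curl "http://$(docker-machine ip testdriven-prod)/users"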

Let’s update this locally as well. First, add nginx to the docker-compose-dev.yml file:

nginx:
  container_name: nginx
  build: ./services/nginx
  restart: always
  ports:
    - 80:80
  depends_on:
    - users

Next, we need to update the active host. To check which host is currently active, run:

$ docker-machine active
testdriven-prod

Change the active machine to testdriven-dev:

$ eval "$(docker-machine env testdriven-dev)"

Run the nginx container:

$ docker-compose -f docker-compose-dev.yml up -d --build nginx

Grab the IP and test it out!
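
Or, again from the command line:

$ curl "http://$(docker-machine ip testdriven-dev)/users"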

Did you notice that you can access the site locally both with and without the port, at http://DOCKER_MACHINE_DEV_IP/users or http://DOCKER_MACHINE_DEV_IP:5001/users? Why? On prod, though, you can only access the site at http://DOCKER_MACHINE_PROD_IP/users. Why?