Deployment

With the routes up and tested, let's get this app deployed!


Follow the instructions here to sign up for AWS (if necessary) and create an IAM user (again, if necessary), making sure to add the credentials to a ~/.aws/credentials file. Then create the new host with Docker Machine:

$ docker-machine create --driver amazonec2 aws

For more, review the Amazon Web Services (AWS) EC2 example from Docker.
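
By default the amazonec2 driver picks the region and instance type for you. If you want to pin those down, the driver accepts flags for both; for example (the region and instance type here are just placeholders, so adjust to taste):

$ docker-machine create --driver amazonec2 \
    --amazonec2-region us-east-1 \
    --amazonec2-instance-type t2.micro \
    aws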

Once done, set it as the active host and point the Docker client at it:

$ docker-machine env aws
$ eval $(docker-machine env aws)
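
The first command only prints the export statements; the eval actually applies them to the current shell. The output will look roughly like this (the IP and cert path will differ on your machine):

export DOCKER_TLS_VERIFY="1"
export DOCKER_HOST="tcp://<EC2-PUBLIC-IP>:2376"
export DOCKER_CERT_PATH="/Users/you/.docker/machine/machines/aws"
export DOCKER_MACHINE_NAME="aws"
# Run this command to configure your shell:
# eval $(docker-machine env aws)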

Run the following command to view the currently running Machines:

$ docker-machine ls

Create a new compose file called docker-compose-prod.yml and add the contents of docker-compose.yml, minus the volumes.

What would happen if you left the volumes in?
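
If you want a reference point, here is a rough sketch of what docker-compose-prod.yml could look like at this stage. The service names and environment variables match the ones used in this section; the build paths, published ports, and healthcheck are assumptions about a typical development compose file for this stack, so copy from your own docker-compose.yml rather than from here. Note that the environment values are still the development ones, which is why the Config section below still reports DevelopmentConfig:

version: '2.1'

services:

  users-db:
    container_name: users-db
    build: ./services/users/project/db  # build path assumed
    ports:
      - 5435:5432  # host port assumed
    environment:
      - POSTGRES_USER=postgres
      - POSTGRES_PASSWORD=postgres
    healthcheck:
      test: exit 0

  users-service:
    container_name: users-service
    build: ./services/users
    # no volumes key here -- in production the code is baked into the image
    ports:
      - 5001:5000
    environment:
      - APP_SETTINGS=project.config.DevelopmentConfig
      - DATABASE_URL=postgres://postgres:postgres@users-db:5432/users_dev
      - DATABASE_TEST_URL=postgres://postgres:postgres@users-db:5432/users_test
    depends_on:
      users-db:
        condition: service_healthy
    links:
      - users-db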

Spin up the containers, create the database, seed it, and run the tests:

$ docker-compose -f docker-compose-prod.yml up -d --build
$ docker-compose -f docker-compose-prod.yml run users-service python manage.py recreate_db
$ docker-compose -f docker-compose-prod.yml run users-service python manage.py seed_db
$ docker-compose -f docker-compose-prod.yml run users-service python manage.py test

Add port 5001 to the Security Group.

Grab the IP and make sure to test in the browser.
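
Both of those steps can also be done from the command line. Docker Machine normally launches the instance into a security group called docker-machine in your default VPC, so, assuming the AWS CLI is configured (and assuming the /users route referenced at the end of this section), something like this should work; if your instance is not in the default VPC, use --group-id instead of --group-name:

$ aws ec2 authorize-security-group-ingress \
    --group-name docker-machine \
    --protocol tcp \
    --port 5001 \
    --cidr 0.0.0.0/0
$ docker-machine ip aws
$ curl http://$(docker-machine ip aws):5001/users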

Config

What about the app config and environment variables? Are these set up right? Are we using the production config? To check, run:

$ docker-compose -f docker-compose-prod.yml run users-service env

You should see the APP_SETTINGS variable still set to project.config.DevelopmentConfig, since those values were copied straight over from the development compose file.

To update this, change the environment variables within docker-compose-prod.yml:

environment:
  - APP_SETTINGS=project.config.ProductionConfig
  - DATABASE_URL=postgres://postgres:postgres@users-db:5432/users_prod
  - DATABASE_TEST_URL=postgres://postgres:postgres@users-db:5432/users_test

Update:

$ docker-compose -f docker-compose-prod.yml up -d

Re-create the db and apply the seed again:

$ docker-compose -f docker-compose-prod.yml run users-service python manage.py recreate_db
$ docker-compose -f docker-compose-prod.yml run users-service python manage.py seed_db

Ensure the app is still running and check the environment variables again.
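
As a quick sanity check, grep for the setting inside the container and hit the service from outside (again assuming a /users route):

$ docker-compose -f docker-compose-prod.yml run users-service env | grep APP_SETTINGS
$ curl http://$(docker-machine ip aws):5001/users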

Gunicorn

To use Gunicorn, first add the dependency to the requirements.txt file:

gunicorn==19.7.1

Then update docker-compose-prod.yml by adding a command key to the users-service:

command: gunicorn -b 0.0.0.0:5000 manage:app

This overrides the CMD in services/users/Dockerfile, python manage.py runserver -h 0.0.0.0, so the app is served by Gunicorn rather than the Flask development server.

Update:

$ docker-compose -f docker-compose-prod.yml up -d --build

The --build flag is necessary since we need to install the new dependency.
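
To confirm that Gunicorn actually took over, check the container logs; you should see Gunicorn's worker boot messages rather than the Flask development server banner:

$ docker-compose -f docker-compose-prod.yml logs users-service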

Nginx

Next, let's get Nginx up and running as a reverse proxy to the web server. Create a new folder called "nginx" in the project root, and then add a Dockerfile:

FROM nginx:1.13.0

# remove the default server config, then add the custom one
RUN rm /etc/nginx/conf.d/default.conf
ADD /flask.conf /etc/nginx/conf.d

Add a new config file called flask.conf to the "nginx" folder as well:

server {

    listen 80;

    location / {
        proxy_pass http://users-service:5000;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }

}

Add an nginx service to the docker-compose-prod.yml:

nginx:
  container_name: nginx
  build: ./nginx/
  restart: always
  ports:
    - 80:80
  depends_on:
    users-service:
      condition: service_started
  links:
    - users-service

Then remove the published ports from the users-service so that port 5000 is only exposed internally, to other containers:

expose:
  - '5000'

Build the image and run the container:

$ docker-compose -f docker-compose-prod.yml up -d --build nginx

Add port 80 to the Security Group on AWS. Test the site in the browser again.
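
As before, this can be scripted with the AWS CLI (same docker-machine security group assumption as above) and verified with curl; note that no port number is needed now, since Nginx is listening on 80:

$ aws ec2 authorize-security-group-ingress \
    --group-name docker-machine \
    --protocol tcp \
    --port 80 \
    --cidr 0.0.0.0/0
$ curl http://$(docker-machine ip aws)/users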

Let's update this locally as well.

First, add nginx to the docker-compose.yml file:

nginx:
  container_name: nginx
  build: ./nginx/
  restart: always
  ports:
    - 80:80
  depends_on:
    users-service:
      condition: service_started
  links:
    - users-service

Next, we need to update the active host. To check which host is currently active, run:

$ docker-machine active
aws

Change the active machine to dev:

$ eval "$(docker-machine env dev)"

Run the nginx container:

$ docker-compose up -d --build nginx

Grab the IP and test it out!
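
For example (assuming your development compose file still publishes the service on host port 5001):

$ docker-machine ip dev
$ curl http://$(docker-machine ip dev)/users        # through Nginx on port 80
$ curl http://$(docker-machine ip dev):5001/users   # straight to the Flask container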

Did you notice that you can access the site locally either with or without the port number - http://YOUR-IP/users or http://YOUR-IP:5001/users? Why?


