Using Docker to isolate my environment has been a game-changer for me and the teams I've been a part of. It's a massive timesaver: new developers install Docker, run docker compose up -d, and they're going. No more long lists of things to install, and no more troubleshooting when they don't install cleanly.

I know there are many different opinions and articles about how to get going with Django on Docker. I'm no expert on the best base image for every situation, so I won't get into that; there is already plenty of discussion and debate in various corners of the web.

My plan for this post is to walk through one way to set up Docker for developing and deploying Django-based web applications with a Vue frontend, using container-based deployments and Github Actions automation.

Base Image

Instead of having a single image, I split things into a base image and a main image so that, in Github Actions, I can leverage some caching. The base image only gets rebuilt when requirements change.

In this Dockerfile.base, I start by using a node image to install all the node dependencies: copy in the files required for the install, then run yarn install.

Then I do the same for the Python requirements, which need a bit more setup. I set PYTHONFAULTHANDLER so Python dumps a traceback on faults, and PYTHONUNBUFFERED so output isn't buffered but sent immediately to the console. I also add a user that will run the processes on the final container, because you don't want to run processes as root. Then I install the required build prerequisites, followed by the pip install.

I finish this base image build by copying the installed node_modules in from the node layer.

# Transient Node Build Image
FROM node:14.16-alpine
WORKDIR /app
COPY .npmrc package.json yarn.lock webpack.config.js .babelrc /app/
RUN yarn install

# Python Runtime Image
FROM python:3.9-alpine3.14

ENV PYTHONFAULTHANDLER=1 \
    PYTHONUNBUFFERED=1

WORKDIR /app

# Install dependencies & setup user
RUN adduser -D appuser && \
    apk add --no-cache --virtual .build-deps g++ gcc libffi-dev musl-dev libevent-dev openssl-dev python3-dev make && \
    apk add --no-cache git postgresql-dev binutils && \
    apk add --no-cache jpeg-dev zlib-dev freetype-dev lcms2-dev openjpeg-dev tiff-dev tk-dev tcl-dev && \
    apk add --no-cache yarn

# Copy requirements
COPY requirements.txt .

RUN pip install -U pip && \
    pip install --no-cache-dir -r requirements.txt && \
    apk del --no-cache .build-deps

# Copy node_modules and yarn
COPY --from=0 /app/node_modules /app/node_modules

Main Image

In this main image, I pull the base image that Github Actions built from the Dockerfile.base config, selected by the BASE_TAG build argument (which defaults to latest).

Specifying GIT_COMMIT, GIT_VERSION, and VERSION as ARG parameters allows those values to be passed in at build time; pairing each ARG with a matching ENV makes them available as environment variables to processes that run on the container.

The rest of this is pretty simple. I copy in the code, set ownership of the copied files to the user I use for process execution, build the frontend bundle, and then run gunicorn.

ARG BASE_TAG=latest
FROM ghcr.io/[repo]/[image-name]:$BASE_TAG

ARG GIT_COMMIT
ARG GIT_VERSION
ARG VERSION

# Re-export the build args so they're available at runtime
ENV GIT_COMMIT=$GIT_COMMIT \
    GIT_VERSION=$GIT_VERSION \
    VERSION=$VERSION

WORKDIR /app

# Copy full source
COPY . .

RUN chown -R appuser:appuser /app

RUN yarn build

USER appuser
CMD gunicorn [your package].wsgi --reload --bind 0.0.0.0:$PORT --threads 4 --log-file -
EXPOSE 8000

Docker Compose Configuration

For local development, I build up the services I need using Docker Compose to avoid having to install things like Redis and Postgres on my local machine. This makes it easy to try out different versions and keeps my laptop relatively clean.

The Postgres image creates the database defined by POSTGRES_DB on boot if it doesn't already exist. I like to map a local .data/ directory to the path Postgres uses for its data rather than using Docker volumes. That makes it easy to swap out different database states when switching between branches with migrations: just move the .data/ directory to another temporarily named folder between reboots.

The healthcheck sections for both Redis and Postgres mean that I can have the Django service wait to come online until those required services are healthy and ready.

Even though the base image has the built frontend requirements, that's really for production deployments. For development, I like to run the webpack dev server to get all the fun hot reloading of CSS/JS changes. Since that runs on a different port, I run it as a standalone service. For this, I can use a stock node image, map the local source directory into the container, and run yarn install and yarn start on bootup.

Finally, I start up the django container using the Dockerfile we've just created. I override default Django settings by defining env variables in the environment section. Here we define REDIS_URL and DATABASE_URL to point at the Docker Compose services. The Django project's settings.py module reads these variables from the environment.

version: '3'

networks:
    localdev:

services:
    postgres:
        image: postgres:13-alpine
        container_name: postgres
        restart: unless-stopped
        environment:
            POSTGRES_DB: myapp
            POSTGRES_USER: myapp
            POSTGRES_PASSWORD: myapp
        volumes:
            - ./.data:/var/lib/postgresql/data/
        ports:
            - "54321:5432"
        networks:
            - localdev
        healthcheck:
            test: ["CMD", "pg_isready", "-U", "myapp"]
            interval: 1s
            timeout: 3s
            retries: 30
    redis:
        image: redis:4.0.14-alpine
        container_name: redis
        restart: unless-stopped
        networks:
            - localdev
        healthcheck:
            test: ["CMD", "redis-cli", "ping"]
            interval: 1s
            timeout: 3s
            retries: 30
    npm:
        image: node:14.16-alpine
        container_name: frontend
        volumes:
            - ./:/app
        working_dir: /app
        networks:
            - localdev
        ports:
            - "8080:8080"
        command: /bin/sh -c 'yarn install && yarn start'
    django:
        image: django
        build:
            context: .
            dockerfile: Dockerfile
        container_name: django
        working_dir: /app
        volumes:
            - ./:/app
        ports:
            - "8000:8000"
        depends_on:
            postgres:
                condition: service_healthy
            redis:
                condition: service_healthy
        environment:
            DATABASE_URL: postgres://myapp:myapp@postgres/myapp
            REDIS_URL: redis://redis:6379
        networks:
            - localdev
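
The compose file hands DATABASE_URL and REDIS_URL to the container, and settings.py picks them up from the environment. A minimal sketch of what that could look like (the parsing helper and defaults are illustrative, not the actual project code; many projects use the dj-database-url package for this instead):

```python
import os
from urllib.parse import urlparse

# Service URLs injected by docker compose locally, or by the platform in production.
DATABASE_URL = os.environ.get("DATABASE_URL", "postgres://myapp:myapp@localhost:5432/myapp")
REDIS_URL = os.environ.get("REDIS_URL", "redis://localhost:6379")

def database_from_url(url):
    """Translate a DATABASE_URL string into a Django DATABASES entry."""
    parts = urlparse(url)
    return {
        "ENGINE": "django.db.backends.postgresql",
        "NAME": parts.path.lstrip("/"),
        "USER": parts.username or "",
        "PASSWORD": parts.password or "",
        "HOST": parts.hostname or "localhost",
        "PORT": parts.port or 5432,
    }

DATABASES = {"default": database_from_url(DATABASE_URL)}
```

Because the hostnames (postgres, redis) are the compose service names, the same settings module works unchanged in local development and on any platform that injects these URLs.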

Github Actions

On Every Push

On every push, changes to the requirements.txt or yarn.lock files trigger building a new base image.

The build-base-image job does the following:

  1. Sets up an output variable called tag that we'll store the image tag in
  2. Checks out the code
  3. Computes the tag output variable based on a hash of the dependencies files
  4. Sets up the Docker Buildx action
  5. Authenticates with the Github container registry
  6. Sets up a cache for the docker layers
  7. Builds and pushes the base image to the registry
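
The tag computed in step 3 is just a content hash of the dependency files, so identical dependencies always map to the same base image and the build can be skipped on a cache hit. A rough Python equivalent of what hashFiles() produces (the helper name is mine, and the real algorithm differs in detail):

```python
import hashlib

def dependency_tag(paths):
    """Hash the dependency files so the tag changes only when they do."""
    digest = hashlib.sha256()
    for path in sorted(paths):  # stable order so the tag is deterministic
        with open(path, "rb") as f:
            digest.update(f.read())
    return digest.hexdigest()[:12]
```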

If building and publishing the base image succeeds, we then run the lint-and-test and build-and-push-images jobs in parallel. Running them in parallel saves some build time at the expense of potentially wasted compute spent building images when lint-and-test fails.

The lint-and-test job is what you'd expect from most CI setups:

  1. Setup services
  2. Check out code
  3. Install dependencies
  4. Run lints and tests
  5. Upload code coverage reports

The build-and-push-images job takes the current code, builds an image on top of the latest base image, and pushes it to the container registry:

  1. Checks out code
  2. Creates tags for the image
  3. Sets up the Docker Buildx action
  4. Authenticates with the Github container registry
  5. Sets up a cache for the docker layers
  6. Builds and pushes the production image to the registry
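
The tag-creation step derives several tags from the Git ref using shell parameter expansion, which can be dense to read. The same logic in Python, for clarity (the function name is mine):

```python
import re

def docker_tags(github_ref, github_sha):
    """Mirror the shell expansions in the 'Prep Tag Data' step."""
    ref = re.sub(r"^refs/[^/]*/", "", github_ref)     # ${GITHUB_REF#refs/*/}
    version = re.sub(r"[^A-Za-z0-9.\-_]", "", ref)    # strip unsafe characters
    return {
        "version": version,
        "minor": version.rsplit(".", 1)[0] + ".x",    # ${VERSION%.*}.x
        "major": version.rsplit(".", 2)[0] + ".x.x",  # ${VERSION%.*.*}.x.x
        "sha7": github_sha[:7],                        # ${GITHUB_SHA::7}
    }
```

So a tag push of refs/tags/v1.2.3 yields version v1.2.3, minor v1.2.x, and major v1.x.x, while a branch push yields the sanitized branch name as the version.
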
name: Test and Build
on:
  push:
    branches: "**" # All Branches
    tags-ignore: "**" # Releases also push tags (and so trigger both events). Don't double release these images.
jobs:
  build-base-image:
    name: Build Base Image
    runs-on: ubuntu-latest
    outputs:
      tag: ${{ steps.base-tags.outputs.tag }}
    steps:
      - name: Checkout
        uses: actions/checkout@v2

      - name: Prep Tag Data
        id: base-tags
        run: echo "::set-output name=tag::${{ hashFiles('**/requirements.txt', '**/yarn.lock') }}";

      - name: Set up Docker Buildx
        uses: docker/setup-buildx-action@v1

      - name: Login to GH Container Registry
        uses: docker/login-action@v1
        with:
          registry: ghcr.io
          username: ${{ secrets.CR_UN }}
          password: ${{ secrets.CR_PAT }}

      - name: Cache Docker layers
        uses: actions/cache@v2
        id: base-image-cache
        with:
          path: /tmp/.buildx-cache
          key: ${{ runner.os }}-buildx-${{ steps.base-tags.outputs.tag }}-v2

      - name: Build and Push Base Image
        uses: docker/build-push-action@v2
        if: steps.base-image-cache.outputs.cache-hit != 'true'
        with:
          context: .
          file: ./.docker/Dockerfile.base
          push: true
          # Tag :latest and a hash of requirements.txt + yarn.lock
          tags: |
            ghcr.io/${{ github.repository }}-base:${{ steps.base-tags.outputs.tag }}
            ghcr.io/${{ github.repository }}-base:latest
          cache-from: type=local,src=/tmp/.buildx-cache
          cache-to: type=local,dest=/tmp/.buildx-cache

  lint-and-test:
    name: Linting and Testing
    runs-on: ubuntu-18.04
    needs: build-base-image
    services:
      postgres:
        image: postgres
        env:
          POSTGRES_DB: postgres
          POSTGRES_USER: postgres
          POSTGRES_PASSWORD: postgres
        ports:
          - 5432:5432
        options: --health-cmd pg_isready --health-interval 10s --health-timeout 5s --health-retries 5
      redis:
        image: redis
        ports:
          - 6379:6379
        options: --entrypoint redis-server
    env:
      PYTHONDONTWRITEBYTECODE: 1
      PYTHON_VERSION: 3.9
      DATABASE_SSL: "off"
      DATABASE_URL: "postgresql://postgres:postgres@localhost:5432/postgres"

    steps:
      -
        name: Cancel Previous Runs
        uses: styfle/cancel-workflow-action@0.8.0
        with:
          access_token: ${{ github.token }}

      - uses: actions/checkout@v2

      - name: Setup Python
        uses: actions/setup-python@v1
        with:
          python-version: ${{env.PYTHON_VERSION}}

      - name: Setup Python Cache
        uses: actions/cache@v2
        id: cache-venv  # name for referring later
        with:
            path: /opt/hostedtoolcache/Python/
            key: v2-${{ runner.os }}-python-${{env.PYTHON_VERSION}}-venv-${{ hashFiles('**/requirements*.txt') }}
            restore-keys: |
                v2-${{ runner.os }}-python-${{env.PYTHON_VERSION}}-venv-

      - name: Install Python Dependencies
        if: steps.cache-venv.outputs.cache-hit != 'true'
        run: pip install -r requirements.txt

      - name: Print Versions
        run: pip freeze

      - name: Get Yarn Cache Directory Path
        id: yarn-cache-dir-path
        run: echo "::set-output name=dir::$(yarn cache dir)"

      - name: Cache Yarn
        uses: actions/cache@v2
        id: yarn-cache # use this to check for `cache-hit` (`steps.yarn-cache.outputs.cache-hit != 'true'`)
        with:
          path: ${{ steps.yarn-cache-dir-path.outputs.dir }}
          key: ${{ runner.os }}-yarn-${{ hashFiles('**/yarn.lock') }}
          restore-keys: |
            ${{ runner.os }}-yarn-

      - name: Install
        run: yarn --prefer-offline

      - name: Test Frontend
        run: yarn test

      - name: Build Frontend
        run: yarn build

      - name: Collect Static
        run: python manage.py collectstatic --noinput

      - name: Linting
        run: flake8

      - name: Checking for Missing Migrations
        run: python manage.py makemigrations --check --dry-run

      - name: Running Python Tests
        run: coverage run -m pytest -vv --nomigrations && coverage xml

      - name: Uploading Coverage Report
        uses: codecov/codecov-action@v2
        with:
          token: ${{ secrets.CODECOV_TOKEN }}
          files: ./coverage.xml
          fail_ci_if_error: true

  build-and-push-images:
    name: Build and Push Images
    needs: build-base-image
    runs-on: ubuntu-latest
    steps:
      - name: Checkout
        uses: actions/checkout@v2

      - name: Prep Tag Data
        id: tags
        run: |
          export REF=${GITHUB_REF#refs/*/};
          export VERSION=${REF//[^[:alpha:][:digit:]\.\-\_]/};
          echo "::set-output name=version::$VERSION";
          echo "::set-output name=minor::${VERSION%.*}.x";
          echo "::set-output name=major::${VERSION%.*.*}.x.x";
          echo "::set-output name=sha7::${GITHUB_SHA::7}";

      - name: Set up Docker Buildx
        uses: docker/setup-buildx-action@v1

      - name: Login to GH Container Registry
        uses: docker/login-action@v1
        with:
          registry: ghcr.io
          username: ${{ secrets.CR_UN }}
          password: ${{ secrets.CR_PAT }}

      - name: Cache Docker layers
        uses: actions/cache@v2
        with:
          path: /tmp/.buildx-cache
          key: ${{ runner.os }}-buildx-${{ github.sha }}
          restore-keys: |
            ${{ runner.os }}-buildx-

      - name: Build and Push Dev Images
        uses: docker/build-push-action@v2
        with:
          context: .
          file: ./.docker/Dockerfile
          push: true
          # Tag :branchname, and :commit-sha
          tags: |
            ghcr.io/${{ github.repository }}:${{ steps.tags.outputs.version }}
            ghcr.io/${{ github.repository }}:${{ steps.tags.outputs.sha7 }}
          build-args: |
            BASE_TAG=${{ needs.build-base-image.outputs.tag }}
            GIT_COMMIT=${{ github.sha }}
            GIT_VERSION=${{ github.ref }}
            VERSION=${{ steps.tags.outputs.sha7 }}
          cache-from: type=local,src=/tmp/.buildx-cache
          cache-to: type=local,dest=/tmp/.buildx-cache

  deploy:
    name: Deploy QA
    needs: [build-and-push-images, lint-and-test]
    # Only run on pushes to master
    if: ${{ github.event.ref == 'refs/heads/master' }}
    runs-on: ubuntu-latest
    env:
      HEROKU_API_KEY: ${{ secrets.HEROKU_API_KEY }}
    steps:
      - name: Checkout
        uses: actions/checkout@v2

      - name: Set up Docker Buildx
        uses: docker/setup-buildx-action@v1

      - name: Login to GH Container Registry
        uses: docker/login-action@v1
        with:
          registry: ghcr.io
          username: ${{ secrets.CR_UN }}
          password: ${{ secrets.CR_PAT }}

      - name: Install Heroku
        run: curl https://cli-assets.heroku.com/install.sh | sh

      - name: Heroku Container Login
        run: heroku container:login

      - name: Push All Processes
        run: heroku container:push -R -a ${{ secrets.HEROKU_APP_NAME }} --arg IMAGE=master
        working-directory: ./deploy

      - name: Release
        run: heroku container:release --app ${{ secrets.HEROKU_APP_NAME }} web worker release

A keen eye will notice that the Push All Processes step specifies a working-directory. In this ./deploy path, we have the following Dockerfiles:

  • Dockerfile.release
  • Dockerfile.web
  • Dockerfile.worker

The heroku container:push -R command uses each Dockerfile.* definition to create a container for each process/service we need. These are very simple Dockerfiles that all start from the image we just pushed; they exist only to run different commands.

The web Dockerfile is the main one and runs the web dynos. You might not need the worker, but I typically find it helpful to have a django-rq worker available to offload tasks from the web request/response cycle. Finally, the release Dockerfile runs release scripts, most commonly a python manage.py migrate command to migrate the database on release.
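
As a sketch of what the worker buys you, a task and the enqueue call might look like this (the module, function, and queue names are illustrative, not from the project in this post; django-rq must be installed and pointed at REDIS_URL):

```python
# tasks.py -- work we don't want blocking the request/response cycle.

def build_welcome_email(username):
    """Pure helper, so the task is easy to test without Redis or Django."""
    return {
        "subject": f"Welcome, {username}!",
        "body": f"Hi {username}, thanks for signing up.",
    }

def send_welcome_email(user_id, username):
    message = build_welcome_email(username)
    # ... hand `message` to the real mail backend here ...
    return message

# In a view, enqueue instead of calling inline (assumes django-rq is configured):
#   import django_rq
#   django_rq.enqueue(send_welcome_email, user.id, user.username)
```

The rqworker dyno picks these jobs up off Redis, so slow work never holds a web request open.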

Dockerfile.release:

ARG IMAGE=latest
FROM ghcr.io/[org]/[repo]:${IMAGE}
CMD python manage.py migrate

Dockerfile.web:

ARG IMAGE=latest
FROM ghcr.io/[org]/[repo]:${IMAGE}

Dockerfile.worker:

ARG IMAGE=latest
FROM ghcr.io/[org]/[repo]:${IMAGE}
CMD python manage.py rqworker [queue1] [queue2]

Additional Steps

So far, I've outlined a development to deployment pipeline that deploys code when it lands on master after lints and tests pass. If you only need a single environment, then this pipeline will suffice. However, as your project grows and you desire more rigor in the process, you may very well end up with a "staging" or "qa" environment where QA teams can run integration and regression testing and product owners can do acceptance testing.

Once you arrive at this point, you can then leverage the Releases feature of Github to "promote" the image to your production environment. The auto-deploys from master will continue to happen, but that environment now assumes the role of "staging."

You would then create a new Heroku app, add the HEROKU_PROD_APP_NAME secret to your repository, and add an additional workflow to your .github/workflows folder.

Let's call this one prod.yml:

name: Promote Tag and Production Release
on:
  release:
    types: [published]
jobs:
  promote-tag:
    name: Promote Docker Tag
    runs-on: ubuntu-18.04
    outputs:
      version: ${{ steps.tags.outputs.version }}
    steps:
      -
        name: Prep Tag Data
        id: tags
        run: |
          export REF=${GITHUB_REF#refs/*/};
          export VERSION=${REF//[^[:alpha:][:digit:]\.\-\_]/};
          echo "::set-output name=version::$VERSION";
          echo "::set-output name=minor::${VERSION%.*}.x";
          echo "::set-output name=major::${VERSION%.*.*}.x.x";
          echo "::set-output name=sha7::${GITHUB_SHA::7}";
      -
        name: Set up Docker Buildx
        uses: docker/setup-buildx-action@v1
      -
        name: Login to GH Container Registry
        uses: docker/login-action@v1
        with:
          registry: ghcr.io
          username: ${{ secrets.CR_UN }}
          password: ${{ secrets.CR_PAT }}
      -
        name: Pull Verified Image
        run: docker pull ghcr.io/${{ github.repository }}:${{ steps.tags.outputs.sha7 }}
      -
        name: Tag Verified Image
        run: |
          docker image tag ghcr.io/${{ github.repository }}:${{ steps.tags.outputs.sha7 }} ghcr.io/${{ github.repository }}:latest
          docker image tag ghcr.io/${{ github.repository }}:${{ steps.tags.outputs.sha7 }} ghcr.io/${{ github.repository }}:${{ steps.tags.outputs.version }}
          docker image tag ghcr.io/${{ github.repository }}:${{ steps.tags.outputs.sha7 }} ghcr.io/${{ github.repository }}:${{ steps.tags.outputs.minor }}
          docker image tag ghcr.io/${{ github.repository }}:${{ steps.tags.outputs.sha7 }} ghcr.io/${{ github.repository }}:${{ steps.tags.outputs.major }}
      -
        name: Push Promoted Image Tags
        run: docker image push --all-tags ghcr.io/${{ github.repository }}
  deploy:
    name: Deploy
    needs: promote-tag
    runs-on: ubuntu-latest
    env:
      VERSION: ${{ needs.promote-tag.outputs.version }}
    steps:
      - name: Setup Docker Buildx
        uses: docker/setup-buildx-action@v1

      - name: Login to GH Container Registry
        uses: docker/login-action@v1
        with:
          registry: ghcr.io
          username: ${{ secrets.CR_UN }}
          password: ${{ secrets.CR_PAT }}

      - name: Install Heroku
        run: curl https://cli-assets.heroku.com/install.sh | sh

      - name: Heroku Container Login
        run: heroku container:login

      - name: Push All Processes
        run: heroku container:push -R -a ${{ secrets.HEROKU_PROD_APP_NAME }} --arg IMAGE=$VERSION
        working-directory: ./deploy

      - name: Release
        run: heroku container:release -a ${{ secrets.HEROKU_PROD_APP_NAME }} web worker release

As you can tell, there is a lot of duplication in these Github Actions steps. The new Github Composite Actions feature should be able to tidy this up. I'll save that for a future post.