Deploy Python applications with Docker, Makefile and Virtualenv

The process of taking code from development, to staging, to production is always a challenge. It seems that on every project I’ve worked on, and even on different iterations of the same project, that process has changed as teams try to identify the fastest and easiest way to ensure that code will run in any environment. Recently, consensus seems to have coalesced around Docker. Docker is an almost magical solution that lets you use prebuilt images of thousands of different software setups as base environments for your application. The learning curve is short, and once you’ve wrapped your head around it, it makes some of the most tedious parts of development (building and deploying environments) blazingly fast and reliable.

I’m primarily a Python developer. For years, my go-to for deploying Python code has been a virtual environment, with pip to install packages and a Makefile to invoke the virtual environment and call the application. A lot of the discussion around Docker suggests that virtual environments are unnecessary, and that you can skip the effort (and megabytes) of managing packages in a virtual environment and just install them directly in the Docker container. For the most part this is true, but there are edge cases where you’ll still need a virtual environment to install packages, and for the sake of continuity with older versions of an application, maintaining your virtual environment setup just makes sense.

So, if you find yourself compelled to adopt Docker but aren’t ready to give up your virtual environment, here is an easy way to integrate the two. I’ll show how to set up your Dockerfile to call a Makefile that builds your virtual environment, and how the Makefile then invokes that virtual environment when you run your application.

FROM python:3.7.3-alpine3.9
RUN apk add build-base
RUN apk update && apk add postgresql-dev gcc python3-dev musl-dev
COPY requirements.txt /myapp/requirements.txt
COPY Makefile /myapp/Makefile
COPY myapp.py /myapp/myapp.py
RUN python3 -m pip install virtualenv
RUN cd myapp && make venv
Dockerfile
# Build the virtual environment in .py unless it already exists,
# then install the packages from requirements.txt into it.
venv:
	if [ -d .py ] ; \
	then \
		echo "virtualenv already built, skipping…"; \
	else \
		python3 -m venv .py; \
		.py/bin/python3 -m pip install -r requirements.txt; \
	fi

# Run the application with the virtualenv's interpreter; depending on
# venv ensures the environment is built (or reused) first.
myapp_run: venv
	.py/bin/python3 myapp.py
Makefile
# Import a couple of the packages installed in the virtualenv to show
# that they resolve correctly; the app itself just prints a greeting.
import numpy as np
from pandas import DataFrame, Series
import pandas as pd

print("Hello World")
myapp.py
python-dateutil==2.7.5
numpy
pandas==0.23.4
psycopg2==2.7.6.1
urllib3==1.24.1

When you build your image from this Dockerfile, you first need to install Alpine’s build-base package, which includes the make utility. You can also install the system-level dependencies your Python packages need, as we do here with postgresql-dev and its build dependencies for psycopg2. The Dockerfile then copies requirements.txt, the Makefile, and myapp.py into the image; in a more real-life setup, you might instead clone your application’s git repository at this step. The last two RUN instructions install virtualenv and call the Makefile to build the virtual environment.
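
If you want to try this out, the build itself is a standard docker build run from the directory containing the four files above; the image tag myapp below is just an example name, not something the files require:

# Build the image; "myapp" is an arbitrary example tag
docker build -t myapp .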

The venv target of the Makefile checks whether a virtual environment already exists. If the .py directory is there, it reports that and skips the build; if not, it creates a new virtual environment in .py and installs the packages from requirements.txt. Any other target that lists venv as a prerequisite, like myapp_run here, gets the virtual environment built (or reused) before its own commands run.
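
Because of that guard, you can invoke the venv target as often as you like; the first run builds the environment and any later run simply reuses it (a hypothetical session inside /myapp, for illustration):

# first invocation creates .py/ and installs requirements.txt
make venv
# a second invocation finds .py/ and only prints the "skipping" message
make venv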

Once you’ve built the image and launched a container from it, you can run make myapp_run to start your application with the packages installed in your virtual environment.
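
A one-off run could look something like this; again, the myapp tag is just the example name used in the build step above:

# Run the application target inside the container; -w sets the working
# directory to /myapp, where the Makefile and virtualenv live
docker run --rm -w /myapp myapp make myapp_run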