It has been about a year since I started using a virtual server on my laptop to locally host my blogging efforts with Pelican. Up to now I have used a combination of Vagrant (with VirtualBox in the background) and a Puppet-built CentOS VM. At work we are increasingly introducing the concept of containerised services in the form of Docker containers, and I felt that revisiting my blogging setup with a view to using a container would be a great way of getting to know the technology a little better.
I should begin by stating that, although you will frequently see the question "What is better, Docker or Vagrant?", it is not really a fair comparison. The two technologies fulfil different use cases and overlap only in a minimal way.
Vagrant is generally more focused on providing a complete development environment, therefore your Vagrant box might provide a web server and a database within the same box.
Docker is an application oriented virtual environment with the Docker container only providing what is absolutely necessary for the application to run, for example just a web server. Should you require a database then that would be provided by a separate Docker container.
So essentially, I am moving from a virtual machine focused on providing a whole system to one that provides only bare essentials for getting the job done.
As there are hundreds of guides out there on how to install and get going with Docker, I am not going to repeat my experience for you. Probably the most helpful thing I could do would be to go ahead and deconstruct my Dockerfile, so here goes.
This container is based on a clean Debian Linux distribution. The `latest` tag indicates that this will be the latest stable release (`debian:jessie` at the time of writing). The `MAINTAINER` field is just metadata indicating who the author is; quite self explanatory really!
```dockerfile
FROM debian:latest
MAINTAINER Chris Ramsay <email@example.com>
```
This next block gets the machine to run a general software update and, in addition, install some basic requirements: Python, pip (a Python package manager), Git and some development tools; those tools are purely there to assist in building some of the packages later.
```dockerfile
# Update & add prerequisites
RUN apt-get -y update && apt-get install -y \
    python \
    python-dev \
    python-pip \
    git
```
Now that all the Python prerequisites are in place it is time to go ahead and add the packages that will be required for Pelican, some of the Pelican plugins and the build and deployment process, which I orchestrate using the Fabric Python library. Note that I also include a custom `.bashrc` file so that I can give the terminal a nice command prompt and some handy aliases.
```dockerfile
WORKDIR /srv
ADD requirements.txt /srv/requirements.txt
ADD files/bashrc /root/.bashrc
```
The `requirements.txt` file that is brought in by the `ADD` command looks like this:
```text
Fabric
Jinja2
Markdown
MarkupSafe
Pygments
Unidecode
argparse
beautifulsoup4
blinker
docutils
ecdsa
feedgenerator
paramiko
pelican
pycrypto
python-dateutil
pytz
six
smartypants
typogrify
wsgiref
```
Below, the final block uses Python's `pip` package manager to install the packages from the requirements file above:
```dockerfile
RUN pip install -r requirements.txt
```
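As for the Fabric library mentioned earlier: the build and deployment tasks live in a fabfile, which is not shown in this post. Purely as an illustration, the build task boils down to something like the sketch below; the task names, paths and settings file are examples, and the standard library's `subprocess` stands in for Fabric's `local()` so the sketch runs on its own.

```python
# Illustrative sketch only: these names and paths are examples, not the
# real fabfile. In an actual fabfile the command would be run via
# Fabric's local(); plain subprocess keeps this self-contained.
import subprocess


def pelican_build_cmd(content='content', output='output',
                      settings='pelicanconf.py'):
    """Assemble the Pelican build command as an argument list."""
    return ['pelican', content, '-o', output, '-s', settings]


def build():
    """Regenerate the site (requires pelican on the PATH)."""
    subprocess.check_call(pelican_build_cmd())
```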
If we have got this far successfully, it is time to put a bunch of files in the right place in the container OS, so that I can securely push code changes to remote Git repositories and digitally sign commits using GnuPG. These important files are never stored in the Docker image itself, as they would then be available to anyone with the ability to download my images and run Docker. To make them accessible to the Docker instance, they are mounted in at run time from the host machine:
```shell
-v $wd/git/.gitconfig:/root/.gitconfig \
-v $wd/gnupg:/root/.gnupg \
-v $wd/ssh:/root/.ssh \
```
The three volume mounts take place in a handy little script, described in the next section. That is pretty much all there is to it; just the most basic things that I need to spend happy hours playing with the blog.
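For completeness, building the image itself is a single command, run from the directory containing the Dockerfile; the tag here is simply the one my run script expects:

```shell
# Build and tag the image from the current directory's Dockerfile
docker build -t chrisramsay/docker-pelicanbox:latest .
```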
I have created a small shell script for when it comes to actually running the container; it looks something like this:
```shell
#!/bin/bash
###
# Add more mounts for your projects after the git/gnupg/ssh ones
# e.g. -v ~/stuff/myproject:/project \
wd=$(pwd)
docker rm -f pelicanbox
docker run \
    --name pelicanbox \
    -p 80:8000 \
    -v $wd/git/.gitconfig:/root/.gitconfig \
    -v $wd/gnupg:/root/.gnupg \
    -v $wd/ssh:/root/.ssh \
    -v ~/work/blog:/project \
    -v ~/extras/pelican-plugins:/project/pelican-plugins \
    -v ~/extras/pelican-themes:/project/pelican-themes \
    -ti \
    chrisramsay/docker-pelicanbox:latest \
    /bin/bash
```
The script above just saves me the hassle of remembering what to type on the command line each time. Once I am logged into the container, I just navigate to the build directory and run the command to serve the site over HTTP, then point the browser at (in my case) 192.168.99.100.
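The serve step itself looks something like the following; the paths and settings file are examples from my setup, `python -m pelican.server` is Pelican's built-in development server, and port 8000 is the one the run script maps to port 80 on the host:

```shell
# Inside the container: build the site, then serve it on port 8000
cd /project
pelican content -o output -s pelicanconf.py
cd output && python -m pelican.server 8000
```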
In summary, my early days with Docker have gone well, and it is beginning to prove itself a lightweight replacement for the Vagrant boxes I used to run. As a bonus, I have found that my laptop battery is lasting longer too!