Using Drone CI to build and push images to Quay.io

As my experimentation with Drone continues, the latest task was to get Docker images building and published to Quay.io (more on why we're using Quay here).

Drone uses a .drone.yml file to determine what happens in response to a `git push` (or other events). We tend to use the build section (not shown below) for running our tests, and the publish section for building and publishing the image from a Dockerfile. Read on for an excerpt from the configuration of a single-page JS app I helped deploy today.
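
As a rough illustration (this isn't the post's actual excerpt; the repository path, secret names, and plugin parameters below are placeholders and can differ between Drone versions), the publish section takes roughly this shape:

    # Hypothetical publish section -- registry path, secrets, and parameter
    # names are placeholders and may vary by Drone version.
    publish:
      docker:                      # Drone's Docker publish plugin
        registry: quay.io
        repo: quay.io/example-org/example-app
        tag: latest
        username: $$QUAY_USERNAME  # injected from Drone's secret store
        password: $$QUAY_PASSWORD
        when:
          branch: master           # only publish builds of the main branch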

Read More

Example Drone CI Kubernetes manifests

I've been evaluating Drone for CI at work, with the goal of getting it running on Kubernetes. I figured I'd share some very preliminary manifests, for anyone else who may be tinkering. How to use these is outside the scope of this article, but Google Container Engine is an easy way to get going.

Read on for some minimal Kubernetes manifests for Drone CI.
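
To give a flavor of what "minimal" means here (the complete manifests are in the post itself; the image tag, port, and labels below are placeholders, and Drone's real configuration such as SCM credentials would come in as environment variables that I've left out), a stripped-down server Deployment and Service looks roughly like this:

    # Hypothetical sketch of a Drone server Deployment and Service.
    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: drone-server
    spec:
      replicas: 1
      selector:
        matchLabels:
          app: drone-server
      template:
        metadata:
          labels:
            app: drone-server
        spec:
          containers:
            - name: drone-server
              image: drone/drone:0.4      # placeholder image tag
              ports:
                - containerPort: 8000
    ---
    apiVersion: v1
    kind: Service
    metadata:
      name: drone-server
    spec:
      type: LoadBalancer                  # easy external access on Container Engine
      selector:
        app: drone-server
      ports:
        - port: 80
          targetPort: 8000

Saved to a file, something like `kubectl create -f drone-server.yaml` should be enough to get a server pod and an external endpoint.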

Read More

The journey to Quay.io - An introspection and review

We've been putting together version 2.0 of our Continuous Integration/Deploy system at work, which led to me kicking the tires of a bunch of different build/test systems. The goal was to put together a CI/CD pipeline that the entire team would feel comfortable using. Our deploy target was a number of Kubernetes clusters.

With many of our services already adapted to work with Docker and Docker Compose, we were ready to figure out which combination of systems would build and test our images.

Read More

Networked, multi-container image crawling with Docker and fig

Like many, I’ve been playing with Docker as it has been developing. I felt reasonably comfortable creating images and running isolated containers, but I had yet to tinker with automatically provisioning and networking multiple containers together.

To remedy this, I spent a weekend writing a simple distributed, dockerized image crawler using Twisted, Docker, and fig. The full source (under a BSD license) can be found in the GitHub repo.

The README covers a lot of this in more detail, but the application is composed of three container types:

  • An HTTP web API for enqueuing jobs. For the sake of this example, you only run one of these, but it’d be easy to adapt the code to support multiple instances.
  • A worker container that carries out the image crawling. You can run as many of these as you’d like.
  • A Redis container that is used for tracking job state and results.

The worker daemons connect to the HTTP web API container over ZeroMQ, PULLing jobs that the HTTP daemon PUSHes. If this were anything but a toy app, you’d probably want a proper message broker that could offer stronger delivery and persistence guarantees.
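
To sketch how fig wires these three container types together (this isn’t the repo’s actual fig.yml; the service names, build paths, and ports are placeholders), the layout is roughly:

    # Hypothetical fig.yml sketch -- service names, build paths, and ports
    # are placeholders rather than the repo's real configuration.
    api:
      build: ./api        # HTTP web API; PUSHes jobs to workers over ZeroMQ
      ports:
        - "8080:8080"     # job-submission endpoint exposed on the host
      links:
        - redis
    worker:
      build: ./worker     # crawling worker; PULLs jobs from the API
      links:
        - api             # link gives the worker a hostname for the ZeroMQ socket
        - redis
    redis:
      image: redis        # shared job state and results

With something like that in place, `fig up` brings the stack up and `fig scale worker=4` adds extra workers.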

Assorted Observations

  • I may have been doing something wrong, but boot2docker seems to be pretty rough with volume mounting. It looks like it’d be pretty easy to mount a directory within the boot2docker VM, but the host machine->container sharing didn’t work in real-time. I expect this will improve with time, or I could have missed something.
  • fig is a great idea, and is very handy. I’m looking forward to seeing where this concept is taken over the next few years.
  • The official python:2.7 image on Docker Hub serves as a great starting point. They’ve got all sorts of other versions and alternative implementations (PyPy) covered.