Example Drone CI Kubernetes manifests

I've been evaluating Drone for CI at work, with the goal of getting it running on Kubernetes. I figured I'd share some very preliminary manifests for anyone else who may be tinkering. How to use these is outside the scope of this article, but Google Container Engine is an easy way to get going.

Read on for some minimal Kubernetes manifests for Drone CI.

Where does Google Cloud fit in the IaaS market?

After spending the last five years working almost exclusively within the Amazon Web Services ecosystem, I've spent much of 2015 on Google Cloud (GC) work. This wasn't necessarily a conscious decision, but more a side effect of a new job.

After making the switch, my initial impressions were lukewarm. In many ways, Google Cloud is following in the footsteps of AWS, but trails by a substantial margin. As recently as Q1 2015, it was difficult to anticipate what differentiating niche they'd carve out. Sure, Google was able to look at some of the mistakes AWS made and do slightly better, but I just didn't see a lot that would make me recommend GC over AWS for general-purpose use cases.

However, the last few months have brought some interesting developments that have brightened my outlook on Google Cloud's future. Read on to see the madness explained...

The journey to Quay.io - An introspection and review

We've been putting together version 2.0 of our Continuous Integration/Deploy system at work, which led to me kicking the tires of a bunch of different build/test systems. The goal was to put together a CI/CD pipeline that the entire team would feel comfortable using. Our deploy target was a number of Kubernetes clusters.

With many of our services already adapted to work with Docker and Docker Compose, we were ready to figure out which combination of systems would build and test our images.

Linode NextGen

Today, Linode announced the completion of their upgrade effort, dubbed “Linode NextGen”. Upgrading an international fleet of servers is nothing to sneeze at, but they pulled it off with flying colors.

In the last few months, we saw Linode:

  • Double the amount of RAM per instance
  • Bump all instances up to eight virtual cores (from four)
  • Invest heavily in improving their network
  • Raise the instance outbound cap by 5x
  • Increase outbound monthly transfer by 10x

They have allowed much higher usage of their resources without compromising on performance. This wasn’t a matter of just upping the quotas and calling it a day.

Why does it matter?

One need only take a look at lowendbox or ServerBear to see that there are plenty of affordable VPS providers. Linode is still nowhere close to being the cheapest, but that’s not really Linode’s game. They’re going to give you something a little faster, a little roomier, and they’re going to keep you happy with their support. While it’s entirely possible you’ll find someone with similar specs, you’ll be hard pressed to find a competitor with the strength of this offering from top to bottom (price, hardware, network, service/support).

Linode has traditionally been a little more expensive, very developer-centric, and has (in my experience) had a pretty good customer service story. This latest round of upgrades doesn’t push Linode down into the “budget” category (nor should it), but it does make a good chunk of their competitors in the same category/price range look inadequate. For example, after these adjustments I’m not sure how I could justify using Rackspace for my own purposes.

We all win

Regardless of whether you use Linode or even like them, let’s be clear about one thing: When upgrades and bigger jumps like this happen, we all win. Other providers are going to look at this and will have to decide whether their current offerings need a shot in the arm. Linode is no industry juggernaut, but they are well known enough for this to cause a few ripples.

Let’s sit back and see who makes the next big jump.

Amazon Route 53 DNS failover

While it is no longer shiny and new, I just recently got a chance to sit down and play with Amazon Route 53’s DNS failover feature. So far, I have found it to be simple to set up and very useful for cases where DNS failover is acceptable.

My usage case

I run EVE Market Data Relay (EMDR), a distributed system for relaying EVE Online market data. All pieces of the infrastructure have at least one redundant copy, and the only single point of failure is the DNS service itself. We can afford to lose a little bit of data during failover, but a complete outage is something we can’t have.

Sitting at the top of the system are two HTTP gateways on different machines at different ISPs. These are set up as a weighted record set, with the two gateways weighted 50/50 (requests are divided evenly).
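
For reference, here’s a minimal boto3 sketch of what such a weighted record set could look like. The hosted zone ID, hostname, and IP addresses below are placeholders, not EMDR’s real values.

```python
# Minimal sketch: two weighted A records for the same name, split 50/50.
# Zone ID, hostname, and IPs are placeholders, not EMDR's real values.
import boto3

route53 = boto3.client("route53")

HOSTED_ZONE_ID = "Z_EXAMPLE"        # hypothetical hosted zone
RECORD_NAME = "relay.example.com."  # hypothetical gateway hostname


def upsert_weighted_record(set_identifier, ip_address, weight=50, health_check_id=None):
    """UPSERT one member of the weighted record set."""
    record = {
        "Name": RECORD_NAME,
        "Type": "A",
        "SetIdentifier": set_identifier,  # distinguishes members of the weighted set
        "Weight": weight,
        "TTL": 60,
        "ResourceRecords": [{"Value": ip_address}],
    }
    if health_check_id:
        record["HealthCheckId"] = health_check_id
    route53.change_resource_record_sets(
        HostedZoneId=HOSTED_ZONE_ID,
        ChangeBatch={"Changes": [{"Action": "UPSERT", "ResourceRecordSet": record}]},
    )


# Two gateways at different ISPs, each carrying half of the requests.
upsert_weighted_record("gateway-1", "203.0.113.10")
upsert_weighted_record("gateway-2", "198.51.100.20")
```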

We introduce the Route 53 magic by adding health checks and associating one with each of the two resource record sets. Each health check involves Route 53 servers around the world periodically requesting a pre-determined URL on the HTTP gateway in search of a non-error HTTP status code. If an entry fails more than three checks (they run roughly every 30 seconds), it is removed from the weighted set.
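
Here’s a similarly hedged sketch of the health check side, again with placeholder values: it creates an HTTP health check against one gateway’s status URL and re-UPSERTs that gateway’s weighted record with the check attached.

```python
# Minimal sketch: create an HTTP health check and associate it with one of the
# weighted records from the previous sketch. All names, paths, and IPs are placeholders.
import boto3

route53 = boto3.client("route53")

# Route 53 checkers around the world hit this IP/path roughly every 30 seconds;
# three consecutive failures mark the endpoint unhealthy.
health_check_id = route53.create_health_check(
    CallerReference="gateway-1-healthcheck",  # any unique string; makes the call idempotent
    HealthCheckConfig={
        "IPAddress": "203.0.113.10",
        "Port": 80,
        "Type": "HTTP",
        "ResourcePath": "/health",  # hypothetical status URL returning a non-error code
        "RequestInterval": 30,
        "FailureThreshold": 3,
    },
)["HealthCheck"]["Id"]

# Re-UPSERT the weighted record with the health check attached; Route 53 pulls
# it out of DNS rotation while the check is failing.
route53.change_resource_record_sets(
    HostedZoneId="Z_EXAMPLE",
    ChangeBatch={"Changes": [{
        "Action": "UPSERT",
        "ResourceRecordSet": {
            "Name": "relay.example.com.",
            "Type": "A",
            "SetIdentifier": "gateway-1",
            "Weight": 50,
            "TTL": 60,
            "ResourceRecords": [{"Value": "203.0.113.10"}],
            "HealthCheckId": health_check_id,
        },
    }]},
)
```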

By the time Route 53 picks up on the failure, yanks the entry from the weighted set, and most of the faster ISP DNS resolvers notice the change, about two minutes have elapsed.

Why this is a good fit for EMDR

With EVE Market Data Relay, it’s not the end of the world if 50% of user-submitted data gets lost over the minute and a half it takes for Route 53 to remove the unhealthy gateway. It’s highly likely that another user will re-submit the very same data that was lost. Even if we never see the data, the loss of a few data points here and there doesn’t hurt us much in our case.

With that said, DNS failover in general can be sub-optimal in a few basic cases:

  • You don’t want to leave failover up to the many crappy ISP DNS servers around the net. Not all will pick up the change in a timely manner.
  • You can’t afford to lose some requests here and there. DNS failover isn’t seamless, so your application would need to be smart enough on both ends if data loss is unacceptable.

For simpler cases like mine, it’s wonderful.

Price

In my case, Route 53 is health checking two servers that are external to AWS, which means I spend a whopping $1.50/month on Route 53’s DNS failover.

Assorted useful bits of documentation

More details on how the health checks work can be found in the Route 53 documentation.

Refer to the Amazon Route 53 DNS failover documentation for the full run-down.