Linode NextGen

Today, Linode announced the completion of their upgrade effort, dubbed “Linode NextGen”. Upgrading an international fleet of servers is nothing to sneeze at, but they pulled it off with flying colors.

In the last few months, we saw Linode:

  • Double the amount of RAM per instance
  • Bump all instances up to eight virtual cores (from four)
  • Invest heavily in improving their network
  • Raise the instance outbound network cap by 5x
  • Increase outbound monthly transfer by 10x

They have allowed much higher usage of their resources without compromising on performance. This wasn’t a matter of just upping the quotas and calling it a day.

Why does it matter?

One need only take a look at lowendbox or ServerBear to see that there are plenty of affordable VPS providers. Linode is still nowhere close to being the cheapest, but that’s not really Linode’s game. They’re going to give you something a little faster and a little roomier, and they’re going to keep you happy with their support. While it’s entirely possible you’ll find someone with similar specs, you’ll be hard pressed to find a competitor with the strength of this offering from top to bottom (price, hardware, network, service/support).

Linode has traditionally been a little more expensive, very developer-centric, and has (in my experience) had a pretty good customer service story. This latest round of upgrades doesn’t push Linode down into the “budget” category (nor should it), but it does make a good chunk of their competitors in the same category/price range look inadequate. For example, I’m not sure how I could justify using Rackspace for my own purposes after these adjustments.

We all win

Regardless of whether you use Linode or even like them, let’s be clear about one thing: When upgrades and bigger jumps like this happen, we all win. Other providers are going to look at this and will have to decide whether their current offerings need a shot in the arm. Linode is no industry juggernaut, but they are well known enough for this to cause a few ripples.

Let’s sit back and see who makes the next big jump.

Amazon Route 53 DNS failover

While it is no longer shiny and new, I just recently got a chance to sit down and play with Amazon Route 53’s DNS failover feature. So far, I have found it to be simple to set up and very useful for cases where DNS failover is acceptable.

My use case

I run EVE Market Data Relay (EMDR), a distributed system that relays EVE Online market data. Every piece of the infrastructure has at least one redundant copy, and the only single point of failure is the DNS service itself. We can afford to lose a little bit of data during failover, but a complete outage is something we can’t have.

Sitting at the top of the system are two HTTP gateways on different machines at different ISPs. These are set up as a weighted record set, with each gateway weighing in at 50/50 (requests are divided evenly).

We introduce the Route 53 magic by adding a health check and associating it with each of the two resource record sets. The health check involves Route 53 servers around the world periodically requesting a pre-determined URL on the two HTTP gateways in search of a non-error HTTP status code. If an entry fails more than three consecutive checks (checks run roughly every 30 seconds), that entry is removed from the weighted set.

By the time that Route 53 picks up on the failure, yanks the entry from the weighted set, and most fast ISP DNS servers notice the change, about two minutes have elapsed.
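The removal behavior described above can be sketched as a toy model (a simplification for illustration only, not how Route 53 is actually implemented; the class and method names are hypothetical):

```python
import random

# An entry is removed after it fails more than three consecutive checks.
FAILURE_THRESHOLD = 3


class WeightedSet:
    """Toy model of a weighted record set with health-check-based removal."""

    def __init__(self):
        self.entries = {}  # endpoint -> {"weight": int, "failures": int}

    def add(self, endpoint, weight):
        self.entries[endpoint] = {"weight": weight, "failures": 0}

    def record_health_check(self, endpoint, healthy):
        entry = self.entries.get(endpoint)
        if entry is None:
            return
        if healthy:
            entry["failures"] = 0
        else:
            entry["failures"] += 1
            if entry["failures"] > FAILURE_THRESHOLD:
                # The unhealthy entry gets yanked from the weighted set.
                del self.entries[endpoint]

    def pick(self):
        """Choose an endpoint according to the remaining weights."""
        endpoints = list(self.entries)
        weights = [self.entries[e]["weight"] for e in endpoints]
        return random.choices(endpoints, weights=weights, k=1)[0]
```

With both gateways healthy, pick() splits traffic 50/50; after four consecutive failed checks (roughly two minutes at a 30-second interval), only the healthy gateway remains in the set.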

Why this is a good fit for EMDR

With EVE Market Data Relay, it’s not the end of the world if 50% of user-submitted data gets lost over the minute and a half it takes for Route 53 to remove the unhealthy gateway. It’s highly likely that another user will re-submit the very same data that was lost. Even if we never see the data, the loss of a few data points here and there doesn’t hurt us much in our case.

With that said, DNS failover in general can be sub-optimal in a few basic cases:

  • You don’t want to leave failover up to the many crappy ISP DNS servers around the net. Not all will pick up the change in a timely manner.
  • You can’t afford to lose some requests here and there. DNS failover isn’t seamless, so your application would need to be smart enough on both ends if data loss is unacceptable.

For simpler cases like mine, it’s wonderful.

Price

In my case, Route 53 is health checking two servers that are external to AWS, which means I spend a whopping $1.50/month on Route 53’s DNS failover.

Assorted useful bits of documentation

More details on how the health checks work can be found in the Route 53 documentation.

Refer to the Amazon Route 53 DNS failover documentation for the full run-down.

namecheap.com EssentialSSL and Amazon ELB

For those using the SSL capabilities of Amazon Elastic Load Balancer (ELB), you often need to upload a Certificate Chain to avoid SSL errors in some browsers.

We use Namecheap and their Comodo EssentialSSL wildcard certs. If you specify “Other” as your server type, you’ll get a collection of files that comprise the Certificate Chain/CA Bundle, instead of a single file (like you’d get if you specified Apache during the CSR submission process). If you haven’t purchased your cert yet, save yourself some trouble and just say you’re using Apache. If you specified “Other” or have found yourself with a bunch of *.crt files, read on.

I am going to assume that you are using OpenSSL. I am also going to assume that you have the same files I do. If this isn’t the case, you could try downloading their CA Bundle, but this may or may not work (or be up to date).

cat EssentialSSLCA_2.crt ComodoUTNSGCCA.crt UTNAddTrustSGCCA.crt \
    AddTrustExternalCARoot.crt > ca-chain.crt

This is all you need to paste into the “Certificate Chain” field in the SSL Cert selection dialog on the AWS Management Console.
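To sanity-check that the chain works end to end, you can attempt a verified TLS handshake from a client that doesn't fetch missing intermediates on its own. A sketch using Python's standard library (the hostname is a stand-in for your own ELB-backed domain):

```python
import socket
import ssl


def chain_verifies(hostname, port=443, timeout=10):
    """Return True if a TLS handshake against the host verifies cleanly.

    An incomplete certificate chain typically shows up here as an
    ssl.SSLError raised during the handshake.
    """
    context = ssl.create_default_context()
    try:
        with socket.create_connection((hostname, port), timeout=timeout) as sock:
            with context.wrap_socket(sock, server_hostname=hostname):
                return True
    except ssl.SSLError:
        return False


# Example (hypothetical hostname; replace with your own):
# chain_verifies("www.example.com")
```

Some browsers cache or fetch intermediate certs themselves, so a page loading fine in your own browser doesn't prove the chain is complete.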

While you’re here, have another tip: If the dialog complains about an invalid Private Key or Public Key Certificate, you probably need to PEM encode it. My key was RSA, so this is what PEM-encoding looked like for me:

openssl rsa -in mycert.com.key -out mycert.com.pem

This is then safe to paste into the “Private key” field. If for whatever reason your *.crt file came back in another format, you could use this same set of steps to encode it (though mine was sent to me PEM-encoded already).
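If you're unsure whether a given key or cert file is already PEM encoded, a rough heuristic (an illustration, not a replacement for inspecting the file with openssl) is to look for the textual BEGIN marker:

```python
def looks_like_pem(path):
    """Rough check: PEM files are base64 text wrapped in
    '-----BEGIN ...-----' markers, while DER files are raw binary."""
    with open(path, "rb") as handle:
        return handle.read().lstrip().startswith(b"-----BEGIN")
```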

Fabric task for notifying New Relic of a code deploy

We’ve been doing some playing around with New Relic lately at Pathwright. One of the neat things it does is track when code deploys happen, and how they affect responsiveness and resource consumption.

In order to notify New Relic when a deploy happens, you simply POST to their web-based API with the information you’d like to include (change logs, commit hashes, etc).

We currently do this via a Fabric task, which I figured I’d share. We tend to run this from our deploy task. Enjoy!

import getpass
import socket

import requests
from fabric.api import cd, env, run

# Used below to report who performed the deploy.
LOCAL_USERNAME = getpass.getuser()

def notify_newrelic_of_deploy(old_commit_hash):
    """
    New Relic tracks deploy events. Send a notification via their HTTP API.

    :param str old_commit_hash: The previously deployed git hash. This is
        easily retrieved on a remote machine by running 'git rev-parse HEAD'.
    """

    with cd(env.REMOTE_CODEBASE_PATH):
        new_commit_hash = run('git rev-parse HEAD')
        changes = run('git --no-pager log %s..HEAD' % old_commit_hash)

    headers = {
        # Adjust this to reflect your API key.
        'x-api-key': 'xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx',
    }
    payload = {
        'deployment[app_name]': 'Your App Name',
        # This is also very important to update with your own value.
        'application_id': '1234567',
        'deployment[description]': 'Fabric deploy is fun deploy',
        'deployment[revision]': new_commit_hash,
        'deployment[changelog]': changes,
        'deployment[user]': '%s@%s' % (LOCAL_USERNAME, socket.gethostname()),
    }

    response = requests.post("https://rpm.newrelic.com/deployments.xml",
                             data=payload, headers=headers)
    # Fail loudly if New Relic rejects the notification.
    response.raise_for_status()