Linode NextGen

Today, Linode announced the completion of their upgrade effort, dubbed “Linode NextGen”. Upgrading an international fleet of servers is nothing to sneeze at, but they pulled it off with flying colors.

In the last few months, we saw Linode:

  • Double the amount of RAM per instance
  • Bump all instances up to eight virtual cores (from four)
  • Invest heavily in improving their network
  • Raise the instance outbound cap by 5x
  • Increase outbound monthly transfer by 10x

They have allowed much higher usage of their resources without compromising on performance. This wasn’t a matter of just upping the quotas and calling it a day.

Why does it matter?

One need only take a look at LowEndBox or ServerBear to see that there are plenty of affordable VPS providers. Linode is still nowhere close to being the cheapest, but that’s not really Linode’s game. They’re going to give you something a little faster, a little roomier, and they’re going to keep you happy with their support. While it’s entirely possible you’ll find someone with similar specs, you’ll be hard pressed to find a competitor with the strength of this offering from top to bottom (price, hardware, network, service/support).

Linode has traditionally been a little more expensive, very developer-centric, and has (in my experience) had a pretty good customer service story. This latest round of upgrades doesn’t push Linode down into the “budget” category (nor should it), but it does make a good chunk of their competitors in the same category/price range look inadequate. For example, after these adjustments I’m not sure how I could justify using Rackspace for my own purposes.

We all win

Regardless of whether you use Linode or even like them, let’s be clear about one thing: when big jumps like this happen, we all win. Other providers are going to look at this and will have to decide whether their current offerings need a shot in the arm. Linode is no industry juggernaut, but they are well known enough for this to cause a few ripples.

Let’s sit back and see who makes the next big jump.

Amazon Route 53 DNS failover

While it is no longer shiny and new, I just recently got a chance to sit down and play with Amazon Route 53’s DNS failover feature. So far, I have found it simple to set up and very useful for cases where DNS failover is acceptable.

My use case

I run EVE Market Data Relay (EMDR), a distributed system for relaying EVE Online market data. Every piece of the infrastructure has at least one redundant copy, and the only single point of failure is the DNS service itself. We can afford to lose a little bit of data during failover, but a complete outage is something we can’t have.

Sitting at the top of the system are two HTTP gateways on different machines at different ISPs. These are set up as a weighted record set, with the two gateways weighted 50/50 (requests are divided evenly between them).

We introduce the Route 53 magic by adding a health check and associating it with each of the two resource record sets. The health check involves Route 53 servers around the world periodically requesting a pre-determined URL on the two HTTP gateways, looking for a non-error HTTP status code. If either entry fails more than three times (checks run roughly every 30 seconds), it is removed from the weighted set.
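
For the curious, here is a minimal sketch of what that setup looks like through the API, shown with boto3-style calls. The hosted zone ID, IP addresses, domain, and check path below are placeholders, not EMDR’s actual values.

import boto3

route53 = boto3.client('route53')

# Create an HTTP health check for one gateway: Route 53 servers around
# the world request the given path every 30 seconds and mark the
# endpoint unhealthy after 3 consecutive failures.
check = route53.create_health_check(
    CallerReference='gateway-1-check',      # any unique string
    HealthCheckConfig={
        'IPAddress': '203.0.113.10',        # placeholder gateway IP
        'Port': 80,
        'Type': 'HTTP',
        'ResourcePath': '/health-check',    # placeholder check URL
        'RequestInterval': 30,
        'FailureThreshold': 3,
    },
)

# Attach the health check to one half of the weighted record set.
# A second, identical change (different SetIdentifier, IP, and health
# check ID) forms the other 50% of the weight.
route53.change_resource_record_sets(
    HostedZoneId='Z123EXAMPLE',             # placeholder hosted zone ID
    ChangeBatch={'Changes': [{
        'Action': 'CREATE',
        'ResourceRecordSet': {
            'Name': 'relay.example.com.',
            'Type': 'A',
            'SetIdentifier': 'gateway-1',
            'Weight': 50,
            'TTL': 60,
            'ResourceRecords': [{'Value': '203.0.113.10'}],
            'HealthCheckId': check['HealthCheck']['Id'],
        },
    }]},
)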

By the time that Route 53 picks up on the failure, yanks the entry from the weighted set, and most fast ISP DNS servers notice the change, about two minutes have elapsed.

Why this is a good fit for EMDR

With EVE Market Data Relay, it’s not the end of the world if 50% of user-submitted data gets lost over the minute and a half it takes for Route 53 to remove the unhealthy gateway. It’s highly likely that another user will re-submit the very same data that was lost. Even if we never see the data, the loss of a few data points here and there doesn’t hurt us much in our case.

With that said, DNS failover in general can be sub-optimal in a few basic cases:

  • You don’t want to leave failover up to the many crappy ISP DNS servers around the net. Not all will pick up the change in a timely manner.
  • You can’t afford to lose some requests here and there. DNS failover isn’t seamless, so your application would need to be smart enough on both ends if data loss is unacceptable.

For simpler cases like mine, it’s wonderful.

Price

In my case, Route 53 is health checking two servers that are external to AWS ($0.75/month per non-AWS endpoint), which means I spend a whopping $1.50/month on Route 53’s DNS failover.

Assorted useful bits of documentation

More details on how the health checks work can be found in the Route 53 documentation.

Refer to the Amazon Route 53 DNS failover documentation for the full run-down.

namecheap.com EssentialSSL and Amazon ELB

If you use the SSL capabilities of Amazon Elastic Load Balancer (ELB), you often need to upload a Certificate Chain to avoid SSL errors in some browsers.

We use Namecheap and their Comodo EssentialSSL wildcard certs. If you specify “Other” as your server type, you’ll get a collection of files that comprise the Certificate Chain/CA Bundle, instead of a single file (like you’d get if you specified Apache during the CSR submission process). If you haven’t purchased your cert yet, save yourself some trouble and just say you’re using Apache. If you specified “Other” or have found yourself with a bunch of *.crt files, read on.

I am going to assume that you are using OpenSSL. I am also going to assume that you have the same files I do. If this isn’t the case, you could try downloading their CA Bundle, but this may or may not work (or be up to date).

cat EssentialSSLCA_2.crt ComodoUTNSGCCA.crt UTNAddTrustSGCCA.crt \
    AddTrustExternalCARoot.crt > ca-chain.crt

The contents of the resulting ca-chain.crt file are all you need to paste into the “Certificate Chain” field in the SSL Cert selection dialog on the AWS Management Console.

While you’re here, have another tip: If the dialog complains about an invalid Private Key or Public Key Certificate, you probably need to PEM encode it. My key was RSA, so this is what PEM-encoding looked like for me:

openssl rsa -in mycert.com.key -out mycert.com.pem

This is then safe to paste into the “Private key” field. If for whatever reason your *.crt file came back in another format (DER, for example), a similar openssl x509 invocation will convert it to PEM (though mine was sent to me PEM-encoded already).

Amazon Elastic Transcoder Review

Amazon Elastic Transcoder was released just a few short days ago. Given that we do a lot of encoding at Pathwright, it was of high interest to us. A year or two ago, we wrote media-nommer, which is similar to Amazon’s Transcoder, and it has worked well for us. However, as a small company with manpower constraints, we’ve had trouble finding the time to keep maintaining and improving media-nommer.

With the hope that we could simplify and reduce engineering overhead, we took the new Elastic Transcoder service for a spin at Pathwright.

Web-based management console impressions

One of the strongest points of AWS Elastic Transcoder is its web management console. It makes it very easy for those who don’t have the time or capability to work with the API to fire off some quick transcoding jobs. With that said, I will make a few points:

  • At the time of this article’s writing (Jan 30, 2013), it is extremely tedious to launch multiple encoding jobs. There’s a lot of typing and manual labor involved. For one-off encodings, it’s great.
  • Transcoding presets are easy to set up. I appreciate that!
  • Some of the form validation Javascript is just outright broken. You may end up typing a value that isn’t OK and hitting “Submit”, only to find that nothing happens. The intended behavior is presumably to show an error message, but error messages are rendered very inconsistently (particularly on the job submission page).
  • The job status information could be a lot more detailed. I’d really like to see a numerical figure for how many minutes AWS calculates the video to be, even if this is only available once a job is completed. This lets you know exactly how much you’re being billed for on a per-video basis. Currently, you just get a lump sum bill, which isn’t helpful. It’d also be nice to see when an encoding job was started/finished on the summary (you can do this by listening to an SNS topic, but you probably aren’t doing that if you’re using the web console).

Web API impressions

To work Elastic Transcoder into Pathwright, we turned to the excellent boto (to which we are regular contributors). For the most part, this was a very straightforward process, with some caveats (a rough sketch of the basic API flow follows this list):

  • The transcoding job state SNS notifications contain zero information about the encoding minutes you were billed for that particular job. In our case, we bill our users for media management in Pathwright, so we must know how much each encoding job is costing us, and who it belonged to. Each customer gets a bill at the end of the month, without needing to hassle with an AWS account (these aren’t technical users, for the most part). Similarly, the “get job” API request shows no minutes figure, either.
  • If you’re writing something that uses external AWS credentials to manage media, you’ve got some setup work to do. Before you can submit job #1, you’re going to need to create an SNS topic, an IAM role, a Transcoder Pipeline, and any presets you need (if the defaults aren’t sufficient). If you make changes to any of these pieces, you need to sync the changes out to every account that you “manage”. These are all currently required to use Transcoder. This is only likely to be a stumbling block for services and applications that manage external AWS accounts (for example, we encode videos for people, optionally using their own AWS account instead of ours).
  • At the time of this article’s writing, the documentation for the web API is severely limited. There is a lack of example request/response cycles with anything but one or two of the most common scenarios. I’d like to see some of the more complex request/responses.
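
For reference, here is a rough sketch of the basic flow described above. It uses boto3-style calls rather than the boto interface we actually wrote against, and the pipeline name, bucket names, role ARN, SNS topic ARNs, keys, and preset ID are all placeholders.

import boto3

transcoder = boto3.client('elastictranscoder')

# One-time setup: a pipeline ties together the input/output buckets, the
# IAM role that Transcoder assumes, and the SNS topics that receive
# job-state notifications. The role and topic must already exist, which
# is part of the setup overhead mentioned above.
pipeline = transcoder.create_pipeline(
    Name='pathwright-example',                  # placeholder
    InputBucket='example-input-bucket',         # placeholder
    OutputBucket='example-output-bucket',       # placeholder
    Role='arn:aws:iam::123456789012:role/transcoder-example',
    Notifications={
        'Progressing': '',
        'Completed': 'arn:aws:sns:us-east-1:123456789012:transcode-events',
        'Warning': '',
        'Error': 'arn:aws:sns:us-east-1:123456789012:transcode-events',
    },
)
pipeline_id = pipeline['Pipeline']['Id']

# Per-video work: submit a job against the pipeline. The preset ID is a
# placeholder; Transcoder ships with system presets and lets you define
# your own.
job = transcoder.create_job(
    PipelineId=pipeline_id,
    Input={'Key': 'masters/lesson-01.mov'},     # key in the input bucket
    Outputs=[{
        'Key': 'encoded/lesson-01.mp4',         # must not already exist
        'PresetId': '1351620000001-000010',     # placeholder preset ID
        'ThumbnailPattern': '',                 # no thumbnails
    }],
)
print(job['Job']['Status'])                     # 'Submitted'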

Some general struggles/pain points

This article has primarily focused on the issues we ran into, so we’ll criticize a little more before offering praise:

  • As is the case for anyone not paying money for premium support, AWS has terrible customer support. If you want help with the Transcoding service, the forums are basically your only option. The responses seen in there so far haven’t been very good or timely. However, it is important to note that this support model is not limited to Elastic Transcoder. It is more of an organizational problem. I am sure this is on their minds, and if there is a group that can figure out how to offer decent support affordably, it’d be Amazon. Just be aware that you’re not going to get the best, fastest support experience without paying up.
  • We do low, medium, and high quality transcodings for each video we serve at Pathwright. Our lower quality encoding is smaller (in terms of dimensions) than the medium and high quality encodings. With media-nommer and ffmpeg, we were able to specify a fixed width and let ffmpeg determine the height while preserving the aspect ratio (see the sketch after this list). The Amazon Transcoder currently requires both height and width for each preset if you want to specify dimensions at all. Given that our master video files come in all kinds of dimensions and aspect ratios, this is a non-starter for us.
  • If you submit an encoding job with an output S3 key name that already exists, the job fails. While you do open yourself up to some issues in doing so, we would appreciate the ability to say “I want to overwrite existing files in the output bucket”. There is probably a technical reason for the current behavior, but I think it fails the practicality test. A solution can and should be found to allow this.
  • Because of the aforementioned poor support, I still don’t have a good answer to this, but it doesn’t appear that you can do two-pass encodings. This is a bummer for us, as we’ve been able to get some great compression and quality doing this.
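
For the curious, this is roughly the fixed-width/auto-height ffmpeg behavior we rely on. It is a minimal sketch (not media-nommer’s actual code), and the file names and width are placeholder values.

import subprocess

def encode_fixed_width(source_path, dest_path, width=640):
    # Fixed width, auto height: the scale filter's '-2' tells ffmpeg to
    # preserve the source aspect ratio and round the height to an even
    # number (which H.264 encoders typically require). Elastic Transcoder
    # presets can't express this; they want an explicit width and height.
    subprocess.check_call([
        'ffmpeg',
        '-i', source_path,
        '-vf', 'scale=%d:-2' % width,
        dest_path,
    ])

# Hypothetical usage:
# encode_fixed_width('master-video.mov', 'low-quality.mp4', width=480)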

Overall verdict

For Pathwright, the Amazon Transcoder isn’t capable enough to get the nod just yet. However, the foundation that has been laid is very solid. The encodings themselves execute quickly, and it’s great not having to worry about the state of your own in-house encoding infrastructure.

The prices are very fair, and represent a large savings over Zencoder and Encoding.com at lower to moderate volumes. The price advantage does taper off as your scale gets very large, and those two services do offer a lot more capabilities. If your needs are basic, Amazon Transcoder is likely to be cheaper and “good enough” for you. If you need live streaming, closed captioning, or anything more elaborate, shell out and go with a more full-featured service.

Once some of the gaping feature gaps are filled and the platform has time to mature and stabilize, this could be a good service. If the customer support improves with the features, this could be an excellent service.

Verdict: Wait and see, but so far, so good.