Fabric task for notifying New Relic of a code deploy

We’ve been doing some playing around with New Relic lately at Pathwright. One of the neat things it does is track when code deploys happen, and how they affect responsiveness and resource consumption.

In order to notify New Relic when a deploy happens, you simply POST to their web-based API with the information you’d like to include (change logs, commit hashes, etc.).

We currently do this via a Fabric task, which I figured I’d share. We tend to run this from our deploy task. Enjoy!

import getpass
import socket

import requests
from fabric.api import cd, env, run

def notify_newrelic_of_deploy(old_commit_hash):
    """
    New Relic tracks deploy events. Send a notification via their HTTP API.

    :param str old_commit_hash: The previously deployed git hash. This is
        easily retrieved on a remote machine by running 'git rev-parse HEAD'.
    """

    with cd(env.REMOTE_CODEBASE_PATH):
        new_commit_hash = run('git rev-parse HEAD')
        changes = run('git --no-pager log %s..HEAD' % old_commit_hash)

    headers = {
        # Adjust this to reflect your API key.
        'x-api-key': 'xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx',
    }
    payload = {
        'deployment[app_name]': 'Your App Name',
        # This is also very important to update with your own value.
        'application_id': '1234567',
        'deployment[description]': 'Fabric deploy is fun deploy',
        'deployment[revision]': new_commit_hash,
        'deployment[changelog]': changes,
        # Identify who deployed, and from which machine.
        'deployment[user]': '%s@%s' % (getpass.getuser(), socket.gethostname()),
    }

    requests.post("https://rpm.newrelic.com/deployments.xml",
                  data=payload, headers=headers)
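
For context, here’s roughly how this gets called from a deploy task. The pull/restart steps below are just placeholders rather than our actual deploy logic; the important bit is capturing the currently deployed hash before updating the code.

def deploy():
    """
    Simplified sketch of a deploy task. Grab the hash that is live right
    now, update the code, then let New Relic know what changed.
    """
    with cd(env.REMOTE_CODEBASE_PATH):
        old_commit_hash = run('git rev-parse HEAD')
        run('git pull')
        # ... restart services, run migrations, etc. ...

    notify_newrelic_of_deploy(old_commit_hash)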

Switched to Pelican

For the last four years, my blog has been powered by Django. As I have found myself becoming busier and busier, I have stopped wanting to hassle with keeping the server and the application up to date.

After a weekend of tinkering and conversions, I’m now up and running on Pelican. Generation is relatively quick, I like that it uses Jinja, and blogging in reStructuredText is strangely… relaxing? The only downside is that the documentation, while reasonably complete, has some unclear spots (custom page generation and RSS/Atom feed settings in particular).
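
For anyone hitting the same feed-settings confusion, here’s a minimal pelicanconf.py sketch of the relevant options. The setting names are from the Pelican docs as I understand them, so double-check them against the version you’re running; the URL and paths are placeholders.

# pelicanconf.py (excerpt)
SITEURL = 'http://example.com'

# Feed links are built against FEED_DOMAIN; pointing it at SITEURL keeps
# the URLs inside the generated feeds absolute.
FEED_DOMAIN = SITEURL

# Paths are relative to the output directory. Set any of these to None
# to skip generating that feed.
FEED_ALL_ATOM = 'feeds/all.atom.xml'
FEED_ALL_RSS = 'feeds/all.rss.xml'
CATEGORY_FEED_ATOM = 'feeds/%s.atom.xml'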

Ansible first impressions

After brief visits with Puppet and Chef for config management, I’ve set my sights on Ansible. It’s late and I’ve been staring at this stuff for way too long today, but here are some early observations:

  • I really like that it is written in Python. Puppet and Chef are great pieces of software, but I spend my days staring at Python. It’s nice not having to context switch away.
  • The documentation, while organized somewhat weirdly, is surprisingly thorough and helpful. I found myself much less frustrated and overwhelmed in comparison to my forays into Puppet and Chef.
  • It’s just SSH.
  • The playbook organization and format make a lot of sense to me. It feels a whole lot less complex than Chef in particular.

As far as negatives:

  • The documentation is very good but it could use some organizational tweaking. The related links at the bottom of some of the pages are very erratic and sometimes incomplete.
  • If you want to use Ansible via its Python API, the docs for that are pretty incomplete (rough sketch below).
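
To illustrate what I mean by the Python API, here’s a minimal sketch pieced together from my reading of those docs. Treat the exact class and argument names as assumptions on my part rather than verified usage.

import ansible.runner

# Run a single module against a host pattern from the inventory.
runner = ansible.runner.Runner(
    module_name='ping',    # Any Ansible module name should work here.
    module_args='',
    pattern='webservers',  # Hypothetical host group.
    forks=10,
)
results = runner.run()
# The result dict splits hosts into the ones Ansible reached ('contacted')
# and the ones it could not ('dark').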

So far, Ansible looks very promising. I think this is going to be a great fit for us at Pathwright. Perhaps we’ll even have some time to contribute improved docs.

Amazon Elastic Transcoder Review

Amazon Elastic Transcoder was released just a few short days ago. Given that we do a lot of encoding at Pathwright, this was of high interest to us. A year or two ago, we wrote media-nommer, which is similar to Amazon’s Transcoder, and it has worked well for us. However, as a small company with manpower constraints, we’ve had trouble finding time to continue maintaining and improving media-nommer.

With the hope that we could simplify and reduce engineering overhead, we took the new Elastic Transcoder service for a spin at Pathwright.

Web-based management console impressions

One of the strongest points of AWS Elastic Transcoder is its web management console. It makes it very easy for those who don’t have the time or capability to work with the API to fire off some quick transcoding jobs. With that said, I will make a few points:

  • At the time of this article’s writing (Jan 30, 2013), it is extremely tedious to launch multiple encoding jobs. There’s a lot of typing and manual labor involved. For one-off encodings, it’s great.
  • Transcoding presets are easy to set up. I appreciate that!
  • Some of the form validation Javascript is just outright broken. You may end up typing a value that isn’t OK, hitting “Submit”, only to find that nothing happens. The intended behavior is presumably to show an error message, but error messages are rendered very inconsistently (particularly on the job submission page).
  • The job status information could be a lot more detailed. I’d really like to see a numerical figure for how many minutes AWS calculates the video to be, even if this is only available once a job is completed. That would let you know exactly how much you’re being billed on a per-video basis; currently, you just get a lump-sum bill, which isn’t helpful (see the sketch just after this list for the kind of calculation we’d like to be able to do). It’d also be nice to see when an encoding job was started/finished on the summary (you can get this by listening to an SNS topic, but you probably aren’t doing that if you’re using the web console).
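
To illustrate the kind of per-video visibility I’m after, here’s a back-of-the-envelope sketch of the calculation. The rounding behavior and the rate are assumptions on my part; check the Elastic Transcoder pricing page for real numbers.

import math

def estimated_job_cost(output_duration_seconds, price_per_minute):
    """
    Rough per-video cost estimate. Assumes each output is billed in whole
    minutes, rounded up -- verify that against the pricing docs.
    """
    billable_minutes = int(math.ceil(output_duration_seconds / 60.0))
    return billable_minutes * price_per_minute

# e.g. a 9m42s output at a hypothetical $0.015/minute rate:
# estimated_job_cost(582, 0.015) -> 10 * 0.015 = $0.15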

Web API impressions

For the purpose of working Elastic Transcoder into Pathwright, we turned to the excellent boto (which we are regular contributors to). For the most part, this was a very straightforward process, with some caveats (a rough sketch of job submission follows the list below):

  • The transcoding job state SNS notifications contain zero information about the encoding minutes you were billed for that particular job. In our case, we bill our users for media management in Pathwright, so we must know how much each encoding job is costing us, and who it belonged to. Each customer gets a bill at the end of the month, without needing to hassle with an AWS account (these aren’t technical users, for the most part). Similarly, the “get job” API request shows no minutes figure, either.
  • If you’re writing something that uses external AWS credentials to manage media, you’ve got some setup work to do. Before you can submit job #1, you’re going to need to create an SNS topic, an IAM role, a Transcoder Pipeline, and any presets you need (if the defaults aren’t sufficient). If you make changes to any of these pieces, you need to sync the changes out to every account that you “manage”. These are all currently required to use Transcoder. This is only likely to be a stumbling block for services and applications that manage external AWS accounts (for example, we encode videos for people, optionally using their own AWS account instead of ours).
  • At the time of this article’s writing, the documentation for the web API is severely limited. There is a lack of example request/response cycles for anything but one or two of the most common scenarios. I’d like to see some of the more complex requests/responses.
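
For the curious, job submission through boto looks roughly like the sketch below. Elastic Transcoder support in boto is brand new, so treat the method and argument names as my best recollection rather than gospel, and note that the pipeline ID, preset ID, and S3 keys are placeholders.

import boto

# Rough sketch -- argument names may differ slightly in your boto version.
conn = boto.connect_elastictranscoder()

job = conn.create_job(
    pipeline_id='1234567890123-abcdef',       # Placeholder pipeline ID.
    input_name={
        'Key': 'masters/some-lesson.mov',     # Placeholder S3 key.
        'Container': 'auto',
    },
    output={
        'Key': 'encoded/some-lesson-720p.mp4',
        'PresetId': '1351620000001-000010',   # Placeholder preset ID.
        'ThumbnailPattern': '',
        'Rotate': 'auto',
    },
)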

Some general struggles/pain points

While this article has primarily focused on the issues we ran into, we’ll criticize a little more before offering praise:

  • As is the case for anyone not paying money for premium support, AWS has terrible customer support. If you want help with the Transcoding service, the forums are basically your only option. The responses seen in there so far haven’t been very good or timely. However, it is important to note that this support model is not limited to Elastic Transcoder. It is more of an organizational problem. I am sure this is on their minds, and if there is a group that can figure out how to offer decent support affordably, it’d be Amazon. Just be aware that you’re not going to get the best, fastest support experience without paying up.
  • We do low, medium, and high quality transcodings for each video we serve at Pathwright. Our lower quality encoding is smaller (in terms of dimensions) than the medium and high quality encodings. With media-nommer and ffmpeg, we were able to specify a fixed width and let ffmpeg determine the height while preserving the aspect ratio (a sketch of that approach follows this list). The Amazon Transcoder currently requires both height and width for each preset if you want to specify a dimension at all. Given that our master video files come in all kinds of dimensions and aspect ratios, this is a non-starter for us.
  • If you submit an encoding job with an output S3 key name that already exists, the job fails. While allowing this would open you up to some issues, we would appreciate the ability to say “I want to overwrite existing files in the output bucket”. There is probably a technical reason for the current behavior, but I think it fails the practicality test; a solution can and should be found.
  • Because of the aforementioned poor support, I still don’t have a good answer to this, but it doesn’t appear that you can do two-pass encodings. This is a bummer for us, as we’ve been able to get some great compression and quality doing this.
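
For comparison, here’s the sort of thing the fixed-width approach looks like with plain ffmpeg. This is a generic sketch rather than media-nommer’s actual invocation, and on older ffmpeg builds the '-2' height may need to be '-1' plus your own even-number rounding.

import subprocess

def encode_fixed_width(source_path, dest_path, width=640):
    """
    Pin the output width and let ffmpeg compute the height, preserving the
    source aspect ratio. The '-2' rounds the height to an even number,
    which H.264 requires.
    """
    subprocess.check_call([
        'ffmpeg', '-i', source_path,
        '-vf', 'scale=%d:-2' % width,
        dest_path,
    ])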

Overall verdict

For Pathwright, the Amazon Transcoder isn’t capable enough to get the nod just yet. However, the foundation that has been laid is very solid. The encodings themselves execute quickly, and it’s great not having to worry about the state of your own in-house encoding infrastructure.

The prices are very fair, and represent a large savings over Zencoder and Encoding.com at lower to moderate volumes. The price advantage does taper off as your scale gets very large, and those two services offer a lot more capabilities. If your needs are basic, Amazon Transcoder is likely to be cheaper and “good enough” for you. If you need live streaming, closed captioning, or anything more elaborate, shell out and go with a more full-featured service.

Once some of the glaring feature gaps are filled and the platform has time to mature and stabilize, this could be a good service. If the customer support improves along with the features, this could be an excellent service.

Verdict: Wait and see, but so far, so good.