Switched to Pelican

For the last four years, my blog has been powered by Django. As I have found myself becoming more and more busy, I have stopped wanting to hassle with keeping the server and the application up to date.

After a weekend of tinkering and conversions, I’m now up and running on Pelican. Generation is relatively quick, I like that it uses Jinja, and blogging in reStructuredText is strangely… relaxing? The only downside is that the documentation, while reasonably complete, has some unclear spots (custom page generation and RSS/Atom feed settings in particular).
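For anyone fighting with the same feed settings, here is roughly where I landed. This is just a sketch of a pelicanconf.py fragment; the setting names come from my reading of the Pelican 3.x docs and may differ between versions, and the domain is a placeholder:

```python
# pelicanconf.py (fragment) -- the feed settings that took some digging.
# Setting names are from Pelican 3.x; check your version's docs.
SITEURL = 'http://example.com'  # placeholder domain
FEED_DOMAIN = SITEURL

# One Atom feed for everything, plus per-category feeds; no per-author feeds.
FEED_ALL_ATOM = 'feeds/all.atom.xml'
CATEGORY_FEED_ATOM = 'feeds/%s.atom.xml'
AUTHOR_FEED_ATOM = None
AUTHOR_FEED_RSS = None
```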

Ansible first impressions

After brief visits with Puppet and Chef for config management, I’ve set my sights on Ansible. It’s late and I’ve been staring at this stuff for way too long today, but here are some early observations:

  • I really like that it is written in Python. Puppet and Chef are great pieces of software, but I spend my days staring at Python. It’s nice not having to context switch away.
  • The documentation, while organized somewhat weirdly, is surprisingly thorough and helpful. I found myself much less frustrated and overwhelmed in comparison to my forays into Puppet and Chef.
  • It’s just SSH.
  • The playbook organization and format make a lot of sense to me. It feels a whole lot less complex than Chef in particular.
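To give a flavor of that format, here’s a minimal playbook sketch (the group names, usernames, and file paths are all made up):

```yaml
---
# site.yml -- a minimal playbook sketch; names and paths are made up.
- hosts: webservers
  user: deploy
  tasks:
    - name: ensure nginx is installed
      apt: pkg=nginx state=present

    - name: push nginx config
      template: src=templates/nginx.conf.j2 dest=/etc/nginx/nginx.conf
      notify:
        - restart nginx

  handlers:
    - name: restart nginx
      service: name=nginx state=restarted
```

You run it with something like `ansible-playbook -i hosts site.yml`, and that’s the whole mental model: hosts, tasks, handlers.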

As far as negatives:

  • The documentation is very good but it could use some organizational tweaking. The related links at the bottom of some of the pages are very erratic and sometimes incomplete.
  • If you’re wanting to use Ansible as a Python API, the docs for this are pretty incomplete.

So far, Ansible looks very promising. I think this is going to be a great fit for us at Pathwright. Perhaps we’ll even have some time to contribute improved docs.

Amazon Elastic Transcoder Review

Amazon Elastic Transcoder was released just a few short days ago. Given that we do a lot of encoding at Pathwright, this was of high interest to us. A year or two ago, we wrote media-nommer, which is similar to Amazon’s Transcoder, and it has worked well for us. However, as a small company with manpower constraints, we’ve had issues finding time to continue maintaining and improving media-nommer.

With the hope that we could simplify and reduce engineering overhead, we took the new Elastic Transcoder service for a spin at Pathwright.

Web-based management console impressions

One of the strongest points of the AWS Elastic Transcoder is its web management console. It makes it very easy for those that don’t have the time or capability to work with the API to fire off some quick transcoding jobs. With that said, I will make a few points:

  • At the time of this article’s writing (Jan 30, 2013), it is extremely tedious to launch multiple encoding jobs. There’s a lot of typing and manual labor involved. For one-off encodings, it’s great.
  • Transcoding presets are easy to set up. I appreciate that!
  • Some of the form validation JavaScript is just outright broken. You may end up typing a value that isn’t OK, hitting “Submit”, only to find that nothing happens. The intended behavior is presumably to show an error message, but error messages are rendered very inconsistently (particularly on the job submission page).
  • The job status information could be a lot more detailed. I’d really like to see a numerical figure for how many minutes AWS calculates the video to be, even if this is only available once a job is completed. This lets you know exactly how much you’re being billed for on a per-video basis. Currently, you just get a lump sum bill, which isn’t helpful. It’d also be nice to see when an encoding job was started/finished on the summary (you can do this by listening to an SNS topic, but you probably aren’t doing that if you’re using the web console).

Web API impressions

For the purpose of working Elastic Transcoder into Pathwright, we turned to the excellent boto (which we are regular contributors to). For the most part, this was a very straightforward process, with some caveats:

  • The transcoding job state SNS notifications contain zero information about the encoding minutes you were billed for that particular job. In our case, we bill our users for media management in Pathwright, so we must know how much each encoding job is costing us, and who it belonged to. Each customer gets a bill at the end of the month, without needing to hassle with an AWS account (these aren’t technical users, for the most part). Similarly, the “get job” API request shows no minutes figure, either.
  • If you’re writing something that uses external AWS credentials to manage media, you’ve got some setup work to do. Before you can submit job #1, you’re going to need to create an SNS topic, an IAM role, a Transcoder Pipeline, and any presets you need (if the defaults aren’t sufficient). If you make changes to any of these pieces, you need to sync the changes out to every account that you “manage”. These are all currently required to use Transcoder. This is only likely to be a stumbling block for services and applications that manage external AWS accounts (for example, we encode videos for people, optionally using their own AWS account instead of ours).
  • At the time of this article’s writing, the documentation for the web API is severely limited. There is a lack of example request/response cycles with anything but one or two of the most common scenarios. I’d like to see some of the more complex request/responses.
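For reference, job submission through boto ended up looking roughly like the sketch below. The pipeline/preset IDs and key names are placeholders, and the exact function and keyword names come from my reading of boto’s elastictranscoder layer1 module, so double-check them against your boto version:

```python
def build_output(output_key, preset_id):
    """Build the 'output' dict for a transcoding job. The field names
    (Key, PresetId, Rotate, ThumbnailPattern) follow the Elastic
    Transcoder CreateJob request body; '' disables thumbnails."""
    return {
        'Key': output_key,
        'PresetId': preset_id,
        'Rotate': 'auto',
        'ThumbnailPattern': '',
    }

def submit_job(pipeline_id, input_key, output_key, preset_id):
    # Deferred import so the helper above is usable without boto installed.
    import boto.elastictranscoder
    conn = boto.elastictranscoder.connect_to_region('us-east-1')
    return conn.create_job(
        pipeline_id=pipeline_id,
        input_name={'Key': input_key, 'Container': 'auto'},
        output=build_output(output_key, preset_id),
    )
```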

Some general struggles/pain points

This article has primarily focused on the issues we ran into, and we’ll criticize a little more before offering praise:

  • As is the case for anyone not paying money for premium support, AWS has terrible customer support. If you want help with the Transcoding service, the forums are basically your only option. The responses seen in there so far haven’t been very good or timely. However, it is important to note that this support model is not limited to Elastic Transcoder. It is more of an organizational problem. I am sure this is on their minds, and if there is a group that can figure out how to offer decent support affordably, it’d be Amazon. Just be aware that you’re not going to get the best, fastest support experience without paying up.
  • We do low, medium, and high quality transcodings for each video we serve at Pathwright. Our lower quality encoding is smaller (in terms of dimensions) than the medium and high quality encodings. With media-nommer and ffmpeg, we were able to specify a fixed width and let ffmpeg determine the height (while preserving aspect ratio). The Amazon Transcoder currently requires height and width for each preset, if you want to specify a dimension. Given that our master video files are all kinds of dimensions and aspect ratios, this is a non-starter for us.
  • If you submit an encoding job with an output S3 key name that already exists, the job fails. While you do open yourself up to some issues in doing so, we would appreciate the ability to say “I want to overwrite existing files in the output bucket”. There is probably a technical reason for this restriction, but I think it fails the practicality test. A solution can and should be found to allow this.
  • Because of the aforementioned poor support, I still don’t have a good answer to this, but it doesn’t appear that you can do two-pass encodings. This is a bummer for us, as we’ve been able to get some great compression and quality doing this.
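As an aside, the “fixed width, computed height” behavior we get from ffmpeg is simple arithmetic; a sketch (rounding to even values, since most codecs require even dimensions):

```python
def scaled_height(src_width, src_height, target_width):
    """Compute an output height that preserves the source aspect ratio
    for a fixed target width, rounded to the nearest even number
    (most codecs require even frame dimensions)."""
    height = target_width * src_height / float(src_width)
    return int(round(height / 2.0)) * 2

# e.g. a 1280x720 master scaled to 640 wide stays 16:9:
# scaled_height(1280, 720, 640) == 360
```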

Overall verdict

For Pathwright, the Amazon Transcoder isn’t capable enough to get the nod just yet. However, the foundation that has been laid is very solid. The encodings themselves execute quickly, and it’s great not having to worry about the state of your own in-house encoding infrastructure.

The prices are very fair, and represent large savings over Zencoder and Encoding.com at lower to moderate volumes. The price advantage does taper off as your scale gets very large, and those two services do offer a lot more capabilities. If your needs are basic, Amazon Transcoder is likely to be cheaper and “good enough” for you. If you need live streaming, closed captioning, or anything more elaborate, shell out and go with a more full-featured service.

Once some of the gaping feature gaps are filled and the platform has time to mature and stabilize, this could be a good service. If the customer support improves with the features, this could be an excellent service.

Verdict: Wait and see, but so far, so good.

MUD tech is fun/cool, but…

As software development evolves, there are an ever-expanding number of ways to put together very complex, elaborate systems that are fun to geek out on. Multi-processing is becoming increasingly prevalent, distributed systems are a boon to cases with massive scalability or reliability requirements, and there are all kinds of neat data stores available. These are all definitely things that have a place in software.

But what does this mean, in the context of a MUD?

Honestly, very little. Even a very large MUD with hundreds of connected players can be run on a very pedestrian machine with a modest amount of RAM and very little bandwidth. Multi-threading is not a requirement for performance (and can often work against it). Highly distributed, multi-server setups are hitting nails with jackhammers. After all, we’re talking about a genre that primarily features smaller (<100 connected players) games.

An important thing to keep in mind when developing a MUD is that simple, well-thought-out MUD architectures have a huge advantage over their more complex kin: they are easier to develop, and they are a lot more likely to ever see the light of day.
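To make “simple” concrete: a single process running one select() loop comfortably serves a MUD-sized player count. Here’s a toy sketch of such a core loop (the function names are made up, and a real server would parse commands and track game state rather than just broadcasting input):

```python
import select
import socket

def make_server(host='127.0.0.1', port=0):
    """Create a non-blocking listening socket (port 0 = pick a free port)."""
    srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    srv.bind((host, port))
    srv.listen(5)
    srv.setblocking(False)
    return srv

def pump(srv, clients, timeout=0.05):
    """One iteration of the event loop: accept new connections, read
    player input, and (for this toy) broadcast it to everyone else."""
    readable, _, _ = select.select([srv] + clients, [], [], timeout)
    for sock in readable:
        if sock is srv:
            conn, _ = srv.accept()
            conn.setblocking(False)
            clients.append(conn)
        else:
            data = sock.recv(1024)
            if not data:  # client disconnected
                clients.remove(sock)
                sock.close()
            else:
                for other in clients:
                    if other is not sock:
                        other.sendall(data)
```

No threads, no message queues, no cluster: one loop, one box. That shape will carry you much further than the genre’s typical player counts require.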

MUDs are a labor of love

Although you’ll hear people lamenting the demise of the text-based genre, there will always be a niche out there for our kind. However, the vast majority of the games in our space are going to remain non-commercial, mostly developed by volunteers in their spare time.

By making simplicity a high priority, we make sure that it’s easier to make progress when you do have a few moments to sit down and work on your game. While you could write a super-distributed, super-fault-tolerant MUD server, it’s probably going to make future development more complicated, and you may run out of steam before you get anywhere close to being “done”. As someone who would love to see more great MUDs out there, this makes me sad!

Developing and launching a MUD is a labor of love, and it has to be fun and interesting to you. You have a finite amount of time to get your game launched before you probably lose interest and move on to other things. This “time limit” varies from person to person; a few people can regularly, routinely work on a game for years before going public, but those types are very rare. Your goal should be to open to the public before your “ok, time to move on” timer goes off. Simplicity is one of your biggest allies in pursuing this goal.

But… there are always caveats

A very valid counter-point to this argument for simplicity is that MUDs are a great way to learn new technologies, to experiment, and to do things one might not normally do. If your goal is to tinker more than to actually release a game to the world, you can throw all of this out the window. Get as complex/geeky/sexy as you’d like, and have a blast. Who cares if you never ship? That’s not the point for you, anyway.

For those that are most concerned with actually “shipping”

Focus on your core functionality. What is your “minimum viable product?” What is the most direct way to get to your opening day? Avoid unnecessary complexity, and remember that you can always refactor and improve performance/scalability as you grow. Nothing is set in stone.

Avoid traps like multi-threading, super scalability, and elaborate distributed setups unless you really need them (or just really want to play). You’re designing a go-kart, not an IndyCar.

Simplicity. Clarity. Focus. Oh, and ship the damned thing!

Why I won’t be gifting this Christmas

As I slogged through December 2010 with one eye on my wallet and the other on my Christmas shopping list, I arrived at an important realization: Despite not being able to afford it, I'm sitting here in this crowded, noisy, dirty department store because I feel obligated to do so. I feel that I have to go out and buy stuff, because that's what we do starting in November. Will my family and friends even like what I'm getting them? Are these things I'm shelling out for going to end up in the closet or in the trash? Do I feel good about subjecting my friends and family to this same unpleasant experience to buy something for me?

These feelings started surfacing early on in college in 2004, though I have been quiet about them until now. Things were financially tight enough that I wasn't sure I was going to be able to afford to stay in school. I felt like I had failed my friends and family in being unable to afford to get them much of anything. The TV commercials, posters, and radio ads all talked about buying and gifting. But I was working two jobs and barely affording tuition at Clemson. I couldn't swing it. Even as my career progressed and I reached relative financial comfort, I still dreaded the holiday season for what I felt it had become: a commercial holiday.

2010 was my last “regular” Christmas. Starting with 2011, I laid out my desires going forward: Do not buy a gift for me. I will not buy a gift for you. However, I will gladly donate to a charity of your choice, if you’ll do the same for me. The amount doesn't matter, let's just make something happen.

As expected, I received a very mixed reaction to this. Some were happy that they could cross me off their list, some really understood my reasoning, and others pressured me to re-join the commercialized mosh pit that I resented. I ended up donating to six different charities and spent more time with family (instead of shopping). Financially, there was no difference, since I donated what I would have normally spent (which is great). Best of all: my conscience was clear, and I didn't have my normal holiday blues.

This year, I will be doing the same. I will gladly donate to a charity of your choice if you will donate to mine: Shriner’s Hospital for Children. Simply shoot me an email letting me know that you donated (no need to tell me how much), and which charitable organization you’d like me to donate to.

As far as why I chose Shriner’s:

When I was in high school, the band would send a small detachment to Shriner’s on Christmas Eve. We'd play for the children who were too far from or too sick to go home. Given that they needed an extra and that it seemed like a great thing to do, I volunteered in December 2003.

We arrived, got set up, and were ready to greet the children, who were starting to filter in. Some were wheeled in on gurneys, others walked, some were in wheelchairs. I was surrounded by scared, sick kids who wanted nothing more than to be well and back at home with their families for Christmas. We played our set and for a brief moment, the kids were happy and all smiles. As we started packing up, the patients were carted back to their rooms. Some of them would be spending the holidays alone.

Going forward I’ll be donating to a charity of your choice instead of buying you something. I'd also rather spend time with friends and family instead of scrambling around (away from you) on Black Friday.

This will be the last you hear from me on this, as my intent is not to preach or look down on those that enjoy gift-giving. My intent in sharing this is to help friends and family understand why I'm such a scrooge.