New IMC and IRC extensions for Evennia MUD server

Evennia, the Twisted+Django MUD server, has just brought in shiny new support for IRC and IMC (inter-mud communication) as of revision 1456. This allows users to bind a local game channel to a remote IRC or IMC room. Evennia transparently sends/receives messages between the game server and the remote IRC/IMC server, while the players are able to talk over said channel just like they would a normal one.

It is even possible to bridge an IRC room to an IMC channel, with the Evennia server acting as a hub for messages. The next step may be to create a Jabber extension (any takers?).

If you’re curious, feel free to drop by #evennia on FreeNode to pester the developers.

django-ses + celery = Sea Cucumber

Maintaining, monitoring, and keeping a mail server in good standing can be pretty time-consuming. Having to worry about things like PTR records and being blacklisted over a false positive stinks pretty bad. We also didn’t want to have to run and manage yet another machine. Fortunately, the recently released Amazon Simple Email Service takes care of this for us, with no fuss and at very cheap rates.

We (DUO Interactive) started using django-ses in production a few weeks ago, and things have hummed along without a hitch. We were able to drop django-ses into our deployment with maybe three lines altered. It’s just an email backend, so this is to be expected.
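For reference, the change amounted to something like the following in settings.py. This is a sketch: the setting names follow django-ses’s README conventions, and the placeholder credential values are obviously ours to illustrate, so double-check against the version you install.

```python
# settings.py -- roughly the extent of our django-ses change (sketch)

# Route all of Django's outgoing mail through django-ses's backend.
EMAIL_BACKEND = 'django_ses.SESBackend'

# Credentials boto uses to talk to SES (placeholder values).
AWS_ACCESS_KEY_ID = 'your-access-key-id'
AWS_SECRET_ACCESS_KEY = 'your-secret-access-key'
```

Because Django routes all mail through whatever `EMAIL_BACKEND` points at, no view or model code needs to change.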

Our initial deployment was for a project running on Amazon EC2, so the latency between it and SES was tiny, and reliability has been great. However, we wanted to be able to make use of SES on our Django projects that were outside of Amazon’s network. Also, even projects internal to AWS should have delivery re-tries and non-blocking sending (more on that later).

Slow-downs and hiccups and errors, oh my!

The big problem we saw with using django-ses on a deployment external to Amazon Web Services was that any kind of momentary slow-down or API error (they happen, but very rarely) resulted in a lost email. The django-ses email backend uses boto’s new SES API, which is blocking, so we also saw email-sending views slow down when there were bumps in network performance. This was obviously just bad design on our part, as views should not block waiting for email to be handed off to an external service.

django-ses is meant to be as simple as possible. We wanted to take django-ses’s simplicity and add the following:

  • Non-blocking calls for email sending from views. The user shouldn’t see a visible slow-down.
  • Automatic re-try for API calls to SES that fail. Ensures messages get delivered.
  • The ability to send emails through SES quickly, reliably, and efficiently from deployments external to Amazon Web Services.
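In Sea Cucumber itself, both the hand-off and the retries are celery’s job. As a rough stdlib-only illustration of just the retry half (the names here are ours, not Sea Cucumber’s, and `send_fn` stands in for the boto SES call):

```python
import time


def send_with_retry(send_fn, max_retries=3, delay=1.0):
    """Retry a flaky send function a few times before giving up.

    A stdlib-only sketch of the behaviour celery task retries give
    us for free; `send_fn` stands in for the blocking boto SES call.
    """
    for attempt in range(max_retries + 1):
        try:
            return send_fn()
        except Exception:
            if attempt == max_retries:
                raise  # out of retries; surface the error
            time.sleep(delay)  # back off briefly, then try again
```

With celery in the picture, this loop disappears: the email send becomes a background task, the view returns immediately after queuing it, and failed API calls are re-queued by celery’s retry machinery.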

The solution: Sea Cucumber

We ended up taking Harry Marr’s excellent django-ses and adapting it to use the (also awesome) django-celery. Celery has all of the things we needed built in (auto retry, async hand-off of background tasks), and we already have it in use for a number of other purposes. The end result is the now open-sourced Sea Cucumber Django app. It was more appropriate to fork the project, rather than tack something on to django-ses, as what we wanted to do did not mesh well with what was already there.

An additional perk is that combining Sea Cucumber with django-celery’s handy admin views for monitoring tasks lets us have peace of mind that everything is working as it should.

Requirements

  • boto 2.04b+
  • Django 1.2 and up, but we won’t turn down patches for restoring compatibility with earlier versions.
  • Python 2.5+
  • celery 2.x and django-celery 2.x

Using Sea Cucumber

  • You may install Sea Cucumber via pip: pip install seacucumber
  • You’ll also probably want to make sure you have the latest boto: pip install --upgrade boto
  • Register for SES.
  • Look at the Sea Cucumber README.
  • Profit.
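Wiring it up is again just an email backend swap in settings.py. A sketch, with the backend path as we recall it from the Sea Cucumber README (verify against your copy):

```python
# settings.py -- point Django's mail machinery at Sea Cucumber,
# which hands each message off to a celery task. Backend path per
# the Sea Cucumber README; confirm against the version you install.
EMAIL_BACKEND = 'seacucumber.backend.SESBackend'
```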

Getting help

If you run into any issues, have questions, or would like to offer suggestions or ideas, you are encouraged to file them on our issue tracker. We also haunt the #duo room on FreeNode on weekdays.

Credit where it’s due

Harry Marr put a ton of work into boto’s new SES backend, within a day of Amazon releasing this service. He then went on to write django-ses. We are extremely thankful for all of his hard work, and thank him for cranking out a good chunk of code that Sea Cucumber still uses.

How do YOU deploy Django?

There are a good number of choices for deploying Django projects, whether they be Fabric, Chef, Puppet, Paver, or something else. However, each may be more suitable for some types of deployments than others. For example, Fabric’s lack of parallel execution may leave you frustrated if you have many dozens of hosts to deploy to, and Chef’s complexity and scattered documentation may be somewhat discouraging, as powerful as it seems.

Show and Tell

We (DUO Interactive) have been using Fabric for our deployments. We’ve got a few projects in development and maintenance, most of which are single app server situations (which Fabric is awesome for). To keep our deployments consistent across our various projects, we wrote a smattering of management commands and Fabric methods, and tossed it up on GitHub as django-fabtastic. There is no real documentation aside from comments, but we figured we’d share what has worked for our simple deployments.

We’ve been extremely happy with Fabric and fabtastic for our simple deployments, but a new project looks to be moving towards the more complex end of the deployment spectrum. We’re looking at the possibility of at least five app servers by year’s end, and are frustrated with our Fabric-based deployment (with a handful of app servers) for a few reasons:

  • It can be pretty slow with a good number of app servers.
  • Hosts are deployed to one at a time, with no ability to execute deployments in parallel.
  • We sit our EC2 app servers behind one of AWS’s Elastic Load Balancers. At any given time during deployment, users might get an updated app server, or one that has not been deployed to yet. This inconsistency is troubling. We’d still get this even with parallel execution (as it stands right now), but it would possibly be less of a problem.

In an Ideal World

I personally would love to see a deployment system that could do the following:

  1. Fire off an individual deployment step to a large number of nodes at a time (in parallel, not one at a time).
  2. Have the ability to wait until all nodes have completed certain steps before progressing onwards with the deployment. For example, wait until all nodes have checked out the latest version of the source before restarting the app servers. There would obviously still be a small window of time where not all nodes have reloaded the code, but it would potentially be a lot better than what we’re seeing right now.
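The two wishes above amount to “fan out each step, then barrier before the next one.” A minimal sketch of that control flow (no such tool existed for us at the time; here each step is a plain callable standing in for an SSH command run against a host):

```python
import threading


def deploy(hosts, steps):
    """Run each deployment step on every host in parallel, and wait
    for ALL hosts to finish a step before starting the next one.

    Sketch only: a real tool would SSH to each host; here a step is
    just a callable taking a hostname.
    """
    for step in steps:
        threads = [threading.Thread(target=step, args=(host,))
                   for host in hosts]
        for t in threads:
            t.start()          # fan out: every host runs the step at once
        for t in threads:
            t.join()           # barrier: nobody moves on until all finish
```

With this shape, “check out the latest source everywhere, then restart everywhere” guarantees no app server restarts onto new code while another is still fetching it.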

What do YOU do?

So I ask the readers: What tools do you use to deploy your projects with, and how many Django app servers do you deploy to? What works well for you with your deployment strategy, and what irritations do you face?

Evennia gets Southy

The Django-based Evennia MUD Server took another great step forward today in adding support for the excellent South migration app. This should knock down another barrier for those considering beginning development on their own games. Schema changes from this point forward should be fully covered by the South migrations that we provide.

For those that aren’t familiar with Evennia, it is the first well-established MUD server built with Django (perhaps the first, period). This makes game development extremely simple, and gives us a lot of power to webbify games (take a look at the included web-based client, for an example). Twisted handles our network layer, and some scheduling. Another unique thing (as far as MUDs go) is that there is actually a good bit of documentation.

Removing some dust

After sufficient embarrassment, I have finally forced myself to update the site. In addition to a visual overhaul, I’ve updated the codebase to use Django 1.3, and brought in a lot of the pieces of the stack we use at DUO Interactive.

I’ve also made the updated source for this site available in its entirety on GitHub. As a disclaimer, I don’t have the time or will to support the codebase, and it’s mostly there just so I can get this thing under version control. Perhaps it will be of use to someone as an example, though.