Labs Olympics 2011: How Is Babby?

For this year’s Labs Olympics I was on an all-star team made up of Aaron, Alison, Tim, and me, better known as the Labs Olympics Winners (note: we did not win, this was just our team name). Alison has a young baby at home, and Aaron missed our first brainstorming session for the birth of his niece, so it wasn’t a big surprise that we wound up with a plan to make a sophisticated baby monitor. (It might come as even less of a surprise that we named it How Is Babby in honor of an infamous web meme.)

At first, all we knew was that we wanted to use some random gadget or assortment of Arduino sensors to give geek parents a way to monitor their geek children; it wasn’t until we noticed a spare Microsoft Kinect sitting around the office that we realized exactly how far we could take it.

Kinect

The Kinect is an impressive device, sporting four microphones, RGB and IR cameras, an additional depth sensor, and a motor that tilts the whole unit up and down. Getting the Kinect running on Linux is a fairly well documented process. We leaned heavily on instructions from the OpenKinect community, which worked pretty much as written. After doing the usual

cmake, make, make install

dance, things worked without issue on Ubuntu 11.04.

Also included in the OpenKinect source tarball are bindings for a half dozen languages, including Python. Having a Python wrapper made experimenting incredibly easy, since I could use the Python OpenCV bindings to display image data and NumPy to manipulate the matrices that the Kinect driver returns.
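
For anyone curious, getting pixels out of the device really is only a few lines. Here is a rough sketch using the freenect synchronous wrapper together with the cv2 OpenCV bindings; treat it as illustrative rather than our exact code:

import freenect
import numpy as np
import cv2

# sync_get_depth() returns (depth, timestamp); depth is a 2D NumPy array
# of 11-bit distance values, one per pixel.
depth, _ = freenect.sync_get_depth()

# Shift the 11-bit values down to 8 bits so OpenCV can display them directly.
cv2.imshow('Kinect depth', (depth >> 3).astype(np.uint8))
cv2.waitKey(0)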

With these tools in hand we just had to decide what we actually wanted from the Kinect. We decided to take regular snapshots to present via a web interface, and to have a mechanism for the Kinect process to notify the web client when there was motion. Snapshots were extremely easy: with just a single line of code, we were able to bring back the RGB image from the Kinect’s main camera and convert it to a suitable format using OpenCV. Once we discovered that we could also pull in an IR image, we added a night-vision mode to the application as well. This way, the parent can either take a standard image in normal lighting or switch to the IR camera for the night. (Due to a hardware limitation of the Kinect, the RGB and IR cameras cannot be used at the same time.)
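
A simplified version of the snapshot code looked something like the following (again a sketch built on the freenect sync wrapper and cv2; constant names can vary between wrapper versions):

import freenect
import cv2

def save_snapshot(path, night_vision=False):
    # Grab one frame from the Kinect and write it to disk for the web app.
    if night_vision:
        # The IR stream arrives as a single 8-bit channel.
        frame, _ = freenect.sync_get_video(format=freenect.VIDEO_IR_8BIT)
    else:
        # The RGB stream arrives as an RGB array; OpenCV expects BGR.
        frame, _ = freenect.sync_get_video()
        frame = cv2.cvtColor(frame, cv2.COLOR_RGB2BGR)
    cv2.imwrite(path, frame)

save_snapshot('/tmp/babby.jpg')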

Given the uncertainty in the amount of available light, and since the depth sensor provided simpler data to work with (essentially a 2D matrix of depth values refreshed about 30 times per second), we decided to use the depth sensor to detect motion. NumPy’s matrix operations made this a breeze. By averaging the depth of each frame and comparing the deviation across a range of frames, we could flag each individual frame as likely containing motion or not. Depending on the desired sensitivity of the alerts, the application would wait for anywhere from ten to thirty consecutive frames of motion before notifying the web application that the baby was on the move.
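
The detection loop boiled down to something like this (the thresholds and the notify_web_app() hook are placeholders for illustration, not our actual numbers or code):

from collections import deque
import freenect
import numpy as np

WINDOW = 30        # how many recent frame averages to keep around
DEVIATION = 10.0   # spread in average depth that counts as motion
CONSECUTIVE = 10   # frames of sustained motion before alerting

history = deque(maxlen=WINDOW)
moving_frames = 0

while True:
    depth, _ = freenect.sync_get_depth()
    history.append(depth.mean())   # average depth of the whole frame

    # If the recent frame averages vary more than the threshold, call it motion.
    if len(history) == WINDOW and np.std(history) > DEVIATION:
        moving_frames += 1
    else:
        moving_frames = 0

    if moving_frames >= CONSECUTIVE:
        notify_web_app()           # placeholder hook into the web application
        moving_frames = 0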

The Web Application

A traditional baby monitor has a dedicated viewing apparatus; we liked the idea of a web console instead, one that could be viewed from anywhere, including from a mobile device. The main features of the web app would be viewing, motion alerts, and configuration of features such as SMS notifications and night vision. The basic web app was built with Django, but we used a few add-on libraries to help us hit our goals in the two days given for the contest.

We decided that the easiest way to get images to the user was to have the web page embed a single image that the monitoring software would update at a set interval. We used Socket.IO as a very lightweight way to keep that image current. In the best case, i.e. when the user’s browser supports it, Socket.IO uses WebSockets to keep the connection open, but it degrades gracefully and falls back to AJAX or other transports to get the job done.
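
The idea is simply a small push server that broadcasts an event whenever a fresh snapshot is written, which the page’s JavaScript uses to swap the new image into the embedded img tag. A minimal sketch of that idea, using the present-day python-socketio package as a stand-in rather than whatever Socket.IO backend we actually wired up:

import eventlet
import socketio

# A bare-bones Socket.IO server; the browser connects and listens for
# 'snapshot' events, then swaps the new URL into the page's <img> tag.
sio = socketio.Server(async_mode='eventlet')
app = socketio.WSGIApp(sio)

def announce_snapshot(image_url):
    # Broadcast the URL of the freshly written snapshot to every client.
    sio.emit('snapshot', {'url': image_url})

if __name__ == '__main__':
    eventlet.wsgi.server(eventlet.listen(('', 8001)), app)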

Because our team lacked a designer, we used a CSS framework to take care of cross-browser issues and provide some pre-designed UI elements. Twitter had just released its Bootstrap framework, so we went with that. It styled all of the UI elements on our site, including a navigation bar, alert boxes, buttons, and a form. Although we had some unresolved trouble with form elements not lining up with their labels, overall it proved very easy to work with.

The remaining technical component of the website was the AJAX alerts for motion events detected (and logged in a DB table) by the backend. The most important requirement was that alerts be somewhat persistent: a user shouldn’t be able to miss an all-important “the baby is moving” alert just because they happened to be clicking quickly between pages on the site. That meant we needed something more sophisticated than Django’s built-in messaging framework (django.contrib.messages). The answer came in the form of django-persistent-messages. It is built to work right on top of Django’s messaging system, so it worked seamlessly and was a no-brainer to set up. With django-persistent-messages in place, alerts would not disappear unless dismissed by the user, hopefully averting any potential baby-on-the-move mishaps.
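
Wired together, it looked roughly like this; the model and view are illustrative stand-ins rather than our actual schema, and the persistent_messages call follows the library’s pattern of piggybacking on django.contrib.messages:

from django.contrib import messages
from django.db import models
import persistent_messages

class MotionEvent(models.Model):
    # Illustrative schema: one row per motion alert raised by the Kinect process.
    detected_at = models.DateTimeField(auto_now_add=True)
    acknowledged = models.BooleanField(default=False)

def alert_on_motion(request, event):
    # Using a persistent_messages level keeps the alert on screen until the
    # user explicitly dismisses it, unlike a stock Django message.
    messages.add_message(request, persistent_messages.WARNING,
                         'Motion detected at %s' % event.detected_at)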

In the end, there were a few features we had to leave unfinished to get the project out the door on time, including audio monitoring and SMS messaging, but we were pretty happy with the results. As usual, all of our code is available on GitHub: How Is Babby.