We've got a problem here at Sunlight. A tiny kitchen and a large staff is a recipe for disaster. Each day a member of the staff is assigned to kitchen duty; the list on the fridge has a schedule of who is responsible for keeping the dishwasher full, running it as needed, wiping down the counters, and making sure that dishes are put in the appropriate cabinets. Some staff members do a really great job; others are inconsiderate assholes who let the kitchen go to pot (I admit to being one of the inconsiderate assholes on occasion). We're smart people; there has to be a better way to do this!
Rex the Cleanosaur is a better way.
For the 2011 Sunlight Labs Olympics, Team Awesome created Rex the Cleanosaur, a kitchen duty management application. Rex generates a schedule from current staff members and emails them with their assignments. If you receive an assignment for a day that you will be unavailable, just click the link provided in the assignment email to defer your duty. Another staff member will be automatically scheduled to take your day, but the next day they have kitchen duty, you'll do theirs. Just to keep everything fair, the web app makes deferments publicly available. Rex will chase down and eat habitual deferrers!
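The deferment swap described above can be sketched roughly like this (a simplified Python illustration, not Rex's actual code; the function names and data shapes are invented):

```python
from collections import deque

def build_schedule(staff, days):
    """Round-robin assignment of staff members to kitchen-duty days."""
    rotation = deque(staff)
    schedule = {}
    for day in days:
        schedule[day] = rotation[0]
        rotation.rotate(-1)  # next person moves to the front
    return schedule

def defer(schedule, days, day):
    """Swap the deferrer with whoever has duty next: that person takes
    the deferred day, and the deferrer covers their later slot, keeping
    the workload even."""
    i = days.index(day)
    if i + 1 >= len(days):
        return schedule  # no later slot to trade with
    schedule[days[i]], schedule[days[i + 1]] = (
        schedule[days[i + 1]], schedule[days[i]])
    return schedule
```

The key property is that a deferment never reduces anyone's total number of duty days; it only reorders them, which is what keeps the system fair.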
We also developed a tablet-based interface that will be hung in the kitchen to display the person who has kitchen duty that day. But what if that person is doing a terrible job? How awkward it would be to have to talk to them face-to-face and ask them to do better. No worries! The tablet interface has a "nudge" button that, when tapped, sends the person responsible for the kitchen an anonymous, passive-aggressive email telling them to get their act together. If they are doing a great job, you can "throw 'em a bone" to thank them for their excellent work.
The nudges and bones are used to calculate rankings of who is best and worst at kitchen duty. If the person responsible for kitchen duty has a high nudge count, you know that they will probably need some extra reminders throughout the day. Managers could even use nudge/bone counts as a factor in determining yearly raises! Okay, not really, but you can still berate people with high nudge counts for their lack of consideration for their fellow employees.
So what's next? We plan to add a way for the person with kitchen duty to rate the office as a whole on how well everyone held up their end of kitchen cleanliness (placing dirty dishes in the sink rather than the dishwasher, etc.). There are also some early plans for coupons: exemptions from kitchen duty that managers can give away as rewards for good work, or to pay off employees who stumble upon the secret evil plans their managers are working on.
Team Awesome consists of Chris Rogers, Drew Vogel, and Jeremy Carbaugh.
Unlike last year, I wasn't just a mere spectator of the Labs Olympics. I got to participate this year and take a couple days off from the usual watch-dogging we do here at the Sunlight Foundation. My team's goal was to take our combined skills of web development, research and storytelling and create a product very different from the web applications and data tools we usually strive for.
I was lucky enough to be on a team with Daniel Cloud, Ethan Phelps-Goodman and Eric Mill. Originally, the four of us struggled to come up with a project that would be topical, technical and entertaining. After an extended brainstorming session where we considered projects surrounding campaign finance, the London riots and natural disasters around the world, we decided to create the ultimate data visualization using (drum roll, please) Jell-O! To be clear, our idea was not inspired by the London artists that sculpt things out of Jell-O. Our use of the jiggly substance was completely coincidental.
As we talked about what we could build and what would be of interest, we kept in mind that this year’s competition, unlike last year’s, was not limited to building applications. Our end result could be, and was encouraged to be, tangible. So when we considered mapping areas recently hit by earthquakes -- D.C., Denver and California (of course) -- it occurred to us that we should not only map those areas, but also make those maps dynamic by making them light up, and vibrate, too! We investigated ways we could embed LED lights in a three-dimensional Jell-O mold, and quickly ran into several obstacles. It looked like the Peggy 2 was going to be our LED board until we realized we’d have to solder 625 LEDs. Given the time and skill level that would require, it didn’t seem realistic. We then gave up on embedding LEDs in Jell-O and decided to go for the more obvious choice: a layer of Jell-O in the shape of the United States on top of a horizontally-oriented LCD monitor.
So it was set: we were going to use sophisticated mapping software new to all of us (especially me, since I’m not a developer) to map earthquake and other government data and then distract our audience entirely by putting a sticky mass of gelatin on the table and somehow, someway, make it jiggle on cue.
To create the underlying map visualizations we used the TileMill mapping stack from Development Seed. We collected dentist and diabetes data from the Centers for Disease Control (CDC) to map the change in obesity rates over time and the number of dentists per capita. We mapped earthquake data using information from the U.S. Geological Survey (USGS).
TileMill lived up to its promise of providing an easy-to-use, complete solution for people with little mapping experience, which described most of us. Once the maps were designed, we exported them to static images and displayed them as a slideshow.
Once the slideshow was created, we chose a large monitor to display it on and wrapped the whole thing in Saran wrap. After we sculpted the states out of Jell-O, we placed them on the protected monitor and displayed the maps we had generated beneath the translucent dessert. Our early experiments involved installing vibration motors from a PlayStation 3 controller into a layer of Jell-O; while it was definitely a sight to behold, we didn’t get the range of motion we had hoped for. So when it came time to display the earthquake data, we had to resort to an oversized neck massager to get the Jell-O to jiggle. Ultimately, making it shake was not an easy task.
The final product looked very much like an early prototype for a more sophisticated device, the one we had imagined in our planning phases. Perhaps intrepid tinkerers will take what we learned and build upon it to form something bigger, better and more delicious.
I’ll spare you all the suspense and let you know that our team, the J-team, didn’t win. Keep checking this blog for a post by the winning team. Everyone who works in an office will appreciate their creativity and problem-solving ability.
For this year's Labs Olympics I was on an all-star team made up of Aaron, Alison, Tim, and myself, better known as the Labs Olympics Winners (note: we did not win; this was just our team name). Alison has a young baby at home and Aaron was out during our first brainstorming session for the birth of his niece, so it wasn't a big surprise that we wound up with a plan to make a sophisticated baby monitor. (It might come as even less of a surprise that we named it How Is Babby in honor of an infamous web meme.)
At first all we knew was that we wanted to use some random gadget or assortment of Arduino sensors to give geek parents a way to monitor their geek children, but it wasn't until we discovered a spare Microsoft Kinect sitting around the office that we realized exactly how far we could take it.
The Kinect is an impressive device, sporting 4 microphones, RGB and IR cameras, an additional depth sensor, and a motor that allows vertical panning. Getting the Kinect running on Linux is a fairly well documented process. We leaned heavily on instructions from the OpenKinect community; after doing the usual cmake, make, make install dance, things worked without issue on Ubuntu 11.04.
Also included in the OpenKinect source tarball are bindings for a half dozen languages, including Python. Having a Python wrapper made things incredibly easy to experiment with, as I had access to the Python OpenCV bindings for displaying image data and NumPy for manipulating the matrices that the Kinect driver returns.
With these tools in hand, we just had to decide what we actually wanted to get from the Kinect. We decided to take regular snapshots to present via a web interface, and also to have a mechanism for the Kinect process to notify the web client when there was motion. Snapshots were extremely easy: with just a single line of code, we were able to bring back the RGB image from the Kinect's main camera and convert it to a suitable format using OpenCV. Once we discovered that we could also bring in an IR image, we added a night-vision mode to our application as well. This way, the parent can either take a standard image in normal lighting or switch to the IR camera for the night. (Due to a hardware limitation of the Kinect, it is impossible to use the RGB and IR cameras at the same time.)
Given the uncertainty in the amount of available light and the fact that the depth sensor provided simpler data to work with (essentially a 2D matrix of depth values refreshed about 30 times per second), we decided to use the depth sensor to detect motion. NumPy's matrix operations made this a breeze. By averaging the depth of the frame and comparing the deviation across a range of frames, we could flag each individual frame as likely containing motion or not. Depending on the desired sensitivity of the alerts, the application would wait for anywhere from ten to thirty frames of consecutive motion before notifying the web application that the baby was on the move.
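A simplified version of that motion check, assuming frames arrive as NumPy depth arrays, looks something like this (an illustration of the approach, not our exact code; the threshold values are invented):

```python
import numpy as np

def detect_motion(frames, threshold=10.0, min_consecutive=10):
    """Flag motion from a stream of 2D depth frames.

    A frame counts as 'moving' when its mean depth differs from the
    previous frame's mean by more than `threshold`; motion is reported
    only after `min_consecutive` such frames in a row, which filters
    out single-frame sensor noise.
    """
    consecutive = 0
    prev_mean = None
    for frame in frames:
        mean = float(np.mean(frame))
        if prev_mean is not None and abs(mean - prev_mean) > threshold:
            consecutive += 1
            if consecutive >= min_consecutive:
                return True
        else:
            consecutive = 0  # stillness resets the streak
        prev_mean = mean
    return False
```

Raising `min_consecutive` makes the alert less sensitive, which is the knob the application exposed for tuning notifications.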
The Web Application
Unlike a traditional baby monitor, which has a dedicated viewing apparatus, our app would offer a web console that could be viewed from anywhere, including from a mobile device. The main features of the web app would be viewing, motion alerts, and configuration of features such as SMS notifications and night vision. The basic web app was built with Django, but we used a few add-on libraries to help accomplish our goals in the two days given for the contest.
We decided that the easiest way to get images to the user was to have the web page embed a single image that the monitoring software would update at a set interval. We used Socket.IO as a very lightweight solution to keep the image current. In the best case, i.e. when the user's browser supports it, Socket.IO will use WebSockets to keep the connection open, but it will degrade gracefully and fall back to AJAX or other means to get the job done.
Because our team lacked a designer, we used a CSS framework to take care of cross-browser issues and provide some pre-designed UI elements. Twitter just recently released their Bootstrap framework, so we went with it. It styled all of the UI elements on our site, including a navigation bar, alert boxes, buttons, and a form. Although we had some unresolved trouble with the form elements not lining up properly with their labels, it proved very easy to work with, overall.
The remaining technical component of the website was the AJAX alerts on motion events detected (and logged in a DB table) by the backend. There were a few criteria for how it needed to work, the most important being that alerts needed to be somewhat persistent to the user, so that a user couldn't miss an all-important alert saying that the baby was moving, just because they were clicking quickly between pages on the site, for instance. This meant that we needed something more sophisticated than Django's inbuilt messaging framework (django.contrib.messages). The answer came in the form of django-persistent-messages. It was built to work right on top of Django's messaging system, so it worked seamlessly and was a no-brainer to set up. With django-persistent-messages working, alerts now would not disappear unless dismissed by the user, hopefully averting any potential baby-on-the-move mishaps.
In the end, there were a few features we had to leave unfinished to get the project out the door on time, including audio monitoring and SMS messaging, but we were pretty happy with the results. As usual, all of our code is available on GitHub: How Is Babby.
As part of the 2nd Annual Labs Olympics, Team Leaf Peepers built NiceNeighbor, a network designed to put helpful neighbors in contact with each other.
It's been an interesting couple of months on the East Coast, and in the DC area in particular, with earthquakes, torrential rains and flooding, terror threats and even a 2-0 start to football season in the mix leaving Washingtonians confounded, confused and generally insecure. Amidst these troubling times we've observed a pattern: in the face of uncertainty, people tend to be jerks to each other. We hoard things, jam up the roads and grocery aisles, and get pushy and rude. However, when disaster strikes, we are helpful, compassionate neighbors, each pitching in to face hardship together. It was our team's goal to help encourage this second behavior pattern all the time.
Yes, that's the team, but wait: leaf peepers? I recently returned from vacation in Vermont, where Luigi assumed I'd be photographing leaves. Never mind.
Our juggernaut of raw, unstoppable productive force consisted of Luigi, Caitlin, Casey and myself, covering nearly every discipline represented in the Labs from design to research to front-end and back-end web development. With this veritable cornucopia of skills, we knew we had to bite off something significant.
Getting to work
Despite high confidence in our ability to execute, we were pretty strapped for ideas until late afternoon on the Friday before go time. We had been toying with a voice and SMS interface to guide people in rural areas without broadband internet toward the local public services they need, but Casey discovered in preliminary research that the infrastructure to make such an app worthwhile really wasn't there. We'd already scrapped some decidedly lesser ideas, such as a kitchen cleanliness tracker (pfff!), an RFID/motion sensor combo that played WWE-style entrance music for every Sunlighter as they came into the office each morning, and 'Auto-Tune the Law,' which would have set Sunlight Live to music, pitch-shifting testimony a la T-Pain, and which (sadly!) didn't appear to fit the timeline or budget.
So, after reaching consensus--and without a comfortable degree of consideration of our problem domain--we got cracking Monday morning. The plan was to use a plain old Rails/ActiveRecord/Postgres stack to deliver the web interface, and Twilio for SMS and voice. Casey took our basic concept of 'have' and 'need' and set forth on IA and taxonomy, while Caitlin began on a color palette and logo. Luigi dug into the Twilio API, and I spun our project up on Heroku and started modeling.
Good teamwork is everything when dealing with compressed timelines, and we did our best to keep in touch throughout the process. We set up an IRC channel on Freenode that we hung out in each day for answering quick questions, and escalated to face-to-face as needed. Heroku provides an IRC bot to notify the room of deployments, which came in handy for status tracking and letting team members know when to update their code. For copy and user stories, we worked with an EtherPad instance that Eric had stood up for everyone to use, and found it to be great for collaborative typing.
With the lofty goal of a backend, 3 interfaces and loads of location-aware goodies in just a couple of days, we had our work cut out for us. As mentioned above, we decided to let Rails and Twilio handle the interfaces, and even though I tend to prefer Python/Django, it felt good to have a chance to play with some of the less-familiar-to-me bits of Rails, such as single-table inheritance for 'needs' and 'haves,' and the scoped/nested routing patterns that are new-ish in Rails 3. For IP-to-location, geocoding and radius search I used GeoKit, which was a pleasure to work with, though it initially forced me to trade SQLite for Postgres in development.
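Under the hood, a radius search like GeoKit's boils down to a great-circle distance filter over stored coordinates. As a rough illustration of the idea (not GeoKit's actual code, and with invented function names), a haversine-based radius search looks something like this:

```python
import math

def haversine(lat1, lng1, lat2, lng2, radius=3958.8):
    """Great-circle distance in miles between two lat/lng points,
    using the haversine formula (radius = mean Earth radius in miles)."""
    phi1, phi2 = math.radians(lat1), math.radians(lat2)
    dphi = math.radians(lat2 - lat1)
    dlmb = math.radians(lng2 - lng1)
    a = (math.sin(dphi / 2) ** 2
         + math.cos(phi1) * math.cos(phi2) * math.sin(dlmb / 2) ** 2)
    return 2 * radius * math.asin(math.sqrt(a))

def within_radius(origin, points, miles):
    """Filter a list of (lat, lng) points to those within `miles` of origin."""
    return [p for p in points
            if haversine(origin[0], origin[1], p[0], p[1]) <= miles]
```

GeoKit generates the equivalent trigonometry as SQL so the database does the filtering, which is why it cares which database adapter you use.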
For the SMS and Voice features, Luigi evaluated Twilio and Tropo. Both are excellent telephony systems, with straightforward RESTful APIs. But Luigi figured out how to get a custom phone number through Twilio first (719-522-NICE), and so that's what he chose. When working with telephony systems, outgoing activity is straightforward: make an API call. But how does one handle incoming activity? Twilio expects developers to implement endpoints in their app using a custom XML-based markup language, TwiML, while Tropo allows developers to host scripts on Tropo's cloud. Tropo also supports an endpoint-based solution, similar to Twilio. On top of all that, Tropo offers a new service called SMSified that makes development even more straightforward if one only needs to support SMS.
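To make the endpoint model concrete, here is a minimal, hypothetical TwiML document of the kind an app would return when Twilio posts an incoming text message to it (the reply text is invented for illustration):

```xml
<!-- Hypothetical TwiML response served by the app's SMS endpoint.
     Twilio fetches this after delivering an incoming message and
     texts the enclosed reply back to the sender. -->
<Response>
    <Sms>Thanks! Your offer has been posted to NiceNeighbor.</Sms>
</Response>
```

The endpoint is just an ordinary web route in the app, so the same Rails stack that serves the site can serve the telephony interface.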
By the end of Monday, we had a solid start--Caitlin had a great logo that pulled inspiration from the letters 'NN' back-to-back to form a Mr. Rogers-esque cardigan, and we had hello world in SMS, an admin scaffold, an auth system, some models and a sense for how requests and offers would be delegated. But to poorly paraphrase Bret Michaels, every Monday has its Tuesday. While working with Caitlin to help her get started integrating her markup/CSS into the project, Luigi mistakenly deleted all of her work! The next several hours were spent attempting to reconstruct it from browser cache, which was reasonably successful, though very costly in time.
To add illness to insult and injury, Caitlin came down with food poisoning that night, leaving your Leaf Peepers woefully short-handed during Wednesday's pretend-like-you're-working-but-try-to-make-up-for-lost-time sprint to the finish.
For fun, if not profit
By our measure, we didn't quite make a minimum viable product, but the fruits of our effort stand nonetheless at http://niceneighbor.net, with code on GitHub. We stand by the idea and perhaps will develop it further at some point to get it over that elusive hump of 'usefulness.' Results aside, we had a great time working together and learning about bits of tech we don't normally use.
It's that time of year again...time for the 2011 Labs Olympics! This year, I was on a team with Andrew Pendleton of the Data Commons/Influence Explorer team and labs intern Matthew Gerring. Last year, I teamed up with Jeremy and Luigi to form the fierce (and winning) team, Blood Monkey. This year, we needed an equally intimidating team name and an equally creepy project to boot. So without further ado, team Baby in a Straight Jacket presents: Talk of the Town.
Talk of the Town is a corpus of closed captioning data from transcripts of municipal meetings from around the country. You can type in any word and see which cities or counties are talking about it, and how often. The size of the circle over each municipality corresponds to how frequently it was mentioned. Additionally, there's a sparkline underneath the word you searched for that shows the week-by-week change in frequency.
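The counting behind the circles and sparklines can be sketched in a few lines of Python (a simplified stand-in for the actual Talk of the Town code; the transcript data shape is assumed):

```python
import re
from collections import Counter, defaultdict

def weekly_frequencies(transcripts, word):
    """Count mentions of `word` per (city, week).

    transcripts: iterable of (city, week, text) tuples, where `week`
    is any sortable week identifier. Returns {city: {week: count}};
    per-city totals drive the circle sizes, and the per-week series
    drives the sparkline.
    """
    counts = defaultdict(Counter)
    pattern = re.compile(r'\b%s\b' % re.escape(word), re.IGNORECASE)
    for city, week, text in transcripts:
        counts[city][week] += len(pattern.findall(text))
    return counts
```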
Talk of the Town is powered by data from the nice folks at Granicus. Granicus is a vendor that provides a streaming video and document publishing suite to governments that want to increase their transparency by making public meetings more accessible to citizens. They were kind enough to let us use the beta version of their API to pull down data from their clients for the last six months. Luckily, they serve hundreds of municipalities across the country, so while the data isn't exhaustive, it's a nice sampling.
In addition to noting that the data does not contain every local government, users should also note that we haven't had a chance to scale the frequency of mentions by the frequency of meetings. However, you can still find some pretty interesting results (bonus: try searching for "earthquake" or "irene"). For instance, if you search for "taxes," you'll notice that mentions of taxes in Montgomery County are off the charts for a county that size (Montgomery County is the 13th wealthiest county in the country and is also home to a few Sunlighters, including myself).
So that was our two-day project for the 2011 Labs Olympics. Although it wasn't the winner, we're happy to have worked on something that takes opengov to the grassroots level, even if only experimentally.