The Apps for America Judging Process

Since announcing the winners yesterday, a few people have asked for notes about how the Sunlight Foundation selected the winners. The answer is: we didn’t. The Apps for America judging process went like this: we got five judges to agree to judge the contest – Adrian Holovaty, Peter Corbett, Xeni Jardin, Aaron Swartz, and myself. We built a very lightweight judging app (screenshot) and invited every judge to log in and rate each app according to the attributes we specified in the contest rules.

Four of those judges showed up to vote, and for the most part every application was rated by all four. The fifth judge, despite having agreed to participate, didn’t reply to emails or respond to our various inquiries, so we continued the process with four judges rather than replacing the missing judge at the last minute.

Each category was rated on a scale of 1 to 5, with 1 being the lowest score and 5 the highest. The categories, as a refresher from the rules, were:

  1. Usefulness to constituents for watching over and communicating with their members of Congress
  2. Potential impact of ethical standards on Congress
  3. Originality of the application
  4. Potential usability of the application
  5. Code quality of application

Code quality was perhaps the most difficult attribute for our judges to assess. One judge opted not to rate code quality at all and abstained, which left three judges scoring that category.

After the judging process was complete, we averaged all the scores for each application and listed the applications in descending order; that’s the order they appear in on the winners list.
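For the curious, here’s a minimal sketch of that arithmetic in Python. The app names, the scores, and the `None`-for-abstention convention are hypothetical illustrations, not our actual data or the judging app’s code:

```python
# Hypothetical illustration: average each app's ratings across
# judges and categories, skipping abstentions, then sort descending.

scores = {
    # app name -> per-judge, per-category ratings (1-5);
    # None marks an abstention (e.g. on code quality).
    "example-app-one": [5, 4, 5, None, 4, 5],
    "example-app-two": [3, 4, 4, 4, 3, 4],
}

def average(ratings):
    cast = [r for r in ratings if r is not None]  # ignore abstentions
    return sum(cast) / len(cast)

# Final ordering: highest average score first.
ranked = sorted(scores, key=lambda app: average(scores[app]), reverse=True)
for place, app in enumerate(ranked, start=1):
    print(place, app, round(average(scores[app]), 2))
```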

A few folks have mentioned that these apps were “Sunlight Foundation selected.” And while that’s partially the case (as a Sunlight employee, I cast one of the four votes, or 25%), the majority of votes cast were from non-Sunlight Foundation employees. If there was any bias in this group, I would suspect a “Python-friendly” slant among Aaron, Adrian, and myself, though I’ll point out that our first-place winner is a Ruby on Rails application and our second-place winner is a CodeIgniter application.

For the next round, I think we could stick with four judges and leave the fifth vote to the community at large somehow, though we’ll have to work that out so the vote rewards the merit of the application itself, not just how popular or well-organized the entrant is. I think we could also have a more open nominations process for choosing the judges.

While no one has complained about the judging process, I think it’s good to be clear about how applications were judged this time and to think about how the process can be improved for our next round. We’re just learning how to do this, and ideas are welcome!