Why are some cities so good at releasing open data? (Pt. 1)
Cities collect tons of data. Open Baltimore lists 1,847 dataset files; the Western Pennsylvania Regional Data Center catalogs 132,187 car accident reports; and on top of displaying large tables with interactive charts, the city of Mesa, Ariz., even makes a point of linking to the data that noncity agencies collect.
With so much data to compile, sort, and process, the U.S. City Open Data Census — an initiative that Sunlight is now revamping — looks at how well cities make some of their most important datasets accessible to the public. Open data isn’t just data that’s posted online — the census also notes whether datasets are free, openly licensed, easy to download, accessible without restrictions, and up to date, criteria that are linked to Sunlight’s open data policy guidelines. While many cities have made progress in recent years by creating open data portals, releasing comprehensive local maps and digitizing city archives, the City Open Data Census has seen a handful of cities consistently rise to the top. What about those cities makes them do well in the census? How did they ensure that their data are open and easily accessible to their communities?
Not all cities that release their data online have policies to govern open data access. However, the cities that do best in making their data open (as measured by the census) all have strong policies underlying their open data programs. Las Vegas is a case in point: Data Las Vegas earns the city top marks for a wide range of the datasets checked by the census, from budget information to data on parcels and permits. But Las Vegas’ data accessibility is rooted in its clear, strong policies. In 2014, “convenient, modifiable, and open formats” were written into the city’s code, and earlier this year the city added an assurance that data will be “placed into the public domain … [with] no restrictions or requirements placed on use.” Notably, the Las Vegas policy also calls for publishing any data created by private contractors on the city’s behalf.
Austin is a further example of both strong open data policy and practice. The Open Data Census gives Austin a nearly perfect record on its data accessibility, but beneath that accessibility is strong policy language that guides Austin’s open data sites. Austin’s policy guarantees that “[t]he City shall not assert any copyright, patent, trademark, or other restriction on government information”; Austin’s publicly accessible data, in turn, “shall be updated … as often as necessary.” One can almost feel the enthusiasm in Austin’s desire for its data to be “retrieved, downloaded, indexed, sorted, searched, and reused.”
Still, Austin’s policy doesn’t let the city’s open data team rest on their laurels. Instead, the policy requires the city to investigate new data technologies and get feedback from the public, continually strengthening Austin’s open data program. Perhaps as a result, Austin’s open data effort is easy to navigate, visually appealing, innovatively used and staggeringly comprehensive — with data on everything from potholes and floodplains to Achilles the cat.
The open data policies and practices of Austin and Las Vegas could still be improved. Austin could reinforce its policy by requiring public application programming interfaces (APIs), which help third parties automatically gather data from government sites. Although both cities generally do well at providing data in bulk, their policies could be further strengthened by specifically requiring the cities to provide bulk data downloads. Another easy improvement could be adding guidelines for how to cite city datasets.
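To make the API point concrete, here is a minimal sketch of how a third party might pull records from a Socrata-style portal such as Austin’s data.austintexas.gov. The dataset identifier and the row count below are placeholders for illustration, not a real Austin dataset; a real request would use an identifier from the portal’s API documentation.

```python
# Minimal sketch: fetch records from a Socrata-style open data API.
# The dataset ID is a placeholder for illustration -- substitute an
# identifier from the portal's API documentation.
import json
import urllib.parse
import urllib.request

PORTAL = "https://data.austintexas.gov"           # Austin's open data portal
DATASET_ID = "xxxx-xxxx"                          # placeholder dataset identifier
params = urllib.parse.urlencode({"$limit": 100})  # SODA parameter: first 100 rows

url = f"{PORTAL}/resource/{DATASET_ID}.json?{params}"
with urllib.request.urlopen(url) as response:
    records = json.load(response)                 # list of dicts, one per row

print(f"Fetched {len(records)} records")
```

A documented API like this returns machine-readable records on demand, which is why a policy requirement for public APIs — alongside bulk downloads for those who need an entire dataset at once — lets developers build on city data without scraping web pages.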
Both cities do well, but both are also big. In our second post, we’ll look at how municipalities with smaller populations can still succeed in making data open and accessible.