Making the case for open human rights data


Within issue areas where impact is easily quantifiable, the case for open data is clear. It’s possible to measure how much money governments save when contract data is open, and to use those results as evidence for opening up data on government services. However, as citizens across the world demand better government data on police brutality and corruption, we are reminded that the task of government extends beyond service delivery to the protection and enhancement of its citizens’ human rights. Data-driven human rights monitoring would clearly benefit from more and better datasets from which to draw conclusions about human rights violations or the overall human rights landscape in a particular context. But the question of how open data can positively impact human rights monitoring is a much more sensitive one that has not been fully explored.

Throughout this post, we will seek to identify where the nexus between human rights and open data lies. We will outline where to find data for monitoring human rights and make the case that opening up these and other relevant datasets can strengthen that monitoring.

What data is relevant to monitoring human rights?

(Image credit: geralt/Pixabay)

Data collected by international institutions, governments and local or international civil society organizations can be used for human rights monitoring. Some of these sources are collected and published by international bodies, such as indexes that attempt to assess or compare the human rights situation in a particular context. The United Nations’ Human Development Index and data from the World Bank, along with reports from civil society groups such as Freedom House and Amnesty International, are examples.

Data collected by national and local governments on a routine basis may be used to construct the international human rights indexes referenced above, but may also be used on its own to monitor human rights violations and assess the status of human rights in a particular context. Examples include labor statistics and data on incidents of corruption or police violence.

The varying data from these sources are often used to create new human rights datasets or to form the foundation of data-focused monitoring projects. One prominent project of this nature is the CIRI Human Rights Data Project, which used U.S. State Department human rights reports and Amnesty International annual reports to create a customizable and downloadable dataset covering 15 human rights in 202 countries from 1981 to 2011. Other projects include the work of the Human Rights Data Analysis Group (HRDAG), which uses various sources to create scientifically verifiable reports on human rights violations, and the Human Rights Atlas, which takes data from various indexes to produce an interactive human rights map.

Despite a seeming wealth of relevant data for monitoring human rights, the existing data is marred by myriad problems, and the mechanisms governments and international institutions use to collect human rights data are weak.

Existing data is often incomplete, inconsistent and unreliable. At the national level in the U.S., for example, data on police shootings and deaths in police custody is practically non-existent. Some media outlets (like the Guardian and the Washington Post) and civil society groups have resorted to crowdsourcing the data from local media reports, which makes a complete national dataset nearly impossible to build. It was only in December 2014 that Congress passed a bill requiring the collection of such data, reviving a program that had long been dormant. The mechanisms to collect this data are simply not in place.

Another contributing factor to the reliability and consistency problem is the method by which the data is collected and documented. In the CIRI Human Rights Data Project, interviews were used to document human rights abuses. Interviews are an extremely valuable qualitative resource because they produce rich descriptions of instances where human rights were violated, but such data is particularly susceptible to human error. When translated into statistics, those errors, along with selection bias and duplicative reporting, can undermine the resulting conclusions, according to the Human Rights Data Analysis Group.
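To make the duplicative-reporting problem concrete, here is a minimal sketch, using entirely hypothetical records and field names, of how overlapping reports of the same incident might be flagged before any statistics are produced. Groups like HRDAG rely on far more rigorous record-linkage and estimation methods than this; the point is simply that raw report counts are not incident counts.

```python
# A minimal sketch of flagging likely duplicate reports before counting
# incidents. All records and field names are hypothetical and for
# illustration only.
from collections import Counter

reports = [
    {"source": "interview_a", "date": "2011-03-04", "location": "Town X", "name": "A. Doe"},
    {"source": "interview_b", "date": "2011-03-04", "location": "Town X", "name": "A. Doe"},
    {"source": "news_wire",   "date": "2011-03-05", "location": "Town Y", "name": "B. Roe"},
]

# Key each report on the fields that identify a distinct incident.
keys = [(r["date"], r["location"], r["name"]) for r in reports]
counts = Counter(keys)

unique_incidents = len(counts)
duplicate_reports = sum(n - 1 for n in counts.values())
print(f"{len(reports)} reports describe {unique_incidents} incidents "
      f"({duplicate_reports} likely duplicate report(s))")
```

Even this toy example shows three reports collapsing to two incidents; real sources disagree on dates, spellings and locations, which is why serious projects treat deduplication as a statistical problem rather than a simple lookup.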

A lack of data of sufficient quality for monitoring human rights is also a problem. During the years the CIRI Human Rights Data Project was published, the only source that released “systematic qualitative information about the same rights for each country annually” was the U.S. State Department’s human rights reports. Although the authors worked to cross-check the data against Amnesty International reports for American bias, those reports are limited to physical integrity rights (i.e. the rights not to be tortured, extrajudicially killed, etc.). The dearth of diverse, consistent and systematic qualitative data on human rights is not simply due to a lack of effort, but also to the nature of documenting human rights: rights beyond physical integrity are difficult to quantify. It may be easy to count the number of extrajudicial killings in a country, but, as the CIRI Human Rights Data Project contends in its defense, a far wider frame of reference is required to determine whether freedom of movement or women’s rights are being protected.

Finally, cultural bias can also creep into research and collection methodology, compromising the reliability of human rights data. The engine room points out that data used in various human rights studies, including those cited above, can hardly be viewed as neutral, nor can it be said to demonstrate cultural nuance: those who gather the data rarely engage the people in administrative capacities on the ground who produce it, working instead from an “outsider” or “expert” point of view. These sources are also susceptible to human error because of how the data is collected (i.e. interviews). As a result, the datasets that are generated show little appreciation for context. And while it is true that we should be skeptical of information provided by governments (some more than others), the “name and shame” dynamic through which human rights data is interpreted offers little in the way of solutions for the countries being reported on and risks simply reinforcing previously held views.

Can open data strengthen human rights monitoring?

Bad human rights data can threaten the credibility of human rights claims. However, opening up human rights data may offer solutions to the problems of reliability, consistency and cultural bias, and enhance evidence-based advocacy grounded in statistical reality.

Most obviously, open data principles and practices could eliminate some of the technical issues with the consistency and completeness of human rights data. In the U.S. police data example, one of the main issues that makes it difficult to quantify instances of police use of force over the last year (and one we’ve tried to address with our criminal justice data project) is that police departments across the country have different data standards and different methods of collecting and publishing information. The information is spread across disparate locations; open data practices would, in theory, ensure that relevant and complete data can be found in a central repository. Further, the accessibility of open data would make it easier to apply crowdsourcing and data verification techniques to keep testing accuracy and reliability and to enhance human rights monitoring. Data collection itself could also be expanded to include the use of mobile technology.
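To illustrate the standards problem, here is a minimal sketch, with entirely hypothetical export formats and field names, of how two departments’ differently structured use-of-force records could be mapped onto one shared schema before being loaded into a central repository. A real open data effort would agree on the shared standard up front rather than retrofitting it like this.

```python
# A minimal sketch of normalizing differently structured department exports
# into one shared schema. All formats and field names are hypothetical.
from datetime import datetime

def from_dept_a(record: dict) -> dict:
    # Department A exports "MM/DD/YYYY" dates and mixed-case city names.
    return {
        "incident_date": datetime.strptime(record["Date"], "%m/%d/%Y").date().isoformat(),
        "city": record["City"].strip().title(),
        "force_type": record["ForceType"].lower(),
    }

def from_dept_b(record: dict) -> dict:
    # Department B already uses ISO dates but different field names.
    return {
        "incident_date": record["incident_dt"],
        "city": record["municipality"].strip().title(),
        "force_type": record["category"].lower(),
    }

dept_a_export = [{"Date": "06/14/2015", "City": "springfield", "ForceType": "Firearm"}]
dept_b_export = [{"incident_dt": "2015-06-20", "municipality": "SHELBYVILLE", "category": "Taser"}]

# The central repository only ever sees records in the shared schema.
repository = [from_dept_a(r) for r in dept_a_export] + [from_dept_b(r) for r in dept_b_export]
print(repository)
```

Once records share a schema, crowdsourced corrections and automated verification checks (missing dates, impossible locations, duplicate entries) become far easier to run across the whole repository.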

Beyond the technical benefits, open data also has powerful potential to challenge the ‘outsider’ or Western methodologies that lead to cultural biases. Open data collection processes expand the pool of sources for human rights data collection, verification and analysis. Where projects like the CIRI Human Rights Data Project had to rely solely on U.S. reports, a crowdsourced approach combined with a rigorous verification process could encourage more direct engagement with groups on the ground, providing deeper insight into these issues, especially for rights that are not easily quantifiable.

There are, of course, genuine privacy and security concerns with applying open data practices, technology and collaborative data collection to human rights monitoring. Oppressive regimes, such as the Islamic State in Syria and Iraq, have been known to routinely identify and murder individuals who distribute videos or accounts of their atrocities, so identifying data sources can put lives at risk. Because of the sensitivity of the information, open data principles should be applied in a nuanced way that safeguards privacy. Who is at risk of harm if the data is released, and whether the benefits of using technology to document and distribute the data outweigh those risks, are questions that must be addressed. There are many resources that advocate for the safe collection and distribution of human rights data. In particular, the engine room’s Responsible Data Forum has created a code of conduct for digital crowdsourcing projects and a code of practice on anonymization and managing data protection risk, with the aim of handling this highly sensitive data safely at a time when we still don’t have the technology to completely anonymize data.
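As a rough illustration of what a nuanced, privacy-safeguarding release might involve, the sketch below (with hypothetical field names) drops direct identifiers, replaces the source’s identity with a salted one-way hash and coarsens dates before a record is published. It is illustrative only and no substitute for guidance like the Responsible Data Forum’s, since removing obvious identifiers does not guarantee a record cannot be re-identified.

```python
# A minimal sketch of redacting a sensitive record before publication.
# Field names are hypothetical; stripping identifiers like this does NOT
# guarantee the record cannot be re-identified.
import hashlib

SENSITIVE_FIELDS = {"reporter_name", "phone", "exact_address"}

def redact(record: dict, salt: str) -> dict:
    # Drop direct identifiers entirely.
    public = {k: v for k, v in record.items() if k not in SENSITIVE_FIELDS}
    # Replace the source identity with a salted one-way hash so analysts can
    # distinguish reports from the same source without knowing who it is.
    public["source_id"] = hashlib.sha256((salt + record["reporter_name"]).encode()).hexdigest()[:12]
    # Coarsen quasi-identifiers: keep only the month, not the exact date.
    public["incident_month"] = record["incident_date"][:7]
    public.pop("incident_date", None)
    return public

raw = {
    "reporter_name": "J. Example",
    "phone": "+000-0000",
    "exact_address": "12 Hypothetical St.",
    "incident_date": "2014-09-03",
    "province": "Example Province",
    "violation": "arbitrary detention",
}
print(redact(raw, salt="rotate-this-secret"))
```

Even a sketch like this forces the publisher to decide, field by field, who could be harmed by release, which is exactly the kind of question the Responsible Data Forum’s resources ask data holders to work through before opening anything up.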

Publishing all human rights data openly would be challenging given the capacity constraints on governments and international institutions; a lack of capacity is frequently cited by governments as an argument for not releasing open data. And in some cases, the current state of privacy and security technology makes doing so irresponsible.

That being said, upholding human rights is incredibly important. Given how much good data can do to substantiate cases of human rights abuse and to strengthen monitoring and advocacy, a premium should be put on safely publishing the data that can be released. As entities decide which datasets to open, these should be given high priority. Finally, we all have a stake in advancing human rights, so all stakeholders, from governments to activists and the private sector, should give more attention to addressing the barriers to opening more data around this issue.