The Fix-Rate: Integrity Action’s New Transparency and Accountability Impact Metric

Earlier this week Integrity Action’s Fredrik Galtung launched his working paper ‘The Fix-Rate: A Key Metric for Transparency and Accountability’ (PDF). Yesterday my colleague Lee Drutman and I had an interesting conversation about this work with Fredrik, and I wanted to share some thoughts about the Fix-Rate.

With this paper, Fredrik and Integrity Action take the position that the anecdote-heavy evidence base for transparency and accountability interventions needs more concrete measures. To that end, the paper proposes the fix-rate as a metric for measuring impact, and offers examples of its use in a variety of national and municipal contexts, largely focused on improvements in public service provision and infrastructure projects.

The fix-rate concept is fairly simple, but it can provide an important tether to discrete outcomes. To evaluate an intervention, Fredrik counsels, we must begin by identifying the problems that the intervention seeks to solve and defining what constitutes a ‘fix’ for each of them. The fix-rate, then, is the percentage of those problems that achieve a satisfactory ‘fix’. The higher the fix-rate for the problems an intervention targeted, the more successful the intervention. This allows us to compare interventions in a manner that is difficult with anecdote-driven research:

Using the fix-rate as a key unit of measurement makes it possible to compare the effectiveness of different treatments of intervention, and to assess whether the treatment is long lasting. In countries and government sectors where corruption and maladministration are widespread, the use of the fix-rate will also generate positive externalities as some of the examples given below will illustrate. (p. 4)
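
To make the arithmetic concrete, here is a minimal sketch of how a fix-rate might be computed from a list of monitored problems. The code, the Problem structure, and the example data are hypothetical illustrations, not part of Integrity Action’s methodology:

    # Minimal sketch: computing a fix-rate from a list of monitored problems.
    # The Problem class and example data are hypothetical, for illustration only.
    from dataclasses import dataclass

    @dataclass
    class Problem:
        description: str
        fixed: bool  # did the problem reach a satisfactory 'fix', as judged by stakeholders?

    def fix_rate(problems):
        """Percentage of identified problems that achieved a satisfactory fix."""
        if not problems:
            return 0.0
        return 100.0 * sum(1 for p in problems if p.fixed) / len(problems)

    problems = [
        Problem("Clinic built without the promised water supply", fixed=True),
        Problem("Road contract delivered sub-standard surfacing", fixed=False),
        Problem("School roof repair never completed", fixed=True),
    ]
    print(f"Fix-rate: {fix_rate(problems):.0f}%")  # prints: Fix-rate: 67%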

The crucial part of the process is defining the problems and defining a fix. In our conversation Fredrik stressed that these assessments are highly contextual, and that it matters a great deal whom you ask. For instance, a corrupt procurement process may not be a ‘problem’ for the government or the contractors (as everyone lines their pockets); the problem lies with the citizens who suffer from the wasted public resources and sub-standard contract outcomes. In defining problems and fixes, an inclusive process is key; the net for stakeholders must be cast wide.

The paper offers three key insights:

  • We need more measurement of outcomes. For our research to mature, we need real dependent variables. Fix-rate is perhaps a good place to start.

  • The bottom-up approach of having the community define the problems helps ensure that relevant needs and experiences are accounted for.

  • By measuring positive outcomes (via the fix-rate), rather than just acting as whistleblowers or watchdogs who point out problems (important though those functions are), we can begin to alter incentives for government actors in ways that may foster productive collaboration. The fix-rate offers a way to measure success that gives government officials something to point to as progress, thus encouraging action.

Fix-rate is a reductive measure, and makes no claim to be otherwise. This reduction in complexity is part of the point. The working paper stresses that it should be used in concert with a range of other variables about the problems being assessed. Like any other metric, once people start caring about it, the fix-rate invites manipulation. Choosing small, tractable tasks and ignoring larger, more daunting ones, for example, would yield a higher fix-rate but not necessarily a more successful initiative. Reporting other meaningful data alongside the fix-rate can reduce this problem.

While using the fix-rate metric, it is important to keep in mind the scale of the projects (either a monetary valuation or the number of people impacted), the number of projects being monitored, and information about the environment and causal mechanisms at play, both to prevent manipulation and to ensure that comparisons between intervention assessments are valid. Comparing the fix-rate of service provision projects like rubbish collection to the fix-rate of bribe- or kickback-focused initiatives, for example, is unlikely to be meaningful, as the types of problems and fixes being measured are not comparable. But it is likely to be useful to compare similar interventions, such as the response rates under two different FOI regimes (an example Fredrik brought up in our discussion).
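
As a rough illustration of that point, the hypothetical sketch below reports the scale of the monitored problems alongside the headline fix-rate, so that a portfolio of small, easy fixes is not mistaken for progress on larger ones; the data and units are invented for the example:

    # Hypothetical sketch: reporting scale alongside the fix-rate, so a high
    # fix-rate on small problems is not confused with progress on large ones.
    def summarise(problems):
        # Each problem is a dict with 'fixed' (bool) and 'value' (monetary scale, invented units).
        n = len(problems)
        total_value = sum(p["value"] for p in problems)
        fixed_value = sum(p["value"] for p in problems if p["fixed"])
        return {
            "problems_monitored": n,
            "fix_rate_pct": round(100.0 * sum(p["fixed"] for p in problems) / n, 1) if n else 0.0,
            "value_fixed_pct": round(100.0 * fixed_value / total_value, 1) if total_value else 0.0,
        }

    print(summarise([
        {"fixed": True,  "value": 5_000},    # small repair, fixed
        {"fixed": True,  "value": 8_000},    # small repair, fixed
        {"fixed": False, "value": 900_000},  # large infrastructure problem, unfixed
    ]))
    # A 66.7% fix-rate, but only about 1.4% of the monitored value fixed.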

As we progress with our case study work, we hope to be able to do some serious impact assessment ourselves, so these issues are very much on our minds. The fix-rate doesn’t fix all of the research design problems in the transparency and accountability space. But with this paper Integrity Action is initiating a serious conversation about assessment, and providing some valuable guidance on where to start.