How to avoid toolsplaining: Thinking differently about social accountability

By Tom Aston, 28th June 2017

On the plane to Accra just over a week ago I read Rebecca Solnit’s Men Explain Things to Me (the origin of the term “mansplaining”), and it struck a chord with me. A colleague from Kenya who hadn’t heard the term before asked if there was such a thing as “white-splaining”. And, indeed, there is. But, recently, I’ve been concerned with another phenomenon: “toolsplaining”.

“Toolsplaining” is, as far as I can see, the phenomenon where we over-explain how clever a particular tool is, but forget to explain how it actually interacts with context and works together with other strategies and processes to achieve change. We often assume, usually for lack of information (or lack of careful investigation), that whatever tool we used must explain the change: that this tool (the scorecard) or that tool (the citizens’ charter) was the cause of the change.

In practice, especially for social processes, how that change happens is generally more complicated and nuanced. There are typically multiple causal pathways even within our own strategy that work together to influence change (and, of course, various complementary factors driven by other actors). And it’s often the informal micro-politics that matter, rather than your formal process.

So, we need to think differently.

How is our intervention contributing to change?

I was in Accra to support the five-year USAID-funded Ghana Strengthening Accountability Mechanisms (GSAM) project which aims to “strengthen oversight of capital development projects to improve local government transparency, accountability and performance.” In particular, what CARE wants to understand better is how our intervention is contributing to district assemblies’ responsiveness to citizens’ concerns in relation to the planning and implementation of capital investment projects.

We used contribution tracing to define a hypothesis and identify causal pathways for how change actually happened, rather than merely what the logframe says, or what a process map suggests ought to happen. To do this, the team looked at the process as designed (see the graphic below), but then traced real changes (eg district assemblies replacing inadequate building materials) back as a causal chain.

[Graphic: GSAM scorecard process flowchart]

Scorecards formally hinge on a public meeting (an interface meeting). But on various occasions we believed that changes had been triggered even before we’d held a public meeting (steps 6 or 13 in the graphic above), yet after we’d conducted site visits to monitor the quality of infrastructure (step 2). We’d established District Steering Committees composed of government actors, community leaders, and engineers (sitting, invisibly, between steps 1b and 2b), which were seemingly able to resolve some (but not all) problems without district town hall meetings, or even scorecard interface meetings.

Tracing the real process has therefore helped us think again about how, when, and where we really might have influenced change.
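
For readers new to contribution tracing: the method draws on Bayesian updating to weigh how much a given piece of evidence should raise (or lower) our confidence in a contribution claim. As a minimal sketch, with H standing for the claim (eg “our site visits triggered the replacement of materials” – an illustrative example, not a GSAM finding) and E for a piece of evidence, Bayes’ rule gives:

$$P(H \mid E) = \frac{P(E \mid H)\,P(H)}{P(E \mid H)\,P(H) + P(E \mid \neg H)\,\big(1 - P(H)\big)}$$

In contribution tracing terms, P(E|H) is the “sensitivity” of the evidence and P(E|¬H) its “type I error”: evidence that is very likely to be observed if the claim is true, and very unlikely otherwise, is highly probative.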

Inter-related pathways to change

Rather than a single pathway of information sharing or knowledge transfer, it was clear we had at least four inter-related change pathways for social accountability:

  1. providing financing to civil society organisations who prepared a district scorecard to get district assembly members to respond;
  2. getting district assembly members to release data and participate in the process;
  3. supporting citizens to monitor priority infrastructure projects and present their findings to authorities; and
  4. creating new spaces for dialogue between citizens and district assemblies about capital projects.

The team are now going to go out and find evidence to support their claim about how their strategies influenced change. But I just wanted to underline some of the learning:

  • Define terms (eg transparency, accountability, responsiveness) precisely so you know what change you’re actually going to measure and what data is relevant to your hypothesis.
  • Interrogate your assumptions periodically. Allow different staff members to challenge your logic. Don’t just rely on proposal writers or project managers.
  • Don’t bundle everything together. Or else, how will you understand the relationship between different components of your hypothesis?
  • Make sure your hypothesis is in order. Remember, logical steps follow chronologically...
  • Don’t toolsplain. Don’t get distracted by your hypothetical process maps or steps in your tools: in other words, consider the evidence, not what you hope your tool influenced.
Tom Aston

Tom was the monitoring, evaluation and research lead for the inclusive governance team. He particularly looked at the application of theory-based evaluation methods such as contribution tracing and outcome mapping.

He joined CARE International UK in 2012, providing support to the Latin America and Caribbean and the Middle East and North Africa regions, particularly in conducting political economy analyses and studies on social accountability and advocacy.

He has an MSc in Development Administration and Planning from University College London (UCL) and is doing a PhD on the political economy of cash transfers, with Bolivia as a case study. Previously he worked for CARE Bolivia and as a consultant for the ODI and UCL on issues of social protection and disaster risk reduction.

One good thing I've read

For those of you looking to unlock the activist inside you I recommend Rebecca Solnit’s Hope in the Dark: Untold Histories, Wild Possibilities.