A recent blog on the website of the UK development network, Bond, suggested using logframes with indicators, baselines and targets. This generated significant reaction on a discussion thread on the Platform for Evidence-based Learning and Communication for Social Change (Pelican). There’s also been significant progress over the last few years in M&E for advocacy, such as the work by ORS Impact to measure the difference made by ‘defensive advocacy’ (stopping bad things happening).
This blog aims to share some of our learning behind our recent blog on how to double your impact through advocacy and influencing. This should also be highly relevant to an emerging discussion on cross-learning between theory-based and qualitative methods for M&E.
Celebrating progress
Around 18 months ago, Save the Children held a meeting to discuss advocacy evaluation with a number of NGOs in the UK. While no-one had all the answers, it was a thoughtful discussion, and many of the organisations present had something helpful to contribute. There were already plenty of useful methods and tools out there. Last year, Save developed a useful light-touch framework, and Oxfam also finalised a meta-review of their policy influence, citizen voice and good governance effectiveness reviews.
At the time, CARE had its own aspirations to improve monitoring and evaluation for advocacy and influencing work, including developing a tool for Advocacy and Influencing Impact Reporting (AIIR). It also had an enormous database of impact reporting into which hundreds of projects reported each year. Learning from both Save and Oxfam, CARE progressively adapted the tool, incorporating aspects of outcome harvesting, process tracing and evaluation rubrics along the way. The tool asks projects or programmes that believe they have influenced the plans, policies, budgets or practices of others to explain the success, their level of contribution to it, and the team’s learning from the process. It also asks teams to estimate who should actually benefit from the success and, where possible, how many people are already seeing positive change in their lives.
Moving beyond anecdote
Given a surfeit of interesting data, CARE was in the rare position of being able to look at over 200 advocacy and influencing efforts that had reported success. We reviewed cases against CARE’s advocacy types, identified in our Advocacy Handbook, and Coffman and Beer’s (2015) advocacy strategy framework. Following Oxfam’s meta-review to maximise comparability, these tactics and strategies allowed us to develop an adapted fuzzy-set Qualitative Comparative Analysis (fsQCA) with truth tables covering:
- Level of outcome materialisation (i.e. to what degree the ‘win’ was leading to concrete outcomes or impacts)
- Significance of the advocacy win
- Level of influence
- Quality of evidence
- Duration of engagement
- Civic space context (taken from CIVICUS monitor).
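To make the truth-table idea concrete, here is a minimal sketch of how ordinal rubric scores on the six dimensions above could be calibrated into fuzzy-set membership and then dichotomised for an fsQCA-style truth table. The 0–4 scoring scale, the linear calibration, and the 0.5 crossover are illustrative assumptions, not CARE’s actual rubric.

```python
# Illustrative sketch (hypothetical data and thresholds): calibrating ordinal
# rubric scores into fuzzy-set membership for an fsQCA-style truth table.
# Dimension names follow the list above; the 0-4 scale is an assumption.

DIMENSIONS = ["outcome_materialisation", "significance", "influence",
              "evidence_quality", "duration", "civic_space"]

def calibrate(score, max_score=4):
    """Map an ordinal rubric score (0..max_score) to fuzzy membership in [0, 1]."""
    return round(score / max_score, 2)

def to_truth_table_row(case):
    """Dichotomise fuzzy memberships at the 0.5 crossover for a crisp truth table."""
    return {dim: int(calibrate(case[dim]) > 0.5) for dim in DIMENSIONS}

# One hypothetical case scored by a reviewer
example_case = {"outcome_materialisation": 4, "significance": 3, "influence": 2,
                "evidence_quality": 3, "duration": 1, "civic_space": 2}

print(to_truth_table_row(example_case))
```

In real fsQCA the calibration anchors would be set qualitatively per dimension rather than linearly; the sketch only shows the mechanics of moving from rubric scores to a comparable table of cases.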
CARE was keen to know not only whether teams had influenced a policy, plan, or a budget, but whether these had actually benefited real people downstream.
There were dozens of projects which had process outcomes, where public authorities had made some form of commitment (level 1, on our outcome materialisation scale). There were many more where a policy had been approved but without resources to implement it (level 2). There were others where the policy had resources, but where change was too abstract to measure (level 3). ‘Too abstract’ might include general themes or language in a strategy roadmap which would never allow you to figure out who might actually benefit from the success downstream.
Other examples included cases where CARE and partners had influenced governments or others to integrate gender equality strategies and approaches into their work: but measuring the difference this would make in terms of increased levels of gender equality in communities, compared to what would have happened otherwise, is well-nigh impossible (or at least not without significant resources for evaluative research that are almost never available).
What CARE was most interested in was advocacy or influencing wins that it may be possible to measure in the future (level 4) or wins with clear evidence that they are already making a difference, that is measurable (level 5). Twenty cases in the global South were judged to have reached at least level 4, where the significance of the outcome was considered high, and where CARE’s level of influence and the quality of evidence to support that was considered at least medium.
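The selection rule described above can be expressed as a simple filter: keep cases at materialisation level 4 or higher whose significance is rated high and whose level of influence and quality of evidence are at least medium. The field names, codings, and example cases below are hypothetical illustrations of that rule, not CARE’s data.

```python
# Hypothetical sketch of the case-selection rule described above.
# Ratings and field names are illustrative, not taken from CARE's dataset.

RANK = {"low": 0, "medium": 1, "high": 2}

def qualifies(case):
    """True if a case meets the bar: level 4+, high significance,
    and at least medium influence and evidence quality."""
    return (case["materialisation_level"] >= 4
            and case["significance"] == "high"
            and RANK[case["influence"]] >= RANK["medium"]
            and RANK[case["evidence_quality"]] >= RANK["medium"])

cases = [
    {"name": "A", "materialisation_level": 5, "significance": "high",
     "influence": "medium", "evidence_quality": "high"},
    {"name": "B", "materialisation_level": 3, "significance": "high",
     "influence": "high", "evidence_quality": "high"},  # fails on level
]

selected = [c["name"] for c in cases if qualifies(c)]
print(selected)  # -> ['A']
```

Encoding the rubric this way also makes the threshold choices explicit and easy to challenge, which matters when the claim is open to external scrutiny.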
We were also able to rank tactics based on the frequency with which they were mentioned and their causal significance. And given that any contribution claim should be open to external scrutiny, we came up with an evidence ranking using simplified process tracing tests and the level of independence of evidence. Few of the claims were backed up by external evaluations, and many would benefit from more external corroboration. Yet, the claims chosen were defensible, and we believe that was good enough for the purpose.
Playing SMART ping pong
Many advocacy colleagues were initially quite sceptical. They said they were ‘too busy’ and felt that M&E wasn’t their responsibility. However, we were able to flatter and cajole more than 200 initiatives into reporting success, which suggests they weren’t too busy after all, and that monitoring and evaluation really is everyone’s responsibility. There was even the odd conversion story. While some of those who reported clearly struggled, a number of staff were able to come up with pretty convincing explanations and evidence with a bit of coaching (what outcome harvesting calls ‘ping-ponging’).
Teams were able to report in English, Spanish or French. Plenty of help was required to support teams to make outcome descriptions SMARTer (particularly in making them specific and measurable enough). Some of those who reject the use of logframes and standard indicators for advocacy work might cringe at the mention of being SMART. But even methods that stress unpredictability, emergence, and non-linearity such as outcome harvesting recommend you make outcomes SMART. Ultimately, if you’re vague about your outcome, if it’s difficult to verify, if it’s not clear there’s a plausible link between your actions and outcomes, or if the outcome isn’t relevant to the impact you seek, then it’s not worth assessing anyway.
Numbers, numbers, numbers
Perhaps the most contentious and challenging part of the process was estimating credible numbers of people who may have benefited from policy, plan or budget changes, or could do so in the future. Nowadays, it’s deeply unfashionable to use the word ‘beneficiary’ in the development field, which is why we talk about ‘participants’ at CARE. But whatever the terminology we use, we should care about who actually sees concrete benefits in their lives (or not) from what we do.
While it is certainly true that numbers are often concocted to satisfy a donor, or commitments made to the board, the process of estimating numbers is also about (responsibly) figuring out whether people have actually benefited from what we spend our time doing. At the meeting with Save the Children, CARE was the only organisation at the time that had the goal (whether well or poorly founded) to estimate numbers of people who benefited from policy change.
It’s probably fair to say that teams struggled with this more than any other issue. When we first started playing ping pong, the team in Egypt listed all women in Egypt as beneficiaries of changes in the inheritance law. But when asked whether they would actually benefit from the change, two women on the Egypt team realised that they themselves would not be beneficiaries. We faced the same battle in Latin America, where it was initially claimed that all domestic workers in the region (about 20 million people) would benefit from changes in ILO legislation. Even before we had the reporting system in place, one of us (Tom) had written a blog about a domestic worker employed in Bolivia who wouldn’t benefit from a written contract he was working on at the time. This served as a useful example of how little control you often have over whether changes in policies or budgets actually lead to concrete benefits.
However, looking case by case, we found that it was possible to come up with some credible numbers to show the difference our influencing work was contributing to (as successes in advocacy and influencing work nearly always come from the efforts of many different actors, we talk about contribution, not attribution). For example, CARE’s work with partners in Peru to get child malnutrition set as a political priority and to influence government budgets, strategies and programmes has contributed to a more than halving of stunting levels in children under five over the last decade. The critical role of the CARE-led Child Malnutrition Initiative in pushing for these changes has been documented independently by the Institute of Development Studies and the World Bank, so we felt it was reasonable to claim that CARE and partners have made a significant contribution to improved nutrition security. Based on government statistics, this has seen reduced levels of stunting for over 690,000 children and their families – or around 2.3 million people.
Another case where we were able to quantify the numbers of people seeing positive change in their lives as a result of advocacy work came from CARE’s advocacy team in the US, which coordinated many humanitarian organisations in influencing the US Congress to provide nearly US$1 billion in supplemental funding for emergency famine relief in financial years 2017 and 2018. As a direct result of this additional funding, the US government supported at least 50.8 million crisis-affected people with humanitarian assistance, based on numbers reported by USAID. Additional famine-relief funding was also approved for financial year 2019, but the numbers of people supported by those funds have not yet been confirmed. Recipients of this funding would otherwise not have received assistance, and it was argued that this funding would not have been provided without the advocacy and leadership of CARE and its partners. Added to the over 45 million people for whom CARE and partners can show contributions to positive impacts over the last five years, this advocacy for the supplemental US funding has doubled CARE’s total impact numbers to over 95 million people.
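A quick back-of-envelope check shows the figures quoted above are internally consistent (numbers in millions, taken from the text; nothing new is added here):

```python
# Sanity check of the quoted figures (millions of people).
famine_relief_reach = 50.8    # supported via supplemental US funding (USAID-reported)
prior_five_year_impact = 45   # people with demonstrated positive impacts over five years

total = famine_relief_reach + prior_five_year_impact
print(total)  # 95.8 million -- consistent with "over 95 million"

# The advocacy win alone exceeds the prior five-year total, i.e. impact roughly doubled
print(famine_relief_reach > prior_five_year_impact)
```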
CARE now estimates that outcomes from these initiatives have so far improved the lives of more than 55.5 million people, with the potential for future impacts for a further 116 million people. This has helped make the case internally for greater focus on and investment in advocacy and influencing, rather than expecting significant change at scale to happen solely from (often smaller scale) projects in communities. These efforts to better understand – and where possible, quantify – the impact of our advocacy and influencing work have been critical to making this happen.