The Four Es and VfM
Value for money (VfM) has received increased attention in the international development sector over the past 15 years, as public and private organisations strive to allocate resources to maximise their contribution to the Sustainable Development Goals and to demonstrate value back to their funders and taxpayers.
This blog is about one approach to VfM – the 4Es approach that the UK Department for International Development (DFID) and some other organisations use. This framework has traditionally focused on the three Es of economy, efficiency and effectiveness, and the overarching concept of cost-effectiveness that ties the three together. Increasingly, a fourth E, equity, has also gained attention.
Strengths of the 4Es
The 3Es of economy, efficiency and effectiveness break down the concept of VfM into sections of the results chain (e.g., minimising wastage, maximising outputs, and maximising outcomes within available resources). This breakdown offers an explicit way of mapping VfM measures back to a program’s intervention logic. It can make VfM more manageable to evaluate, for example, when a program is in its early stages and has not yet produced outcomes, or where long-term outcome data are scarce.
There is much in DFID’s framework that is aligned with our thinking on value for money. It is important that the framework includes equity, because VfM is about more than just ‘bang for bucks’ and involves balancing multiple aspects of good resource use. For example, reaching the most disadvantaged might involve additional costs.
DFID acknowledges that VfM is not just about doing the cheapest things – though it is important to understand and manage costs. It’s about results, including being clear about what results we should expect: not just short-term tangibles but long-term sustainable benefits as well, which may be intangible.
DFID acknowledges that VfM isn’t just about what’s easiest to measure – though good measurement is important (including being clear about what we’re measuring, why we’re measuring it, what the chosen indicators mean and what they don’t mean). It requires a judgement about whether the results justify the costs, based on the strength of evidence and making assumptions explicit. DFID does not, however, provide guidance on how such judgements should be made. OPM’s approach to assessing value for money (King & OPM, 2018) addresses this gap.
DFID also acknowledges that we need to be more innovative in assessing value – e.g., exploring ways of getting views and opinions from those who are intended to benefit.
While these aspirations signal an intent to strengthen the assessment of VfM, there is a lot of room for improvement in practice. To really make progress on VfM – to get better at investing resources well, and so improve people’s lives – the VfM conversation in the development sector needs to shift from a focus on accountability and cost containment to one of investigating VfM through an evaluative lens, for collective learning and improvement. Hard-nosed evaluation of VfM is critical to tackling the big questions about what sorts of interventions, investment strategies and levels of investment can best support the ongoing journey of sustainable development.
How we can improve their use
While the 4Es provide a conceptual foundation for systematically assessing and reporting on VfM, there are some practical challenges in their use. First, the standard definitions of each E are too generic. Their use in a particular program and setting can be improved by specifying how these concepts relate to specific aspects of the program’s design and performance – what do the 4Es really mean in this program? (i.e., defining “criteria” for evaluation purposes).
Second, the 4Es alone do not provide a transparent basis for distinguishing ‘good’ VfM from ‘excellent’ or ‘poor’ VfM. These terms (“standards”) are also definable, and doing so provides greater clarity in the assessment and reporting of VfM.
Defining criteria and standards is a core part of evaluative reasoning, and enables us to make valid, credible, transparent judgments about VfM from the empirical evidence. (This builds on the work of theorists like Michael Scriven, Deborah Fournier, and Jane Davidson, as described in this article in the American Journal of Evaluation).
Criteria and standards are often summarised in a matrix called a rubric. The Kinnect group wrote about rubrics in evaluation here.
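As a minimal sketch of the idea, a rubric can be represented as agreed criteria, each with descriptions of what performance looks like at each standard. The criteria, descriptions and synthesis rule below are entirely hypothetical – in practice these are negotiated with stakeholders, not hard-coded:

```python
# Illustrative VfM rubric. Criteria, standards and wording are hypothetical,
# shown only to make the structure (criteria x standards) concrete.
RUBRIC = {
    "efficiency": {
        "excellent": "Outputs delivered well above plan for the budget spent",
        "good": "Outputs broadly in line with plan and budget",
        "adequate": "Outputs somewhat below plan; corrective action under way",
        "poor": "Outputs substantially below plan with no clear remedy",
    },
    "equity": {
        "excellent": "Most disadvantaged groups reached at or above target",
        "good": "Disadvantaged groups reached close to target",
        "adequate": "Partial reach, with a credible plan to close gaps",
        "poor": "Disadvantaged groups largely unreached",
    },
}

# Standards ordered from worst to best, for comparison.
ORDER = ["poor", "adequate", "good", "excellent"]

def weakest_link(ratings: dict) -> str:
    """One of many possible synthesis rules: overall VfM is capped by the
    lowest-rated criterion. A real rubric makes its rule explicit and agreed."""
    return min(ratings.values(), key=ORDER.index)

print(weakest_link({"efficiency": "good", "equity": "adequate"}))  # prints "adequate"
```

The point is not the particular rule – weighted or deliberative syntheses are equally valid – but that making criteria, standards and the synthesis rule explicit is what turns evidence into a transparent judgment.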
When we develop VfM rubrics, we need to be mindful that:
- There may be other criteria of VfM besides the 4Es. Relevance and sustainability are two examples of other considerations that might help to determine how well resources are being allocated and used.
- Overseas aid funding might not be the only input. Local resources invested in a program also have opportunity costs. Resources are more than just money – e.g., by facilitating access, community leaders may invest and risk their time, expertise, relationships and reputations.
- Value is not just tied to results intended by the donor. Value to local beneficiaries and communities is important, and might look quite different through another cultural lens. Value can also be enhanced or diminished by unintended outcomes.
- The 4Es reflect a simple input > process > output > outcome > impact model. Effective development programs are more complex than that. For example, emergent processes and outcomes may be more valuable than planned ones. Systems thinking and complexity-informed approaches are important when evaluating complex programs.
- Value can come from unexpected quarters – e.g., investments in innovation involve some appetite for risk and reward rather than sticking with tried-and-true approaches, so there is value in learning about what works and what doesn’t – i.e., there is value in failure.
- Different forms of evidence can help us to understand value from different perspectives. We might want to think about using a mix of quantitative and qualitative evidence.
- There may be trade-offs between criteria. An obvious trade-off is between efficiency and equity. Sometimes we might not be able to maximise both – we might have to decide how to balance them. A less-obvious trade-off is economy versus cost-effectiveness; it is important to use resources economically, but the best VfM doesn’t always come from buying the cheapest inputs – sometimes investing more can create disproportionate increases in value.
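The economy-versus-cost-effectiveness trade-off can be made concrete with a toy comparison (all figures invented): the cheaper input is the more economical purchase, yet the costlier one returns more value per unit spent.

```python
# Hypothetical procurement choice. Figures are purely illustrative.
options = {
    "cheap_input":   {"cost": 100_000, "outcome_value": 150_000},
    "quality_input": {"cost": 150_000, "outcome_value": 300_000},
}

for name, o in options.items():
    ratio = o["outcome_value"] / o["cost"]
    print(f"{name:14s} value/cost = {ratio:.2f}")

# The cheap input maximises economy, but returns 1.50 per unit spent versus
# 2.00 for the costlier input - buying cheapest is not the best VfM here.
```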
We should also reach across disciplinary boundaries and make more use of economic analysis. Systematically evaluating costs and consequences yields insights that can’t be gained by looking at either factor in isolation. Quantitative modelling facilitates clear thinking about the relationships between costs and consequences, which can otherwise be difficult to intuit. Forecasting encourages systematic thought about a program’s future value, beyond the window of investment. Scenario and sensitivity analysis facilitate transparency and robust thinking about uncertainty and risk.
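As a hedged illustration of the scenario-analysis point (every number here is invented), a simple cost-effectiveness calculation can be run under low, central and high outcome assumptions rather than a single point estimate, making uncertainty visible:

```python
def cost_per_outcome(total_cost: float, outcomes: float) -> float:
    """Cost-effectiveness ratio: cost per unit of outcome achieved."""
    return total_cost / outcomes

# Hypothetical programme: fixed budget, uncertain outcome forecasts.
budget = 500_000
scenarios = {"low": 800, "central": 1_000, "high": 1_300}  # outcomes achieved

for name, outcomes in scenarios.items():
    ratio = cost_per_outcome(budget, outcomes)
    print(f"{name:8s} {outcomes:5d} outcomes -> {ratio:,.0f} per outcome")
```

Sensitivity analysis extends the same idea: vary one assumption at a time (unit costs, drop-out rates, outcome persistence) and observe how much the headline ratio moves, which shows where the VfM judgment is robust and where it hinges on fragile assumptions.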
Economic analysis can also help inform future investments through benchmarking of costs and consequences. Of course, caution is required in generalising from one setting to another, but there is real value in understanding cost structures and providing hard data demonstrating – as an inspirational client recently reminded me – that “if you want quality it’s not always cheap, if you want sustainable systems it takes time, if you want to work with disadvantaged populations you need to invest more than with highly educated ones”.
In the end, evaluative reasoning needs to preside over measurement. It’s important we get the measurement right – but indicators can’t make judgements. People make judgements – and explicit evaluative reasoning provides the means to make those judgements on an agreed basis, making use of multiple sources of evidence, and balancing multiple criteria.