The 4Es – why I like them, and how we can improve their use

I’ve been doing more work in the international development space lately. Value for Money (VFM) has received increasing attention here as public and private organisations strive to allocate resources in ways that maximise their contribution to the Sustainable Development Goals, and to demonstrate value back to their funders and taxpayers.

This blog is about one approach to VFM – the 4Es approach that the UK Department for International Development (DFID) and some other organisations use. This framework focuses on the three Es of economy, efficiency and effectiveness, and the overarching concept of cost-effectiveness that ties the three together. Increasingly, a 4th E has also gained prominence: equity.

Why I like the 4Es

The 3Es of economy, efficiency and effectiveness break down the concept of VFM into sections of the results chain (e.g., minimising wastage, maximising outputs, and maximising outcomes within available resources). This breakdown offers an explicit way of mapping VFM measures back to a program’s intervention logic. It can make VFM more manageable to assess, for example, when a program is in its early stages and has not yet produced outcomes, or where long-term outcome data are scarce.

I also like that the framework includes equity, as it reflects my view that VFM is about more than just ‘bang for the buck’ and involves balancing multiple aspects of good resource use. For more on this see my downloads page.

I also find much in DFID’s framework that aligns with my thinking on value for money. For example, DFID acknowledges that VFM is not just about doing the cheapest things – though it is important to understand and manage costs. It’s about results: being clear about what results we should expect, not just short-term tangibles but long-term sustainable benefits as well, which may be intangible. VFM isn’t just about what’s easiest to measure – though we do have to get better at measuring, including being clear about what we’re measuring, why we’re measuring it, and what the chosen indicators do and don’t mean. It requires a judgment (emphasis mine) about whether the results justify the costs, based on the strength of evidence and with assumptions made explicit. DFID also acknowledges that we need to be more innovative in assessing value – e.g., exploring ways of getting views and opinions from those who are intended to benefit.

While these aspirations signal an intent to strengthen the assessment of VFM, there is a lot of room for improvement in practice. In order to really progress VFM – get better at investing resources well, to improve people’s lives – the VFM conversation in the development sector needs to shift from a focus on accountability and cost containment to one of investigating VFM through an evaluative lens, for collective learning and improvement. Hard-nosed evaluation of VFM is critical to tackle the big questions about what sorts of interventions, investment strategies, and levels of investment, can best support the ongoing journey of sustainable development.

How we can improve their use

While the 4Es provide a conceptual foundation for systematically assessing and reporting on VFM, there are some practical challenges in their use. First, they are expressed at a generic level. On the one hand this is a strength, as it ensures they apply across a broad range of investments and are not locked into the ‘wrong things’ at program level. On the other it is a problem, because they lack specificity. Their use in a particular program and setting can be improved through careful explication of how these concepts relate to specific aspects of the program’s design and performance – what do the 4Es really mean in this program? (i.e., defining “criteria” for evaluation purposes).

Second, the 4Es alone do not provide a transparent basis for distinguishing ‘good’ VFM from ‘excellent’ or ‘poor’ VFM. These terms (“standards”) are also definable, and doing so provides greater clarity in the assessment and reporting of VFM.

Defining criteria and standards is a core part of evaluative reasoning, and enables us to make valid, credible, transparent judgments about VFM from the empirical evidence. This builds on the work of theorists like Michael Scriven, Deborah Fournier, and Jane Davidson, as described in my 2016 article in the American Journal of Evaluation. Like playing golf or guitar, it’s easy to get started and have a go at defining criteria and standards – but becoming proficient requires lots of practice. My colleagues and I wrote about our experiences here.
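To make the idea of criteria and standards concrete, here is a minimal sketch of a rubric for one hypothetical criterion (“efficiency”, measured as cost per output). The criterion, thresholds, and rating labels are all illustrative assumptions I’ve invented for this example – in a real evaluation they would be defined and agreed with stakeholders:

```python
# A minimal sketch of a VFM rubric for one criterion ("efficiency").
# All names and thresholds are hypothetical illustrations, not DFID
# definitions - in practice these would be negotiated with stakeholders.

# Standards: ordered rubric levels mapping cost per output (lower is
# better) to a rating label.
STANDARDS = [
    ("excellent", 80.0),   # cost per output at or below $80
    ("good", 100.0),
    ("adequate", 130.0),
]

def rate_efficiency(cost_per_output: float) -> str:
    """Return the highest standard whose threshold the evidence meets."""
    for rating, threshold in STANDARDS:
        if cost_per_output <= threshold:
            return rating
    return "poor"

print(rate_efficiency(75.0))   # excellent
print(rate_efficiency(120.0))  # adequate
print(rate_efficiency(150.0))  # poor
```

The point of writing the rubric down explicitly is transparency: anyone can see exactly what evidence would count as ‘good’ or ‘poor’ VFM before the judgment is made.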

When we define those criteria and standards, we need to be mindful that:

  • There may be other dimensions of VFM besides the 4Es. Relevance and sustainability are just two examples of other considerations that might help to determine how well resources are being allocated and used.
  • Overseas aid funding might not be the only input. Local resources invested in a program also have opportunity costs. Resources are more than just money – e.g., people also invest and risk their time, expertise, relationships and reputations.
  • Value is not just tied to results intended by the funder. Value to local beneficiaries and communities is important, and might look quite different through another cultural lens. Value can also be enhanced or diminished by unintended outcomes.
  • The 4Es reflect a simple input > process > output > outcome > impact model. The program we’re evaluating may be more complex than that. For example, emergent processes and outcomes may be more valuable than planned ones.
  • Value can come from unexpected quarters – e.g., investments to promote innovation involve some appetite for risk/reward rather than sticking with tried-and-true approaches, therefore there is value in learning about what works and what doesn’t work – i.e., there is value in failure.
  • Different forms of evidence can help us to understand value from different perspectives. We might want to think about using a mix of quantitative and qualitative evidence.
  • There may be trade-offs between criteria. An obvious trade-off is between efficiency and equity. Sometimes we might not be able to maximise both – we might have to decide how to balance them. A less-obvious trade-off is economy versus cost-effectiveness; it is important to use resources economically, but the best VFM doesn’t always come from buying the cheapest inputs – sometimes we have to invest more to create more value.
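The economy-versus-cost-effectiveness trade-off in the last bullet can be made concrete with a toy calculation. All figures below are hypothetical illustrations:

```python
# Economy vs cost-effectiveness: the cheapest input is not always the
# best value. All figures are hypothetical illustrations.

def cost_per_outcome(total_cost: float, outcomes: float) -> float:
    """Cost-effectiveness ratio: dollars spent per outcome achieved."""
    return total_cost / outcomes

# Two hypothetical delivery options for the same program.
cheap = cost_per_outcome(50_000, 400)     # low-cost provider, fewer outcomes
premium = cost_per_outcome(80_000, 800)   # costs more, delivers far more

print(f"cheap option:   ${cheap:.0f} per outcome")    # $125
print(f"premium option: ${premium:.0f} per outcome")  # $100
```

The more economical option costs less in absolute terms, yet delivers worse value per dollar – which is exactly why economy alone cannot settle a VFM judgment.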

[VFM cartoon]

We should also reach across disciplinary boundaries and make more use of economic analysis (where feasible and acceptable). Systematically evaluating costs and consequences yields insights that can’t be gained by looking at either factor in isolation. Quantitative modelling facilitates clear thinking about the relationships between costs and consequences, which can otherwise be difficult to intuit. Forecasting encourages systematic thought about a program’s future value, beyond the window of investment. Scenario and sensitivity analysis facilitate transparency and robust thinking about uncertainty and risk.
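As one illustration of the kind of quantitative modelling and scenario analysis described above, the sketch below computes a cost-effectiveness ratio under best-, base- and worst-case assumptions. The scenario names and all figures are hypothetical:

```python
# A minimal scenario/sensitivity-analysis sketch: cost per outcome under
# three sets of assumptions. All figures are hypothetical illustrations.

def cost_per_outcome(total_cost: float, outcomes_achieved: float) -> float:
    """Cost-effectiveness ratio: dollars spent per outcome achieved."""
    return total_cost / outcomes_achieved

# Scenario assumptions: (total cost, outcomes achieved).
scenarios = {
    "best":  (900_000, 12_000),
    "base":  (1_000_000, 10_000),
    "worst": (1_100_000, 7_500),
}

for name, (cost, outcomes) in scenarios.items():
    ratio = cost_per_outcome(cost, outcomes)
    print(f"{name:>5}: ${ratio:,.2f} per outcome")
```

Even a simple table like this makes uncertainty visible: decision-makers can see how sensitive the headline ratio is to assumptions about costs and results, rather than being handed a single point estimate.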

Economic analysis can also help inform future investments through benchmarking of costs and consequences. Of course caution is required in generalising from one setting to another, but there is real value in understanding cost structures and providing hard data demonstrating – as an inspirational client recently reminded me – that “if you want quality it’s not always cheap, if you want sustainable systems it takes time, if you want to work with disadvantaged populations you need to invest more than with highly educated ones”.

In the end, evaluative reasoning needs to preside over measurement. It’s important we get the measurement right – but indicators can’t make judgments. People make judgments – and explicit evaluative reasoning provides the means to make those judgments on an agreed basis, making use of multiple sources of evidence, and balancing multiple criteria.

June, 2016
