Preface
This is a book for decision-makers and for the evaluators, analysts and advisors whose work helps to inform their decisions. It offers an approach to making good use of evidence and values in support of decision-making that serves the public interest.
The approach offered in this book is intended for any policy, program or intervention where resources are used to pursue social goals – that is, to change society or the lives of people within it. Increasingly, these decisions are made not only by politicians and civil servants, but also by businesses, social enterprises and philanthropists.
Every decision we make comes at a cost – and not just in money. Every decision happens at a crossroads and involves choosing not to do something else. This is true of the way we each choose to invest our time, intellectual and emotional energy, and relationships. It also applies collectively, to shared resources such as natural, social and cultural capital. Trade-offs are a fact of life, and the cost of each trade-off is the opportunity cost of the forgone alternatives.
Over time, humankind has developed various tools to help us make good decisions, from simple checklists, scoring tools and weighing of pros and cons to complex decision trees, mathematical models and civic engagement approaches. One tool, in particular, is often preferred when it comes to informing resource allocation decisions: a method from economics called cost-benefit analysis (CBA). I once heard an esteemed professor of medicine proclaim to a room full of people that the only way to really tell whether something provides ‘value for money’ (VFM) is to conduct a randomised controlled trial (to determine whether the program causes outcomes) followed by a CBA (to determine whether the benefits of the program, valued monetarily, exceed the costs).
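For readers unfamiliar with CBA, its core decision rule can be sketched simply (the notation below is mine, for illustration only): monetise a program’s costs and benefits over a time horizon, discount them to present values, and compare:

\[ \text{NPV} = \sum_{t=0}^{T} \frac{B_t - C_t}{(1+r)^t} \]

where \(B_t\) and \(C_t\) are the benefits and costs in year \(t\), \(r\) is the discount rate, and \(T\) is the time horizon. A positive net present value (NPV) – equivalently, a benefit-cost ratio greater than one – is taken to indicate that the benefits exceed the costs.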
Such approaches do, of course, have advantages. But it would be a big leap to go from ‘here are some study designs that can reduce some kinds of bias’ to ‘here is the gold standard approach to rule them all’. All approaches have strengths and limitations. Those of us responsible for public-facing decision-making and analysis have a duty to understand the strengths and limitations of the tools we use, and a good place to start is a healthy scepticism of terms like ‘gold standard’ or ‘best practice’. This book explores the case for making greater use of CBA, the reasons why we should not regard it as the gold standard, and the potential to combine it with other methods for more useful insights and better decisions.
Out here in the real world of complex programs and imperfect data, what we need is a fit-for-purpose way of determining VFM under all sorts of different circumstances. For example, what if:
- Program design is constantly evolving during rollout, in response to changes in context, emergent opportunities and learning-by-doing?
- We want to assess VFM in the early stages of a program while implementation processes are still bedding down and it’s too early to measure outcomes?
- Outcomes are intangible – hard to value monetarily with the time and data available to us – e.g., improved teacher self-efficacy, fairer treatment of prisoners, or more sustainable management of an endangered species?
- We need to make a rapid and robust judgement about VFM, based on qualitative evidence or an expert stakeholder workshop?
- We need to transparently balance trade-offs, such as profits versus environmental harms?
- We need to deliberate on the costs and value of resource use for different groups with disparities in political power?
- A policy doesn’t provide a positive return on investment but instead creates social value through “equity, human dignity, fairness, and distributive impacts”? (Executive Order No. 13563, 2011, p. 3821)
This book offers a solution. It doesn’t offer a one-size-fits-all formula. Instead, it provides a framework and a set of principles for applying an appropriate mix of methods to any context.
Chapter 1 defines what ‘value for money’ means and why it matters. If we’re going to provide a clear answer to a VFM question, we first need to be clear about these things. Different theorists and organisations have defined VFM in different ways, but there are some pervasive themes. I will offer a unifying definition of VFM, positioning VFM as a shared domain of evaluation and economics.
Chapter 2 gives an overview of economic methods of evaluation, with a particular focus on cost-benefit analysis, the most comprehensive form of economic evaluation. Economic methods can often enhance the validity of an evaluation of VFM. In this chapter I will demystify this set of methods and provide a call to action for evaluators to become savvy about when and how to use them. I will also offer a few resources to set you on the path to learning how to design and undertake an economic evaluation.
Chapter 3 unpacks the limitations of economic methods of evaluation. I will argue that although we should use cost-benefit analysis more, we shouldn’t use it on its own. Most of the time it should be regarded not as the whole evaluation, but as something that contributes a piece of evidence toward an evaluation. We should use it together with our broader evaluation toolkit, with methods matched to context.
Chapter 4 sets out four principles explaining how to integrate economic evaluation with other methods, combining diverse criteria and evidence under the umbrella of an evaluation framework. I will connect these principles to the body of evaluation theory and explain the benefits of evaluating VFM in this way.
Chapter 5 provides a stepped model for putting the theory into practice. It emphasises approaches and methods that will already be familiar to many evaluators, and offers a concrete and intuitive process for evaluating VFM with fidelity to the four principles of the previous chapter.
Chapter 6 offers some tips and tricks for implementing the approach. Learning any new discipline is a cumulative process of mastery, and these lessons from years of collaborative practice-based learning with my colleagues can help to accelerate your learning.
Chapter 7 concludes with some closing remarks about what I hope we can achieve together by forming a community of practice around good VFM evaluation. I invite you to use the approach reflectively and reflexively, adapting it to meet your evaluation needs and circumstances, while remaining true to the core principles underpinning the model.
Throughout, I will provide examples illustrating what the approach looks like in practice in a range of different situations.
This book reflects my orientation as an evaluator. I am Pākehā – a New Zealander of European descent. I attended culturally diverse schools in low socioeconomic areas. From my earliest friends I learned that although we might all inhabit the same world, there are multiple ways of experiencing and understanding it. I was a postpositivist some 40 years before I knew what it meant. From my parents I learned values of social justice, collective responsibility, care for the natural environment, and creativity – values that were deeply embedded in my worldview long before I learned about things like ‘return on investment’.
Paradoxically, the 1980s were part of the backdrop for my formative years, exposing me to the individualistic, materialistic ideas that were taking hold in New Zealand as they were in the UK, USA and elsewhere. Gordon Gekko taught us that “greed is good”. Thatcher, Reagan, and New Zealand’s Lange Government told us that public services are a cost and governments should be small. Somehow, the idea that ‘if you work hard, the sky’s the limit’ morphed into a growing perception that ‘if you’re poor, it’s not my problem’ or even ‘it’s your own fault’. Around mid-1987, when my school organised a work experience day, I spent it with some of Auckland’s top stock traders, admiring their glamorous lifestyles. By the end of that day, I knew in my heart that it was not for me (a few months later they all learned it was not for them either, but I digress). What remained was a lingering notion that investing is a thing, that a small investment is a seed that, if planted wisely and nurtured well, can turn into something big, and that investment decisions can have positive or negative consequences.
Over the next 10 years I was many things, including fruit picker, commercial cleaner, butcher’s assistant, short-lived architecture student, call centre manager, flying instructor, and policy analyst. For some reason, the last one stuck. I went back to school, completing a Master of Public Policy. I learned the policy trade in government departments in New Zealand and Canada, before moving into consulting some 20 years ago. Policy analysts spend much of their time appraising policy options and providing advice to political decision-makers. They define a need or problem, scope options for addressing it, develop decision criteria, examine the evidence along with political and economic arguments for and against each option, and recommend which option is best. Years later I learned that I had been applying something called the General Logic of Evaluation, which underpins the approach in this book.
My initial grounding in formal evaluation theory and practice was economic; I attended a week-long course in Methods for the Economic Evaluation of Health Care Programmes at McMaster University, taught by Professor Michael Drummond and colleagues. Their ‘little blue book’ of the same title (Drummond, Sculpher, Torrance, O’Brien, & Stoddart, 2005) is to this day regarded as a seminal text and I highly recommend it. It was the first evaluation text I ever read and my only evaluation text for several years.
I did, however, find myself increasingly moving into the diverse field of program evaluation. It wasn’t rocket science. We worked with our clients and their stakeholders to agree what questions we needed to answer. We decided what aspects of performance we’d look at, delineating the scope of the evaluation. We opened a blank report template and worked out a structure of headings and sub-headings that would form the framework for a performance story. Then we worked out a plan for collecting the facts, figures, and stakeholder feedback we needed to write our report. Somehow it always came together into a pretty coherent story. Generally I think our reports were useful. However, our reports presented conclusions that probably fell short of being evaluative judgements, and certainly were not explicitly evaluative.
Then one day, after preparing a draft evaluation framework, I sent it around the evaluation team and asked how it could be improved. Kataraina Pipi said she thought it was pretty good, but I could improve it by more clearly defining criteria of merit. Me: “Criteria of what?” Kataraina: “You might like to meet Jane Davidson”. And so it was that I started to become a Real Evaluator, and discovered a treasure trove of literature debating what that might mean.
Throughout all this time, I was doing economic evaluations, and other evaluations – and sometimes both together. In 2012 I was invited to run a training workshop on VFM at New Zealand’s ANZEA[1] evaluation conference. I wanted to offer something more than just another course on cost-benefit analysis so I started grappling seriously with a notion that we might be able to systematically combine economic and other evaluation methods, using rubrics to bring the findings together into a final, overall judgement. I phoned some evaluation friends – Kataraina Pipi, Kate McKegg, Nan Wehipeihana, Judy Oakden, Jane Davidson, and Fiona Cram. We sat around my kitchen table. I presented a draft set of slides, which they duly shredded in the kindest and most constructive way.
After that, the lightbulb was on. This idea might work! We tried the approach on real-life projects. Providing two parallel evaluations – the economic one and the other one – was easy. Bringing them together was hard. We lacked a theoretical foundation for knowing whether it was valid to combine the theory and practice of program evaluation and economic evaluation. And with no guidance on how it should be done, we were sometimes flying in fog. A chance coffee with Patricia Rogers one fine day was a tipping point. Patricia convinced me to pursue the topic as a piece of doctoral research, carving out the time to unpack the problem and develop a solution under the guidance of top scholars.
This book builds on my PhD. I heard somewhere that a PhD starts with a question and takes a long journey to an answer; a book should start with the answer. That’s what I aim to do here. This book is for decision-makers and evaluation practitioners. I will outline relevant theory, but if you’d like to take the full journey of theory development, my thesis is available online.
[1] Aotearoa New Zealand Evaluation Association www.anzea.org.nz