Evaluating value for money: there has to be a better way!

My PhD research is predicated on this notion.

Governments and other social investors want to know if their programs represent value for money. There are different ways to answer this question, none of them perfect. Prevailing views (at the risk of indulging in a little hyperbole) fall into three main groups:

  • Economists, who favour a particular brand of rocket science that involves quantifying everything that matters (and if it’s not quantifiable, then it probably doesn’t matter)
  • Evaluators, who come in many stripes but (at least compared to the economists) tend to temper any quantitative enthusiasm with an appetite for the underlying narrative, diversity and nuance
  • Politicians, who have limited patience for either rocket scientists or the socially conscientious, and are pretty sure they “know value for money when they see it”.

What if the first two could be combined in some way, to balance the strengths and counteract the limitations of each perspective? (I haven’t yet figured out what to do with the third).

That’s it, in a nutshell. Can combining explicit evaluative reasoning with economic methods of evaluation lead to better value for money evaluation, at least in some circumstances?

If you like details, read on…

 

What is value for money?

Value for money (VFM) is concerned with using resources well. 

At the core of both economics (the study of how people choose to use resources) and evaluation (the systematic determination of quality or value) is a shared interest in valuing resource use for social betterment.


It is only by examining resources invested, value derived from their investment, and by having some basis for reconciling the two, that questions about value for money can be comprehensively assessed. In other words, we need a good way to answer these three questions: What did we put in? What did we get out? Was it worth it? 
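To make those three questions concrete, here is a minimal sketch in Python. Everything in it is a hypothetical assumption for illustration: the figures, and the use of a simple benefit-cost ratio as the “was it worth it” test.

```python
# A deliberately simple take on the three VFM questions.
# All figures are hypothetical.

costs = 500_000       # What did we put in? (resources invested, $)
benefits = 650_000    # What did we get out? (monetised value of outcomes, $)

ratio = benefits / costs  # Was it worth it? One (narrow) way to reconcile the two.
print(f"Benefit-cost ratio: {ratio:.2f}")  # -> 1.30

# A ratio above 1.0 suggests the value derived exceeded the resources
# invested, but only if everything of value can sensibly be given a
# dollar figure, which is precisely the assumption questioned below.
```

A single ratio is one basis for reconciling inputs and outcomes, but as the rest of this post argues, it is rarely the whole story.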

We make evaluative assessments of VFM all the time as householders, usually based on snap judgements +/- mental arithmetic. But for really big investments, like the ones we taxpayers trust our Governments to manage, a small sprinkling of rocket science might not be such a bad idea.

Evaluation and economics tend to operate as complementary or competing disciplines rather than being integrated within an overarching logic. Combining them might be a way to provide more valid and warrantable evaluation of value for money in some circumstances.

 

Why does it matter?

There’s only so much money and time to go around. We want to spend it doing things that return the most value. Evaluation and economics can both support this endeavour.

On the one hand, economic techniques are good at explicitly taking resource use into account – but they apply a fairly rigid and narrow set of approaches to valuing that don’t always fit the context. On the other hand, evaluation offers a wider toolkit for valuing that can be adapted to make it fit for different purposes and contexts – but all too often, evaluations don’t take resource use into account.

 

What’s wrong with the techniques we’ve already got? A million economists can’t be wrong, can they?

Of course they can. Don’t you watch the news?

Actually, economic methods of evaluation can be very illuminating and I regularly use them as part of evaluating VFM. Economic methods can be combined with evaluative methods. Different techniques are fit for different purposes, and there are some circumstances where economics is not quite enough on its own. For example, cost-benefit analysis (CBA) and cost-effectiveness analysis (CEA), which I have described in another blog post, have limitations when it comes to evaluating social investments where there are multi-faceted outcomes, attribution problems, poor data, and a range of stakeholder perspectives about what ‘value’ means in context.

One of the stumbling blocks for CBA is the need to assign monetary values to intangible outcomes. While there are clever techniques that can be used to monetise anything of value to society (e.g., arts, sports, culture), these techniques are subject to some simplifying assumptions and other limitations that may skew results and compromise their validity in certain contexts. In other words, they might help us reach the wrong conclusions.
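As a toy illustration of how one simplifying assumption can skew a monetised result, consider the sketch below. It is entirely hypothetical: the stated willingness-to-pay figures, the household count, and the program are invented, and the mean-versus-median choice stands in for the many analytic choices these techniques involve.

```python
# Toy example: monetising an intangible outcome (a community arts
# program) from stated willingness-to-pay (WTP) survey data.
# All figures are hypothetical.

import statistics

# What surveyed households say they would pay per year; note the outlier.
stated_wtp = [5, 5, 10, 10, 10, 15, 20, 20, 25, 200]
households = 40_000

for name, aggregate in [("mean", statistics.mean), ("median", statistics.median)]:
    total = aggregate(stated_wtp) * households
    print(f"{name:>6} WTP gives monetised benefit: ${total:,.0f}/year")

#   mean WTP gives monetised benefit: $1,280,000/year
# median WTP gives monetised benefit: $500,000/year
# The same survey supports a 2.5x spread, depending on a single
# defensible-sounding analytic choice.
```

Neither answer is ‘wrong’; the point is that the choice has to be argued for, and a reader who never sees the alternative may never know how much was riding on it.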

Techniques like CBA and CEA can be performed to an exemplary standard without reaching any evaluative conclusions at all. For example, CEA produces a cost-effectiveness ratio but can leave it for the audience to work out whether the result is great, good, acceptable or poor. The assumptions and scenarios that feed into a CBA can result in a wide range of plausible results, leading to ambiguous conclusions – or worse, can be manipulated to produce ‘findings’ that support a particular position rather than illuminating genuine understanding of whether the program represents good or poor VFM.
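The point about wide ranges of plausible results is easy to demonstrate. The sketch below is a toy sensitivity analysis; every input (the cost, the contested benefit estimates, the candidate discount rates, the time horizon) is a hypothetical assumption for illustration.

```python
# Toy CBA sensitivity analysis: one project, several defensible
# assumption sets, a wide band of benefit-cost ratios.
# All figures are hypothetical.

from itertools import product

cost = 1_000_000                          # up-front cost ($)
benefit_estimates = [80_000, 160_000]     # contested annual benefit ($/year)
discount_rates = [0.03, 0.05, 0.08]       # the rate itself is contested
horizon_years = 20

def present_value(annual_benefit: float, rate: float, years: int) -> float:
    """Discounted sum of a constant annual benefit stream."""
    return sum(annual_benefit / (1 + rate) ** t for t in range(1, years + 1))

ratios = sorted(
    present_value(b, r, horizon_years) / cost
    for b, r in product(benefit_estimates, discount_rates)
)
print(f"Benefit-cost ratios: {ratios[0]:.2f} to {ratios[-1]:.2f}")
# Roughly 0.79 to 2.38: spanning 'clearly poor' through 'clearly good'
# VFM, with nothing in the arithmetic to say which scenario to believe.
```

An analyst reporting only one of those scenarios is not lying, exactly, but they are choosing the conclusion.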

Recent economic events and political leanings have brought an increased focus on VFM. But there is a lack of clarity or coherent definition in the public sector about what VFM even means. Together with the limitations of current economic evaluation techniques, this runs the risk that social investment decisions are skewed toward the production of outcomes to which economic returns are readily attributable, at the expense of other outcomes that are valuable to society in other ways.

 

So, what are you actually doing?

My PhD research at the University of Melbourne aims to develop a model for evaluating value for money in social programs, integrating economic methods within evaluation-specific methodology. I would like this research to contribute to the field of evaluation by enhancing the rigour and accessibility of VFM evaluation in situations where traditional economic evaluation techniques are infeasible, unaffordable or have serious limitations. I think that combining evaluation-specific approaches (e.g., rubrics) with health economic concepts and methods might be a way of doing this.

The model will identify:

  • Features of a good VFM evaluation (how would you know one if you saw one?)
  • Strengths and limitations of economic approaches against these features
  • How (in what ways) and when (in what circumstances) economics combined with other approaches to valuing would come closer to providing a good VFM evaluation.

I am calling it a “syncretic approach”. Syncretism is “the combination of different forms of belief or practice” (Merriam-Webster Dictionary). It is characterised by the merger and analogising of ostensibly discrete paradigms, “thus asserting an underlying unity” (Wikipedia). For example, syncretism can be observed in politics, religion, culture and the arts.

The use of mixed methods (e.g., quantitative and qualitative) in evaluation could be described as syncretic if “one method enables the other to be more effective and together both methods would provide a fuller understanding of the evaluation problem” (Greene & Caracelli, 1997, cited in Mertens & Hesse-Biber, 2013). Similarly, mixing evaluation-specific and economic methods might result in new insights that contribute to good resource allocation decisions. A syncretic approach to evaluating value for money requires a solid theoretical foundation as well as sufficient flexibility to respond to different contexts.

 

What will it add?

In our public policy consulting practice, my colleagues and I routinely draw on evaluation-specific methodology as well as the tools of health economics. This has inevitably resulted in some experimentation to design context-sensitive evaluations that mix economic and other methods.

A practical example of this hybrid approach in action is the use of an evaluative rubric that sets out qualitative descriptors of ‘excellent’, ‘good’, ‘acceptable’ and ‘poor’ value for money against multiple criteria such as efficiency, equity, environmental, social and cultural value.
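For readers who like things concrete, here is a minimal sketch of that idea in code. The criteria, the ratings, and especially the synthesis rule are illustrative assumptions, not the actual rubric we use in practice.

```python
# Sketch of an evaluative rubric: an ordered scale of qualitative
# performance levels, ratings per criterion, and one possible rule
# for synthesising them. All of it is hypothetical.

LEVELS = ["poor", "acceptable", "good", "excellent"]  # worst to best

ratings = {
    "efficiency":    "good",        # e.g., cost per outcome vs. comparators
    "equity":        "excellent",   # e.g., reach into underserved groups
    "environmental": "acceptable",
    "social":        "good",
    "cultural":      "good",
}

def overall_vfm(ratings: dict) -> str:
    """One illustrative synthesis rule: overall VFM is capped by the
    weakest criterion (a 'weakest link' rule, one option among many)."""
    return min(ratings.values(), key=LEVELS.index)

print(f"Overall VFM: {overall_vfm(ratings)}")  # -> acceptable
```

In real evaluations the synthesis rule is itself an evaluative choice, typically negotiated with stakeholders; the value of the rubric is that it makes both the criteria and that choice explicit and contestable.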

Based on this real-world tinkering and incremental learning, I can offer many anecdotal examples of satisfied clients, community members who felt validated and empowered, and funding decisions made on the basis of hybrid value for money evaluations.

But what we lack is an explicit theoretical foundation for this work. While informed reasoning and intuitive judgment will take us a long way in deciding when to do what, a more systematic framework is desirable to promote evaluation quality and consistency.

The tensions between economic and evaluation-specific approaches to valuing have been likened to the “quant-qual debate” (Julnes, 2012b) and the “causal wars” (Scriven, 2008). Both controversies pitted opposing world views against each other: one camp held that a particular set of methods (quantitative data analysis and randomised controlled trials, respectively) represented a gold standard, while the other argued that “methods should be matched to the situation” (Davidson, 2006).

In both cases, the latter camps’ appeals to a higher-order, overarching logic offered a basis for a set of principles framing the dominant methods as conditionally valid and sometimes appropriate contributors to mixed methods evaluation, rather than being unconditionally superior to any alternative methods. Although these debates are far from over, evaluators are at least armed with a robust framework to design and defend context-appropriate methodologies.

I hope that my research will make a similar contribution by providing an overarching model to guide the use of economic methods together with other approaches to valuing. Given the ubiquity of resource constraints and the ongoing need for decision makers to make good resource allocation decisions for the betterment of society, a theoretical foundation for a syncretic approach to evaluating value for money is needed.

Download my free e-book.

 

References

Adler & Posner. (2006). Cost-Benefit Analysis: Legal, Economic, and Philosophical PerspectivesChicago: University of Chicago Press;

Davidson, E.J. (2005). Evaluation Methodology Basics: The Nuts and Bolts of Sound Evaluation. Thousand Oaks, CA: Sage.

Davidson, E.J. (2006). The RCTs-Only Doctrine: Brakes on the Acquisition of Knowledge? Journal of Multidisciplinary Evaluation, No. 6.

Drummond, M.F., Stoddart, G.L., & Torrance, G.W. (1987). Methods for the Economic Evaluation of Health Care Programmes (1st ed.). Oxford: Oxford University Press.

Greene, J.C. (2002). With a splash of soda, please: Towards active engagement with difference. Evaluation, 8(2), 249-258.

Greene, J.C., & Caracelli, V.J. (Eds.). (1997). Advances in mixed-method evaluation. New Directions for Evaluation, 74.

Julnes, G. (2012a). Managing valuation. In G. Julnes (Ed.), Promoting Valuation in the Public Interest: Informing Policies for Judging Value in Evaluation. New Directions for Evaluation, 133, 3-15.

Julnes, G. (2012b). Managing Valuation in the Public Interest: Guiding the Use, Combining, and Sequencing of Multiple Approaches to Valuing in Evaluation. In Julnes, G., Schwandt, T., Davidson, J., & King, J., Valuing Public Programs and Policies in Complex Contexts: Balancing Multiple Values, Multiple Cultures, and Multiple Needs. Panel presentation, American Evaluation Association Conference, Minneapolis.

Mertens, D.M., & Hesse-Biber, S. (2013). Mixed methods and credibility of evidence in evaluation. In D.M. Mertens & S. Hesse-Biber (Eds.), Mixed Methods and Credibility of Evidence in Evaluation. New Directions for Evaluation, 138, 5-13.

Scriven, M. (2008). A Summative Evaluation of RCT Methodology: & An Alternative Approach to Causal Research. Journal of Multidisciplinary Evaluation, 5(9).

 

September 2012 / Updated May 2016
