My research suggests that most people are unaware of how projects are evaluated, and in particular of the systematic biases baked into these processes. So much so that there can be an emotional outpouring when these biases are revealed to be frustrating, rather than enabling, sustainable transformation. The devil is in the detail, but how many of us can be bothered to read the small print, trusting institutions and individuals to carry out these complex, multidimensional evaluations effectively?
The problem is that very few evaluation methods are designed to produce meaningful evidence of the sustainability consequences of a proposed intervention. Most are general-purpose models co-opted into the sustainability space with minimal adjustment. And flawed evaluation models have real consequences: authentic sustainable solutions may be rejected; solutions that transfer costs and risks to others, or into the future, may be falsely labelled sustainable; and solutions that make things worse may be given a ‘green for go’ signal.
These evaluation models can be impenetrable black boxes, controlled by gatekeepers who hope that those submitting to their evaluations will accept ‘computer says no (or yes)’ outcomes. However, with a little persistence you can lift the lid to explore their inner workings, biases and omissions. After gazing into many of these black boxes, here are some common problems. Typically they:
- underestimate the scale, scope, timing and distribution of unsustainable costs and risks,
- overestimate the costs of sustainable transformation and the benefits of unsustainable practices,
- underestimate the scale, scope, timing and distribution of benefits accruing from sustainable interventions.
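To see why these biases matter in combination, here is a minimal sketch with invented figures (not data from any real evaluation) showing how applying all three can reverse the ranking of two options:

```python
# Illustrative only: invented numbers showing how the three biases above
# can flip an appraisal, not figures from any real evaluation model.

def net_benefit(benefits: float, costs: float) -> float:
    """Simple net benefit: benefits minus costs."""
    return benefits - costs

# Unbiased view (hypothetical units).
sustainable = net_benefit(benefits=120.0, costs=100.0)    # +20: the better option
unsustainable = net_benefit(benefits=110.0, costs=100.0)  # +10

# Apply the three biases:
#  1. underestimate unsustainable costs and risks (count only 60% of them),
#  2. overestimate transformation costs (+30%) and unsustainable benefits (+10%),
#  3. underestimate sustainable benefits (count only 80% of them).
biased_sustainable = net_benefit(benefits=120.0 * 0.8, costs=100.0 * 1.3)
biased_unsustainable = net_benefit(benefits=110.0 * 1.1, costs=100.0 * 0.6)

assert sustainable > unsustainable                 # unbiased ranking
assert biased_unsustainable > biased_sustainable  # biased ranking flips
```

The individual distortions are each modest; it is their compounding across both sides of the comparison that turns the better option into the apparent loser.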
For example, renewable energy generation projects are often burdened with the costs of new distribution infrastructure. But these costs are compared with those of fossil fuel systems that rely on networks paid for by the public sector, and the comparison ignores decades of under-investment in those networks. The sustainable options are paying the price for subsidies enjoyed by fossil fuel companies.
New oil and gas developments in the UK were certified as net zero and approved using carbon calculations that ignore all greenhouse gas emissions from the use of these products. Can we be certain that clean cooling evaluations do not suffer the same problem, and that they include all scope 3 emissions?
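A sketch of the accounting trick makes it concrete. The numbers below are invented, but the structure is not: for oil and gas, the scope 3 category ‘use of sold products’ typically dwarfs operational (scope 1 and 2) emissions, so excluding it transforms the result.

```python
# Illustrative scope accounting with invented numbers (Mt CO2e over project life).
# Scope 1: direct operational emissions (extraction, flaring).
# Scope 2: purchased electricity.
# Scope 3: combustion of the sold oil and gas by end users.

scope_1 = 2.0
scope_2 = 0.5
scope_3 = 45.0

mitigation_claimed = 2.5  # hypothetical offsets and electrification of operations

# Counting only operations, the project balances to zero: "net zero".
operational_balance = scope_1 + scope_2 - mitigation_claimed

# Counting the emissions from using the product, it plainly does not.
full_balance = scope_1 + scope_2 + scope_3 - mitigation_claimed

print(f"operational balance: {operational_balance:+.1f} Mt CO2e")  # +0.0
print(f"full balance:        {full_balance:+.1f} Mt CO2e")         # +45.0
```

The same test applies to any evaluation, cooling included: ask which scope 3 categories the boundary excludes before trusting a ‘net zero’ label.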
There are plenty of red flags in food systems that suggest problems with how different infrastructure enhancements and investment plans are evaluated. These include persistently high levels of food loss and food waste, the coexistence of under- and over-nutrition, and poor geographic coverage of clean cold systems. This is despite the existence of cost-effective, value-creating solutions. The existence of the clean cooling network is itself evidence of the need for radical intervention in a system of systems that is not working.
Most evaluation systems struggle in complex systems because they were never designed to cope with these settings. But this evaluation crisis is resolvable by design. We know the problems, and we know the characteristics of evaluation systems that are fit for purpose. We need to design systems that:

- recognise the full scope of impacts,
- make visible the intersection of different crises and contradictions,
- evidence progress towards agreed system outcomes,
- value all material impacts, including all relevant costs incurred and costs avoided.

Risks and opportunities need to be considered from the perspective of all groups impacted by a system, winners and losers alike, while explicitly accounting for the redistribution of risks, costs and value added over appropriate time scales. The harsh reality is that if we don’t redesign how we evaluate interventions, we will be trapped in an unsustainable world while believing we are changing it.
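What would such an evaluation record look like? A minimal sketch, with invented names and numbers: each impact states who bears it and when, so the redistribution of costs, risks and value over time stays visible instead of being netted away into a single figure.

```python
# Minimal sketch of a fuller evaluation record. All groups, years and
# values are invented for illustration; real evaluations would populate
# this from evidence.

from dataclasses import dataclass

@dataclass
class Impact:
    group: str     # who bears or receives the impact
    year: int      # when it lands
    value: float   # positive = benefit or cost avoided; negative = cost or risk incurred

def present_value(impacts, rate=0.03):
    """Discounted total across all groups and years."""
    return sum(i.value / (1 + rate) ** i.year for i in impacts)

impacts = [
    Impact("investors",        year=0,  value=-100.0),  # capital cost of transformation
    Impact("local community",  year=5,  value=40.0),    # avoided health costs
    Impact("future taxpayers", year=20, value=150.0),   # avoided climate damage
]

total = present_value(impacts)
# Keep the distribution visible, not just the total.
by_group = {g: sum(i.value for i in impacts if i.group == g)
            for g in {i.group for i in impacts}}
```

The point of the structure is the `group` and `year` fields: a single net present value can hide exactly the cost transfers to others and to the future that the essay warns about, while the per-group breakdown keeps winners and losers in view.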