Upcoming Evaluations Illustrate the Complexity of Use
One area of consensus within Africa’s growing national evaluation systems is the importance of a use-focused design. On the surface, this seems uncontroversial. Given each country’s significant development and governance challenges, identifying evaluations that can be used to improve public sector programming ought to be quite straightforward. But when you dig beneath the surface, as participants did in DPME’s recent design clinic, it becomes apparent that a use-focused approach to evaluation is not necessarily so simple.
First of all, when we talk about use, who is the user? Is it the department that owns the programme? This is the initial, simple answer, but like many linear explanations, it might be wrong. This department, of course, plays an important role in the evaluation, and we always hope that the government uses the results. But it became clear in the design clinic that the lead department is, on its own, insufficient for a useful evaluation. Very often, an anchor department knows very well what the strengths and weaknesses of its programmes are, and does not need a large, expensive external evaluation to give it recommendations.
The reason the programme’s problems have not already been solved through good management is that the problems, the programme, and its results are complex. There might be a mismatch in mandates, or gaps in institutional coordination. Maybe the department hopes that an evaluation will help other stakeholders that are important to a programme to better understand their role.
Then, what happens if there is disagreement among different role players about what the critical issues are to evaluate? Officials in a department might need technical, process-level information from an evaluation to help resolve some sticky issues of implementation. Political leaders might want bigger-picture answers to strategic policy questions. But where does the ‘public good’ sit within all of this? Can the evaluation answer big questions of political strategy and trade-offs, technical ‘how to’ questions about making the bureaucracy a bit slicker, and also respond to citizens’ needs for more transparency and greater inclusion in processes of governance?
The recent design clinic was full of ‘Aha!’ moments for most people in the room. My own revelation, as a human geographer and an evaluator, was finding some words for what spatial planning and M&E have in common: they are both ‘invisible’ services that make everything else work better, and they grapple with very similar issues around institutional location, professionalisation, and a constant need for advocacy.
However, I think many of the small revelations participants had were about who experiences the problems in programme design, and who holds the solutions. This mismatch often points to where use becomes complicated. If one department sees a problem to solve and proposes an evaluation, but finds out that the evaluation results in fact need to be taken up by another department, the evaluation needs to do extra work to be useful to different audiences.
In a context where there are plenty of problems to go around, it is important that people working within evaluation systems give collective attention to what use means and who the users are.