Upcoming Evaluations Illustrate the Complexity of Use
One area of consensus within Africa’s growing national evaluation systems is the importance of a use-focused design. On the surface, this seems uncontroversial. Given each country’s significant development and governance challenges, identifying evaluations that can be used to improve public sector programming ought to be straightforward. But when you dig beneath the surface, as participants did in DPME’s recent design clinic, it becomes apparent that a use-focused approach to evaluation is not so simple.
First of all, when we talk about use, who is the user? Is it the department that owns the programme? This is the initial, simple answer, but like many linear explanations, it might be wrong. That department, of course, plays an important role in the evaluation, and we always hope that the government uses the results. But it became clear in the design clinic that the lead department is, on its own, insufficient for a useful evaluation. Very often, an anchor department knows very well what the strengths and weaknesses of its programmes are, and does not need a large, expensive external evaluation to give it recommendations.
The reason the programme’s problems have not already been solved through good management is that the problems, the programme, and their results are complex. There might be a mismatch in mandates, or gaps in institutional coordination. Perhaps the department hopes that an evaluation will help other stakeholders that are important to a programme to better understand their roles.
Then, what happens if different role players disagree about which critical issues to evaluate? Officials in a department might need technical, process information from an evaluation to help resolve some sticky issues of implementation. Political leaders might want bigger-picture answers to strategic policy questions. But where does the ‘public good’ sit within all of this? Can the evaluation answer big questions of political strategy and trade-offs, technical ‘how to’ questions of making the bureaucracy a bit slicker, and also respond to citizens’ needs for more transparency and more inclusion in processes of governance?
The recent design clinic was full of ‘Aha!’ moments for most people in the room. My own revelation, as a human geographer and an evaluator, was finding words for what spatial planning and M&E have in common: they are both ‘invisible’ services that make everything else work better, and they grapple with many very similar issues around institutional location, professionalisation, and the constant need for advocacy.
However, I think many of the small revelations participants had were around who experiences the problems in programme design, and who holds the solutions. This mismatch often points to areas where use becomes complicated. If one department sees a problem to solve and proposes an evaluation, but then finds that the evaluation results need to be taken up by another department, the evaluation needs to do extra work to be useful to different audiences.
In a context where there are plenty of problems to go around, it is important that people working within evaluation systems give collective attention to what use means, and who users are.
DPME’s Design clinic confirms the importance of participation
The Department of Planning, Monitoring, and Evaluation (DPME) held its design clinic on the 6th and 7th of September 2022 in Pretoria. Facilitating it was a professional highlight – it was inspiring to see so many people grappling with complex development challenges and unwieldy government institutions come away feeling more empowered and clearer about how their work will benefit from an evaluation.
However, one thing that comes up in conversation at each design clinic is the importance of participation for the success of both the evaluation and the programme. The tricky thing about participation is that it is both a cause and an effect of programme performance. Good evaluations happen when a team of people sitting in different places in a sector come together to solve a collective problem around programme performance. When you get the right mix of people in a room, you can often see a programme improve from the evaluation design stage.
However, when there is a problem that needs to be solved in a programme, you often see reflections of that problem in the evaluation design. Maybe people are frustrated with the programme and, in response, have made it less central in their work day. Maybe people have written the problem off as ‘too hard.’ Maybe someone cannot convince their boss that the programme is important enough to release them from their regular work for two days.
Maybe someone knows there is a performance problem, but worries that the transparency in evaluations will lead to bad press. Maybe ownership and will is strong, but administrative processes have not allowed the right people to be identified and invited. Whatever the issue is, you can learn a lot about a programme from multiple stakeholder participation in an evaluation.
At the closing of the two day workshop, one participant asked about civil society – that stakeholder that must always balance being sufficiently constructive, and sufficiently critical. The question was about whether or how a constructive role could be carved out for civil society actors through an evaluation. DPME Deputy Director-General Godfrey Mashamba responded with a call to invite and include civil society actors in the evaluation process moving forward.
Coming from a background in civil society myself, my mind immediately went to all of the challenges of working with government – different paces of expected change, the many, many layers of bureaucracy and many accompanying meetings. A lack of trust on both sides. But, those difficult negotiation processes are exactly where evaluation shines. I think broader civil society inclusion in the evaluation system moving forward can only be a strength for governance in the sector.
On Thursday, 22 September 2022, Twende Mbele hosted a multi-paper session at the SAMEA Biannual Conference titled ‘Can Horizontal Leadership Approaches Augment the Practice of Monitoring, Evaluation and Learning in the Public Sector’.
Born from a research and discussion paper, the purpose of this presentation by Philip Browne and Stanley Ntakumba was to stimulate further discussion on whether, within the constraints of public sector MEL practice, there are opportunities to establish forms of horizontal leadership that disrupt conventional and constraining leadership practices.
On Thursday, 22 September 2022, Twende Mbele hosted a panel discussion titled ‘Using experience to adapt a guideline on Rapid Evaluations in the public sector’ at the SAMEA Biannual Conference.
The purpose of this session was to update the existing Twende Mbele Rapid Evaluation Guideline with lessons learned from the adaptation and piloting of rapid evaluations in the partner countries. The session, facilitated by Mine Pabari, included presentations by the Chief Director of the DPME, Ms Thokozile Molaiwa, who spoke to the lessons emerging from South Africa, and Dr Ayabulela Dlakavu, who presented on the lessons emerging from three partner countries.
As part of the National Evaluation System (NES), Evaluation Design Clinics are used to facilitate stakeholder participation in the conceptualisation and design of the evaluations prioritised in the National Evaluation Plan (NEP). The last Design Clinic was held in September 2021 with custodian departments and proposing parties on the evaluations identified for 2021/22 and 2022/23. That Design Clinic facilitated stakeholder engagement on problem analysis, programme theory building or refinement where a theory existed, development of terms of reference (scope, focus, evaluation purpose, questions, and methodology), and identification of key stakeholders for each evaluation. The DPME will host a Design Clinic in 2022 for the NEP evaluations planned for 2023/24 and 2024/25.
The objectives of the evaluation Design Clinic are to ensure that the following are developed and achieved:
- A draft theory of change for the programmes/policies being evaluated in 2023/24 and 2024/25 is developed,
- Elements of a draft TOR, including the evaluation purpose and evaluation questions, are developed, and
- Workshop participants are introduced to the 1–1.5 page summary for evaluations included in the National Evaluation Plan.
Below are some blogs written about the design clinic. Click the links below for a quick read: