Reflections on the UNDP NEC Conference 2022: Resilient national evaluation systems for sustainable development
The United Nations Development Programme (UNDP) hosted the National Evaluation Capacities (NEC) Conference in Turin, Italy, from 25 to 28 October 2022. There were more than 300 participants from over 100 countries, representing national governments, development partners and the international evaluation community. All these role-players gathered to discuss the development of resilient national evaluation systems that inform and support policy and other decision-making in a rapidly changing and ever-more complex world.
At this conference, I participated in a panel discussion on capacity development initiatives aimed at supporting National Evaluation Systems (NESs), with representatives from three other countries, namely Paraguay, Guatemala and Panama. I shared my experiences from the various capacity development initiatives undertaken by the South African Department of Planning, Monitoring and Evaluation (DPME) in collaboration with other African countries under the Twende Mbele initiative, the National School of Government (NSG), CLEAR-AA, and others.
The presentation was well received by participants and fellow panellists. Some of the issues that came up in the Q&A session included:
- The importance of multi-disciplinary skills and capacities in evaluation teams
- Agility of evaluation teams and skills required to undertake evaluations
- Ability of evaluators to translate evaluation findings and recommendations to respond to the needs of political principals
Technical evaluation competencies and soft skills needed to sustain the supply side of NESs
Technical competencies are key to evaluation practice. Key amongst these is the ability to systematically gather, analyse, and synthesise relevant evidence (data and information) on the evaluand (e.g. policy) from a range of sources, identifying relevant material, assessing its quality, and spotting gaps.
The following technical and soft competencies were recommended:
1. Design and Evaluation Methodology
This means understanding the knowledge base of evaluation (theories, models, including logic and theory-based models, types, methods and tools), how this shapes appropriate evaluation designs, and current issues in evaluation methodology. It also means using specific research methods and tools that address the evaluation’s research needs; these may be qualitative, quantitative or mixed methods and should be specified for each particular evaluation.
2. Data Collection and Analysis Competencies
The ability to systematically gather, analyse, and synthesise relevant evidence (data and information) from a range of sources, identifying relevant material, assessing its quality and spotting gaps. Equally important is the ability to interpret the evidence and reach valid, defensible, and transparent findings that address the evaluation questions. Critical thinking is key to analysis.
3. Resource Management
This is the ability to develop an appropriate budget for an evaluation and, when necessary, to negotiate evaluation budgets, with an understanding of how budgets influence evaluation designs.
4. Managing and Commissioning Evaluations
This requires effective leadership and managerial skills on the part of the commissioning institution. Team leaders need to have the ability to manage the evaluation process in such a way that it maximises the impact of the process as well as the quality of the evaluation product. Good leadership comes with the ability to motivate stakeholders to commit time and resources and work together to undertake the evaluation and ensure use.
5. Communicating Evaluation Findings and Promoting Use
This involves:
- the ability to guide others within and outside the organisation on how to reflect on and use evaluation findings effectively;
- understanding how to promote or support the use of evaluative evidence through follow-up and tracking of evaluation recommendations;
- mobilising stakeholders so that decision-makers and policymakers, the users of the evaluation report, act on the findings;
- developing management response and improvement action plan templates for programme or policy decision-makers, an important evidence-utilisation mechanism to be championed by evaluators and other evaluation stakeholders; and
- lastly, monitoring the resulting improvements.
6. Contextual and Theoretical Knowledge
The evaluator should understand the specific intervention, how and why it was developed and implemented. Furthermore, evaluators are expected to possess the following soft skills:
- Cultural sensitivity, which is the ability, as an individual evaluator or as part of a team of evaluators, to provide credibility in certain contexts and societal settings.
- Ethical conduct, which involves protecting confidentiality/anonymity of respondents, and obtaining informed consent from evaluation participants.
- Stakeholder management, including the ability to undertake suitable negotiation and conflict resolution processes to handle challenges emerging during the evaluation.
In closing, I believe that sustaining the supply side of NESs involves investing in and nurturing individual technical evaluation skills and soft skills, since systems are composed of various interconnected parts (stakeholders).
Call for Proposals
Consultant Needed
To develop a guideline on how to entrench the use of M&E evidence in government planning and budgeting processes, making sure that these two crucial functions are informed by the best available performance data on existing development plans, policies, programmes and projects.
Application Deadline: 15th November 2022
link: Full Terms of Reference
link: Call for Proposals
Send applications to:

Upcoming Evaluations Illustrate the Complexity of Use
One area of consensus within Africa’s growing national evaluation systems is the importance of a use-focused design. On the surface, this seems uncontroversial. Given each country’s significant development and governance challenges, identifying evaluations that can be used to improve public sector programming ought to be quite straightforward. But when you dig beneath the surface, as participants did in DPME’s recent design clinic, it becomes apparent that a use-focused approach to evaluation is not necessarily so simple.
First of all, when we talk about use, who is the user? Is it the department that owns the programme? This is the initial, simple answer, but, like many linear explanations, it might be wrong. This department, of course, plays an important role in the evaluation, and we always hope that the government uses the results. But it became clear in the design clinic that the lead department is, on its own, insufficient for a useful evaluation. Very often, an anchor department knows very well what the strengths and weaknesses of its programmes are, and does not need a large, expensive external evaluation to give it recommendations.
The reason the programme’s problems have not already been solved through good management is that the problems, the programme, and their results are complex. There might be a mismatch in mandates, or gaps in institutional coordination. Maybe the department hopes that an evaluation will help other stakeholders that are important to the programme better understand their role.
Then, what happens if there is disagreement among different role-players about which critical issues to evaluate? Officials in a department might need technical, process-level information from an evaluation to help resolve some sticky issues of implementation. Political leaders might want bigger-picture answers to strategic policy questions. But where does the ‘public good’ sit within all of this? Can the evaluation answer big questions of political strategy and trade-offs, technical ‘how to’ questions about making the bureaucracy a bit slicker, and also respond to citizens’ needs for more transparency and more inclusion in processes of governance?
The recent design clinic was full of ‘Aha!’ moments for most people in the room. My own revelation, as a human geographer and an evaluator, was finding words for what spatial planning and M&E have in common – they are both ‘invisible’ services that make everything else work better, and they grapple with very similar issues around institutional location, professionalisation, and a constant need for advocacy.
However, I think many of the small revelations participants had were around who experiences some of the problems in programme design, and who holds the solutions. This mismatch often points to areas where use becomes complicated. If one department sees a problem to solve and proposes an evaluation, but finds out that the evaluation results in fact need to be taken up by another department, the evaluation needs to do extra work to be useful to different audiences.
In a context where there are plenty of problems to go around, it is important that people working within evaluation systems give collective attention to what use means, and who users are.
DPME’s Design clinic confirms the importance of participation
The Department of Planning, Monitoring, and Evaluation (DPME) held its design clinic on the 6th and 7th of September 2022 in Pretoria. Facilitating it was a professional highlight – it was inspiring to see so many people who are grappling with complex development challenges and unwieldy government institutions come away feeling more empowered and clearer about how their work will benefit from an evaluation.
One thing that comes up in conversation at each design clinic is the importance of participation for the success of both the evaluation and the programme. The tricky thing about participation is that it is both a cause and an effect of programme performance. Good evaluations happen when a team of people sitting in different places in a sector come together to solve a collective problem around programme performance. When you get the right mix of people in a room, you can often see a programme improve from the evaluation design stage.
However, often when there’s a problem that needs to be solved in a programme, you see reflections of this problem in the evaluation design. Maybe people are frustrated with the programme, and as a response, they have made that project less central in their work day. Maybe people have written the problem off as ‘too hard.’ Maybe someone cannot convince their boss that the programme is important enough to release them from their regular work for two days.
Maybe someone knows there is a performance problem, but worries that the transparency in evaluations will lead to bad press. Maybe ownership and will is strong, but administrative processes have not allowed the right people to be identified and invited. Whatever the issue is, you can learn a lot about a programme from multiple stakeholder participation in an evaluation.
At the closing of the two day workshop, one participant asked about civil society – that stakeholder that must always balance being sufficiently constructive, and sufficiently critical. The question was about whether or how a constructive role could be carved out for civil society actors through an evaluation. DPME Deputy Director-General Godfrey Mashamba responded with a call to invite and include civil society actors in the evaluation process moving forward.
Coming from a background in civil society myself, my mind immediately went to all of the challenges of working with government – different paces of expected change, the many, many layers of bureaucracy and many accompanying meetings. A lack of trust on both sides. But, those difficult negotiation processes are exactly where evaluation shines. I think broader civil society inclusion in the evaluation system moving forward can only be a strength for governance in the sector.
SAMEA Conference 2022 Multipaper Session
On Thursday, 22 September 2022, Twende Mbele hosted a multi-paper session at the SAMEA Biennial Conference titled ‘Can Horizontal Leadership Approaches Augment the Practice of Monitoring, Evaluation and Learning in the Public Sector’.
Born from a research and discussion paper, the presentation by Philip Browne and Stanley Ntakumba aimed to stimulate further discussion on whether, within the constraints of public sector MEL practice, there are opportunities to establish forms of horizontal leadership that disrupt conventional and constraining leadership practices.
To gain further insight into the respective panellists’ presentations, please click the links below:
Philip Browne: https://bit.ly/3Tf3qOD
Stanley Ntakumba: https://bit.ly/3C7eOVz