Reflections on DPME’s Capacity Development Initiatives since the inception of South Africa’s national evaluation system
The Department of Planning, Monitoring and Evaluation (DPME) recognized from the inception of its national evaluation system (NES) in 2011 that evaluation capacity was limited, both within government and externally. This has since been confirmed by the results of a survey conducted by DPME in partnership with the South African Monitoring and Evaluation Association (SAMEA, 2020), which revealed a critical deficiency of evaluation capacity in South Africa. The National Evaluation Policy Framework (NEPF) of 2011 provides for capacity development elements to support its implementation. Between the 2011/12 and 2018/19 financial years, DPME and its partners made significant progress on evaluation capacity development (ECD). Despite these notable achievements, however, ECD remains one of the most problematic components of the NES. DPME’s capacity development plan has been broad and has included the following elements:
1. Country Peer-to-Peer Learning
In the early years, before the inception of the NES, DPME made significant efforts to learn from other countries’ experience with evaluation in order to avoid reinventing the wheel. In 2011, study tours were undertaken to Mexico, Colombia, the United States and Australia, and the lessons gained from this exercise enabled DPME to progress much more quickly in its ECD endeavours. Likewise, DPME has hosted delegations from several countries and accepted international invitations to share South Africa’s experience of the NES. As part of peer-to-peer learning, DPME has been working with Evaluation Partners and at one point hosted the presidents of the M&E associations of Uganda and Kenya. Twende Mbele is a peer-learning partnership around M&E involving South Africa, Benin, Uganda, Kenya, Ghana and Niger. Through its peer-learning approach, this partnership has allowed DPME to further strengthen individual and institutional evaluation capacities.
2. Community of Practice through Interdepartmental and Provincial Peer-to-Peer Learning
DPME and the provincial Offices of the Premier recognized a need to create platforms for knowledge sharing and peer-to-peer learning. In the national sphere, national monitoring and evaluation forums, chaired by DPME, were held quarterly, while provinces created their own monitoring and evaluation forums catering for provincial departments. Since 2015, DPME has also organised annual national evaluation seminars and brown-bag sessions as key ECD and learning initiatives, providing further platforms for peer-to-peer learning and knowledge sharing. This learning practice continues under the new evaluation discourse and has since been extended to state-owned enterprises (SOEs) and municipalities.
Support through the direct experience of undertaking evaluations, in particular the inclusion of capacity development components in all evaluation terms of reference and service level agreements, is one initiative championed by DPME. Another entails ensuring that interns and junior officials work alongside the service providers for the duration of an evaluation. DPME has hosted evaluation interns since 2012, and most of them have secured permanent employment in both the public and private sectors.
DPME has also assigned evaluation directorates to departments and provinces specifically to provide technical support as and when requested, and serves on evaluation steering committees across government to provide similar support. Given the extension of the NES scope to SOEs and municipalities, DPME has provided technical evaluation support for evaluations conducted in these new spheres of the NES.
4. A Suite of Evaluation Courses
A suite of eight courses was developed to support the implementation of the NES. Most of these courses were rolled out in partnership with the National School of Government (NSG). This standing partnership is managed through a Memorandum of Agreement (MOA) between the NSG and DPME, which is currently being renewed following the expiry of the previous one in March 2019. A total of 1 989 government officials at national and provincial spheres were trained on these courses between 2012/13 and 2016/17. The majority of officials were trained in Theory of Change, Evidence-Based Policymaking, and the Managing and Commissioning of Evaluations. Other courses include Evaluation Methodology, Implementing Programme Planning, Technical Evidence-Based Policy Implementation and Deepening Evaluations. Overall, the evaluation of the NES found the training very useful; however, revision of the NSG courses is paramount, as most of them have been delivered for a long period without being updated. DPME commissioned an evaluation of the evaluation courses in 2020, and one key recommendation was the need to update and revise the courses to align with context-specific examples. To date, DPME has nearly completed the revision of the Evidence-Based Policy Making (EBPM) executive course. Three other courses, namely 1) Managing and Commissioning Evaluations, 2) Deepening Evaluations and 3) Selecting Appropriate Evaluation Methodologies, were revised in the 2021/22 financial year with Twende Mbele support.
The launch of the NEPF in 2019 brought on board rapid evaluation approaches. Twende Mbele assisted DPME in developing the rapid evaluation guideline, and DPME subsequently partnered with the NSG to develop a rapid evaluation course, which provides the skills required to conduct rapid evaluations.
6. Partnerships with Twende Mbele, CLEAR-AA, SAMEA and PSETA
The suite of evaluation courses referred to above was developed, co-funded and offered by DPME and CLEAR-AA between 2012/13 and 2014/15, and later outsourced to the NSG. In 2012, DPME signed an MOA with the South African Monitoring and Evaluation Association (SAMEA) to collaborate on capacity development, evaluation standards and competencies, and the co-hosting of SAMEA conferences. This MOA has since been renewed on a continuous basis. In 2017, both parties commissioned a feasibility study on professionalizing evaluation in South Africa, culminating in the Road Map for South Africa. Also in 2017, DPME and the Public Service Education and Training Authority (PSETA) signed an MOA aimed at improving evaluation skills in government. PSETA has established an M&E bursary programme for government officials in various spheres to study for a Postgraduate Diploma in M&E at the University of the Witwatersrand. To date, about 50 government officials have accessed the bursary and acquired postgraduate diplomas from the University of the Witwatersrand and Rhodes University.
A process is underway to ensure that SOEs and municipalities do benefit from this bursary scheme.
7. Evaluation Guidelines
DPME has developed 22 evaluation guidelines and 9 templates. Overall, these have been very helpful to departments and provinces and are often used as resource material during training and the evaluation process. The evaluation of the NES found, however, that newer evaluating departments and provinces, as new entrants, find the guidelines and templates difficult to put into practice. Departments and provinces with more experience, on the other hand, suggested that the guidelines need to be more flexible, and that additional guidelines are needed for undertaking complex evaluations and different kinds of evaluations.
The launch of the NEPF in 2019 presented an opportunity to revise all the guidelines and to introduce new ones. DPME has introduced new guidelines for the evaluation approaches that are a key highlight of the NEPF 2019, namely 1) the Rapid Evaluation guideline, 2) the Gender-Responsive Evaluation guideline, and 3) the Sectoral Reviews and Evaluations guideline.
Guidelines on emerging issues such as equity and climate change have been developed with the assistance of SAMEA’s 2021 hackathon process.
This blog is based on the evaluation capacity development (ECD) initiatives the author shared at the UNDP National Evaluation Capacities (NEC) Conference held in Turin, Italy, from 25 to 28 October 2022.
Reflections on the UNDP NEC Conference 2022: Resilient national evaluation systems for sustainable development
The United Nations Development Programme (UNDP) hosted the National Evaluation Capacities (NEC) Conference in Turin, Italy, from 25 to 28 October 2022. More than 300 participants from over 100 countries, representing national governments, development partners and the international evaluation community, gathered to discuss the development of resilient national evaluation systems that inform and support policy and other decision-making in a rapidly changing and ever more complex world.
At this conference, I participated in a panel discussion on capacity development initiatives aimed at supporting national evaluation systems (NESs), alongside representatives from three other countries, namely Paraguay, Guatemala and Panama. I shared my experience of the various capacity development initiatives undertaken by the South African Department of Planning, Monitoring and Evaluation (DPME) in collaboration with other African countries under the Twende Mbele initiative, the National School of Government (NSG), CLEAR-AA, and others.
The presentation was well received by participants and fellow panellists. Some of the issues that came up in the Q&A session included:
- The importance of multi-disciplinary skills and capacities in evaluation teams
- Agility of evaluation teams and skills required to undertake evaluations
- Ability of evaluators to translate evaluation findings and recommendations to respond to the needs of political principals
Technical evaluation competencies and soft skills needed to sustain the supply-side of NESs
Technical competencies are key to evaluation practice. Key amongst these is the ability to systematically gather, analyse, and synthesise relevant evidence (data and information) on the evaluand (e.g. policy) from a range of sources, identifying relevant material, assessing its quality, and spotting gaps.
The following technical and soft competencies were recommended:
1. Design and Evaluation Methodology
This covers understanding the knowledge base of evaluation (theories, models including logic and theory-based models, types, methods and tools), how this informs appropriate evaluation designs, and current issues in evaluation methodology. It also covers using specific research methods and tools that address the evaluation’s research needs, which may include qualitative, quantitative or mixed methods; these should be specified for a particular evaluation.
2. Data Collection and Analysis Competencies
This is the ability to systematically gather, analyse, and synthesise relevant evidence (data and information) from a range of sources, identifying relevant material, assessing its quality and spotting gaps. It equally includes the ability to interpret the evidence and reach valid, defensible, and transparent findings that address the evaluation questions. Critical thinking is key in analysis.
3. Resource Management
This is the ability to develop an appropriate budget for an evaluation and, when necessary, to negotiate evaluation budgets, with an understanding of how budgets influence evaluation designs.
4. Managing and Commissioning Evaluations
This requires effective leadership and managerial skills on the part of the commissioning institution. Team leaders need to have the ability to manage the evaluation process in such a way that it maximises the impact of the process as well as the quality of the evaluation product. Good leadership comes with the ability to motivate stakeholders to commit time and resources and work together to undertake the evaluation and ensure use.
5. Communicating Evaluation Findings and Promoting Use
This involves the ability to guide others within and outside the organisation on how to reflect on and use evaluation findings effectively; understanding how to promote or support the use of evaluative evidence through follow-up and tracking of evaluation recommendations; and mobilising stakeholders so that decision-makers and policymakers, as the users of the evaluation report, actually use the findings. Developing management response and improvement action plan templates for policy or programme decision-makers is an important evidence-utilisation mechanism to be championed by evaluators and other evaluation stakeholders, followed, lastly, by monitoring the improvements.
6. Contextual and Theoretical Knowledge
The evaluator should understand the specific intervention, and how and why it was developed and implemented. Furthermore, evaluators are expected to possess the following soft skills:
- Cultural sensitivity, which is the ability of an individual evaluator, or of an evaluation team, to establish credibility in particular contexts and societal settings.
- Ethical conduct, which involves protecting confidentiality/anonymity of respondents, and obtaining informed consent from evaluation participants.
- Stakeholder management, including the ability to undertake suitable negotiation and conflict resolution processes to handle challenges emerging during the evaluation.
To end, I believe that sustaining the supply-side of NESs involves investing in and nurturing individual technical evaluation skills and soft skills, as systems are composed of various interconnected parts (stakeholders).
Call for Proposals / Appel à Propositions
Consultant Needed
To develop a guideline on how to entrench the use of M&E evidence in the planning and budgeting processes of the government they focused on, making sure that these two crucial functions are informed by the best available performance data on existing development plans, policies, programmes and projects of governments.
Application Deadline: 15th November 2022
link: Full Terms of Reference
link: Call for Proposals
Send applications to:
Besoin d’un consultant
Pour développer une ligne directrice sur la manière d’ancrer l’utilisation des données de S&E dans les processus de planification et de budgétisation du gouvernement sur lequel ils se sont concentrés, en s’assurant que ces deux fonctions cruciales sont informées par les meilleures données de performance disponibles sur les plans, politiques, programmes et projets de développement existants des gouvernements.
Date limite: 15 Novembre 2022
lien: termes de référence
lien: appel à propositions
Envoyez les candidatures à:
Upcoming Evaluations Illustrate the Complexity of Use
One area of consensus within Africa’s growing national evaluation systems is the importance of a use-focused design. On the surface, this seems uncontroversial. Given each country’s significant development and governance challenges, identifying evaluations that can be used to improve public sector programming ought to be quite straightforward. But when you dig beneath the surface, as participants did in DPME’s recent design clinic, it becomes apparent that a use-focused approach to evaluation is not necessarily so simple.
First of all, when we talk about use, who is the user? Is it the department that owns the programme? This is the initial, simple answer, but like many linear explanations, it might be wrong. This department, of course, plays an important role in the evaluation, and we always hope that the government uses the results. But it became clear in the design clinic that the lead department is, on its own, insufficient for a useful evaluation. Very often, an anchor department knows very well what the strengths and weaknesses of its programmes are, and does not need a large, expensive external evaluation to give it recommendations.
The reason the programme’s problems have not already been solved through good management is that the problems, the programme, and its results are complex. There might be a mismatch in mandates, or gaps in institutional coordination. Maybe the department hopes that an evaluation will help other stakeholders that are important to the programme to better understand their role.
Then, what happens if there is disagreement among different role players about which critical issues to evaluate? Officials in a department might need technical, process information from an evaluation to help improve some sticky issues of implementation. Political leaders might want bigger-picture answers to inform strategic policy decisions. But where does the ‘public good’ sit within all of this? Can the evaluation answer big questions of political strategy and trade-offs, technical ‘how to’ questions about making the bureaucracy a bit slicker, and also respond to citizens’ needs for more transparency and more inclusion in processes of governance?
The recent design clinic was full of ‘Aha!’ moments for most people in the room. My own revelation, as a human geographer and an evaluator, was finding words for what spatial planning and M&E have in common: they are both ‘invisible’ services that make everything else work better, and they grapple with very similar issues around institutional location, professionalisation, and a constant need for advocacy.
However, I think many of the small revelations participants had were about who experiences the problems in programme design, and who holds the solutions. This mismatch often points to areas where use becomes complicated. If one department sees a problem to solve and proposes an evaluation, but then finds that the evaluation results need to be taken up by another department, the evaluation has to do extra work to be useful to different audiences.
In a context where there are plenty of problems to go around, it is important that people working within evaluation systems give collective attention to what use means, and who users are.
DPME’s Design clinic confirms the importance of participation
The Department of Planning, Monitoring, and Evaluation (DPME) held its design clinic on the 6th and 7th of September 2022 in Pretoria. Facilitating it was a professional highlight: it was inspiring to see so many people grappling with complex development challenges and unwieldy government institutions come away feeling more empowered and clearer about how their work will benefit from an evaluation.
One thing that comes up in conversation at each design clinic is the importance of participation for the success of both the evaluation and the programme. The tricky thing about participation is that it is both a cause and an effect of programme performance. Good evaluations happen when a team of people sitting in different places in a sector come together to solve a collective problem around programme performance. When you get the right mix of people in a room, you can often see a programme improve from the evaluation design stage.
However, often when there’s a problem that needs to be solved in a programme, you see reflections of this problem in the evaluation design. Maybe people are frustrated with the programme, and as a response, they have made that project less central in their work day. Maybe people have written the problem off as ‘too hard.’ Maybe someone cannot convince their boss that the programme is important enough to release them from their regular work for two days.
Maybe someone knows there is a performance problem, but worries that the transparency in evaluations will lead to bad press. Maybe ownership and will is strong, but administrative processes have not allowed the right people to be identified and invited. Whatever the issue is, you can learn a lot about a programme from multiple stakeholder participation in an evaluation.
At the closing of the two-day workshop, one participant asked about civil society, the stakeholder that must always balance being sufficiently constructive and sufficiently critical. The question was whether, or how, a constructive role could be carved out for civil society actors through an evaluation. DPME Deputy Director-General Godfrey Mashamba responded with a call to invite and include civil society actors in the evaluation process moving forward.
Coming from a background in civil society myself, my mind immediately went to all of the challenges of working with government: different paces of expected change, the many, many layers of bureaucracy with their accompanying meetings, and a lack of trust on both sides. But those difficult negotiation processes are exactly where evaluation shines. I think broader civil society inclusion in the evaluation system can only be a strength for governance in the sector.