Elements of a successful ECB intervention in Africa: Participants' perceptions
Introduction
The last decade has witnessed a growth in accountability, especially on the side of donors, who demand to understand how their funds were utilised to deliver results. This has led to an unprecedented increase in the demand for employees with skills in Monitoring and Evaluation (M&E). Such expertise not only improves project delivery; it also helps assess programme performance, impact, results and sustainability. At the continental level, the growing trend of results-oriented development has led to a mushrooming of M&E programmes. This is also a response to the fact that both donors and citizens expect their governments to be accountable. Yet despite the boom in M&E training offered both on-site and off-site, little research has been undertaken to solicit participants' perceptions of what works in M&E training programmes. This blog documents the perceptions of participants in the Development Evaluation Training Programme in Africa (DETPA) delivered by the Centre for Learning on Evaluation and Results (CLEAR-AA). A mixed-methods approach, entailing semi-structured interviews and a survey, was used to solicit the perceptions of the three cohorts enrolled in the programme between 2017 and 2019. It is authored by Mokgophana Ramasobana (Programme Convener) and Nagnouma Nanou Kone (2019 participant).
Overall, the research questions aimed to determine what worked, what did not work, what should be retained and what should not be retained in the programme. The findings are presented below.
1) Kindly list what was successful about the programme
(a) Top-class coordination process and organisational ownership
A well-thought-out implementation process and organisational ownership were demonstrated throughout the delivery of the programme. For example, one of the respondents indicated that the provision of the logistical note prior to commencement, and the entire coordination of the programme, were well executed by the programme team. Another respondent corroborated this sentiment: "I can't emphasise enough how organised the programme was. It was really a highlight; the support of the entire CLEAR-AA team showed, and I was impressed."
(b) Contextually fitted curriculum content
"The two tracks (fundamentals and advanced) approach, which catered for new entrants and experienced practitioners, was a highlight for me," said one respondent. This ensured that the modules were sequenced and structured appropriately and fit for purpose. Another respondent noted that the use of case studies, site visits, peer learning and group work were some of the learning approaches that scaffolded her skills and knowledge during the programme. A third gave a similar response, stating that "the Department of Planning, Monitoring and Evaluation (DPME) site visit was a highlight because it grounded the concepts discussed in the classroom in a practical National Evaluation System (NES)". Most importantly, the Made in Africa Evaluation (MAE) module was highlighted as one of the key components of the programme. "This module gave an account of Africa's evolution and how M&E fits into broader development," said one respondent. This is a particularly important aspect, as it reflects the objective of the programme: "to build a community of M&E professionals equipped with skills, knowledge and tools, which are fit-for-purpose to address local and global development challenges".
(c) The quality of facilitators
The use of experts with technical expertise in various fields of M&E brought both theoretical and practical depth, enriching the delivery of the programme. As an illustration, one of the respondents asserted that "facilitators presented concepts in depth as well as provided relevant examples". Another noted that "dual facilitation during sessions contributed to the success of the programme". It was further proposed that other institutions consider applying this approach in delivering training programmes, as such a mixed approach responds better to the needs of participants from diverse backgrounds, skill levels and countries.
2) Kindly list what was not successful about the programme
(a) Pre-survey
Some concepts, such as systems thinking, remain complex and difficult to decipher. One respondent therefore recommended that CLEAR-AA "consider conducting a pre-survey in order to gauge the level of participants' understanding prior to the commencement of the programme". This is envisaged to ensure that concepts are pitched at participants' levels of understanding and expectations, and that the modules are tailored to cater for diverse contexts.
(b) Time
The programme duration was considered short. One respondent argued that "in some cases lots of theories were covered with less time allocated for the application". Another participant mentioned that "there were time constraints witnessed during the site visit"; this was caused by traffic delays between Pretoria and Johannesburg.
(c) Standardise the facilitation style
A minority of the facilitators had poor facilitation skills. For example, one facilitator "was perceived to have not been eloquent and used a traditional lecturing style". It was therefore recommended that "CLEAR-AA standardise the facilitation style across all facilitators". Standardising the teaching style among facilitators will help curb such minor occurrences.
3) Kindly list what should be retained about the programme
(a) Curriculum and learning approach
The majority of respondents agreed that the curriculum structure and the learning approach applied in the programme should be retained. For instance, one participant mentioned: "Learning how to package evaluation reports, especially how to report to various stakeholders. This includes steps on how to commission evaluations. The decolonisation seminar was useful, therefore it should be retained. This will assist practitioners to adapt existing concepts and frameworks as well as empower them to navigate their practices. Most importantly, the site visit enlightened me because it illustrated how a complex M&E system works."
(b) Two different streams/tracks
The two streams offered by the programme should be retained. Emphasising this sentiment, one respondent said: "I like the fact that there are two streams/tracks. One focuses on theoretical concepts whilst the other pays attention to technical approaches." Beyond the classroom, one of the respondents proposed that "CLEAR-AA should think about a platform that connects the alumni as well as avails space to learn from each other beyond the programme".
4) Kindly list what should not be retained about the programme
(a) Mixed reactions
In answering the question of what should not be retained, responses varied. Most respondents felt that all components of the programme should be retained; as one put it, "the programme was well structured and learning occurred". On the other hand, some respondents raised issues relating to the heavy curriculum content. For example, one respondent mentioned that "the programme was quite content heavy. A multiplicity of events such as lunchtime lectures, evening activities and weekend tours were overwhelming, as they provided less time for processing the information taught." Based on these sentiments, it was proposed that CLEAR-AA "reduce the intensity of such activities [and] rather explore more social activities after the classroom so that people can relax and connect in a relaxed mode outside the formalities", in addition to extending the programme duration.
Although the findings of this study cannot be generalised, a few conclusions can be inferred that contribute to the growing discourse on evaluation capacity building in the region. Firstly, the authors argue that these responses reinforce the importance of the evolving field of M&E, and the urgency of customising training initiatives so that they are in sync with the skills needs of African practitioners. Secondly, there is an increasing acknowledgment that M&E is an evolving field in Africa; suppliers of training are therefore urged to be cognisant of the context in which they are operating. Lastly, the key conclusion drawn from this blog is that the continent requires more skilled personnel trained in M&E in order to track implementation and outputs systematically, and to measure the effectiveness of programmes. It therefore remains important that the training programmes provided are fit for purpose and contextually relevant.
By: Mokgophana Ramasobana and Nagnouma Nanou Kone
The Republic of Ghana Public Sector and M&E Culture
In 2017 the Government of Ghana added a new tool to its development process by establishing ministries for Planning (MoP) and for Monitoring and Evaluation (MoME). The MoME, the National Development Planning Commission (NDPC) and the local M&E association – the Ghana Monitoring and Evaluation Forum (GMEF) – have developed a National Monitoring and Evaluation Policy. Once Cabinet approves the Policy, it will be an important instrument for fostering a culture and practice of evidence-based decision-making in the public sector. The institutionalisation of M&E calls for better planning, use of resources, implementation, and learning from outcomes about what worked and what did not.
With the establishment of the MoME and the recent launch of the Ghana Institute of Management and Public Administration's (GIMPA) Masters programme for evaluators, a better coordinated ecosystem for evaluation appears to be forming. However, the government system still faces many challenges, such as:
- weak M&E capacities,
- low demand for, and utilisation of, M&E results,
- limited resources and budgetary allocations for M&E,
- non-compliance with M&E reporting timelines and formats by MDAs (Ministries, Departments and Agencies) and MMDAs (Metropolitan, Municipal and District Assemblies),
- inconsistent data quality,
- data gaps, and
- limited management information systems.
Given that building evaluative thinking and systems is a relatively new endeavour, research into the current M&E culture in the public sector would prove useful. It is for this reason that the Ghanaian government partnered with Twende Mbele to undertake a study establishing a baseline of the M&E culture in the public sector, against which changes in practices and attitudes can be measured over the coming years. The study involved interviewing 43 senior management officials from 14 ministries and two agencies using a structured survey. The results below are based on the Report on the Ghana M&E Culture Baseline Survey.
All (100%) respondents to the survey indicated that evaluation reports are not structured to hide results. Furthermore, most (95%) reported that evaluation results showing poor performance are not ignored, and 92.5% that senior management does not reject evaluation reports with findings of poor performance.
The MoME is currently working on frameworks and policies to standardise and improve the use of M&E across government. This includes the development of a mechanism for obtaining information to understand the causes of poor performance. The above suggests a positive management attitude and an environment conducive to practising M&E in the public service.
According to the survey results, recommendations are implemented, and learning outcomes are documented and used to improve future results: 92.5% of respondents stated this is always the case. The implementation of performance management and performance contracts with managers of the ministries has yielded some results, as performance measurement and results are now a basic requirement of senior managers, Ministers and Cabinet for decision-making. There was, however, a perception that officers responsible for poor performance are not sanctioned (32 of 38 respondents stated never), which needs to be addressed in order to motivate staff. The report therefore recommended that a rigorous sanctions and rewards regime be implemented to regulate staff performance in the Ghanaian public service.
An enduring M&E system cannot be developed overnight. However, the Ghanaian government has proven that it is committed to pursuing the desired change, building on the work of existing institutions including the NDPC, MoME and GIMPA. It has been a steady process, and one of the building blocks required was behaviour change. There is also a need for a paradigm shift towards incorporating M&E into the routine work of public service staff, and towards deepening the M&E processes by which public service activities are conducted, in order to build a stronger, learning-focused M&E culture in Ghana.
By: Rendani Manugu
Twende Mbele, South Africa, Country Coordinator
How "Practical" or "Useful" are Rapid Evaluations, Really?
There is no doubt that evaluations can play an integral role in decision-making. On the one hand, when the information is accurate, detailed and insightful, evaluations can support effective programme decisions, such as whether to continue, scale up or discontinue an intervention, or even whether to intervene at all. On the other hand, it is well understood that producing reliable and insightful evaluations requires significant investment.
Besides the financial burden associated with collecting programme data, evaluations can require high levels of technical skill and long periods of time. However, there are situations when decisions have to be made under time constraints and with limited access to technical skills. Indeed, it has been argued that high pressure to deliver on development imperatives, limited technical skills, constrained access to financial resources and weak information systems are common features across African public institutions. As such, producing good evaluation information timeously can be a challenge.
To counter expensive and complicated studies with long turnaround times, rapid evaluations have been strongly promoted as a more practical alternative. Rapid evaluations are meant to strike a balance between maintaining high levels of investigative rigour and containing the cost and technical requirements of evaluation.
"The elephant in the room", though, is the reminder that evaluations are a means to a greater end: the evaluation report itself is not the end. In other words, the real judgement of rapid evaluations is whether they truly accommodate the key functions of a good evaluation. You see, praxis teaches us that the value of an evaluation must be demonstrated by its role across the programme or policy cycle. And if we are going to rely on rapid evaluations to be in any way helpful in decision-making, I argue, it is only fair to assess their worth against the same functions. To facilitate my argument, I present the following five key evaluation functions:
1) The Improvement function: enhancing the efficiency and effectiveness of the chosen programme strategies and how they are implemented.
2) The Coordination function: the evaluation process assesses the roles of different collaborating actors, allowing partners to know what the others are doing, and how this works or links with their respective roles.
3) The Accountability function: assessing whether identified outcomes are actually being reached, and how wide the deviation is.
4) The Celebration function: celebrating the achievements of the underlying programme or policy.
5) The Legitimation function: the evaluation serves to provide persuasive and credible data to justify the underlying programme or intervention logic. This function is critical in theory-based evaluation, as it is the primary function of evaluation in that context.
The common thread connecting, and enabling the attainment of, each of the five functions is the ability of rapid evaluations to adhere to basic quality elements such as reliability, credibility, accuracy and relevance. That said, a key point of contention surrounding rapid evaluations is their ability to deliver high-quality information within a shorter period of time and in a cost-effective manner. A matter in question is whether the ideal that evaluations are meant to be a systematic determination of a subject's merit, worth or significance (Canadian Evaluation Society, 2015) can sufficiently be met by rapid evaluations. Can the balance between quick turnaround and comprehensive evaluative information actually be achieved in practice? Can evaluators provide rigorously researched answers within a limited amount of time and budget, and if so, in which situations?
The idea of reducing evaluation turnaround time and budget while ensuring high levels of rigour, by using multiple methods to collect data from multiple sources, implies very direct assumptions about capacity. First, it assumes the existence of adequate technical skills to manage such levels of triangulation. Second, it assumes the existence of reliable secondary data. Finally, it assumes good programme design with acceptable monitoring frameworks to facilitate the evaluation process.
The reality is that, in Africa, we operate in contexts where systems of evidence are not necessarily supportive of evaluations: we seldom have integrated systems with complete data, or logically designed social programmes with clear monitoring frameworks. In fact, it is almost always the case that evaluations have to include the collection of primary data, and can hardly rely on monitoring data (if it exists at all). Evaluation budgets are also usually constrained, and our continued commitment to dedicating resources to capacity development is itself proof that our institutions have glaring gaps in the technical capacity needed to take on evaluation projects.
Any approach to rapid evaluation we employ should therefore take cognisance of our reality. It should be responsive to the levels of evaluation capacity as we experience them. My position, therefore, is that an appropriate guide to rapid evaluation should be based on well-researched elements of what is needed, not on a theoretical ideal that might not fit the context.
What is your position?
By Khotso Tsotsotso
A Twende Mbele Masters Student
Evidence-informed decision-making amidst competing priorities in the public sector
Background
Basic economic theory holds that scarce resources should be optimally utilised, which in turn points to a need for evidence-informed decision-making. The availability of evidence should guide decision-makers in adopting government policies that will best meet development imperatives. Globally, there is a clarion call for using existing and new evidence to inform decision-making. In particular, attaining the Sustainable Development Goals (SDGs) now emphasises the use of evidence and data-driven action (UNDSN, 2018).
There is also ongoing debate as to which interventions strengthen the use of evidence for policy and decision-making. Based on this, several development partners have supported interventions that strengthen monitoring and evaluation (M&E) capacities, including in the use of evidence-translation tools that make research evidence available for decision-making. There is also a growing community of practice around M&E systems for evidence use, including actors such as the Africa Evidence Network and the Uganda Evaluation Association. These efforts have complemented the establishment of government-wide national monitoring and evaluation systems.
In Uganda, the integrated national monitoring and evaluation system is driven by the national M&E policy for the public sector. The M&E system comprises both a monitoring and an evaluation component. The monitoring component comprises government self-assessments against development imperatives; community information-sharing and accountability forums; and local government performance assessments. The evaluation component is managed under the Government Evaluation Facility.
The key question, however, is this: in the context of competing priorities in budget decision-making and policy design, what determines evidence use, and how is it influenced by the national evaluation system?
Factors influencing the use of evidence in decision-making in the public sector
Various factors influence the use of monitoring and evaluation evidence, including the level of institutional effectiveness, political interests, the level of engagement with relevant stakeholders, and the quality of the evidence itself.
Institutional effectiveness: human and financial resources and policy and regulatory frameworks all affect the ability of an institution to carry out its mandate effectively. Strong institutions have higher chances of yielding results. For example, where the structure allows for knowledgeable M&E experts, and where they actually exist, champions for evidence use should be available to support the process.
The policy and legal framework also plays an enabling role in evidence use; for example, the M&E policy specifies the roles of various stakeholders. It is imperative that the M&E policy is implemented. When key aspects of the policy, such as adequate financing for evaluations, the recruitment of M&E staff in Ministries, Departments and Agencies, and the operationalisation of management information systems, are not implemented, it becomes difficult for the M&E system to enable the use of M&E evidence.
Institutions that work through sector-wide approaches tend to demand and apply evidence much more than institutions that work in silos, because more than one institution contributes to the results. For example, in Uganda, case disposal (access to justice) under the justice, law and order sector (JLOS) involves contributions from the Police, the Prosecutions Office, the Judiciary, human rights agencies and legal aid networks. The scaling up of alternative dispute resolution under the sector's Backlog Reduction Strategy can be seen as a collective effort informed by evidence.
Political interest: political influence tends to take precedence in decision-making where the public sector is faced with competing priorities. Politicians (Cabinet and Parliament) continually apply pressure in the allocation of resources to improve service delivery, and are interested in knowing how the resources were used, which can be an opportunity for informing and improving decision-making. For example, where there is evidence that "school feeding significantly increases enrolment rates and reduces absenteeism in developing countries" (Maijo, 2019), and it is in politicians' interest to increase the number of children in school, decisions will be made to increase school enrolment rates.
Engagement or participation of stakeholders in the generation of evidence: buy-in is needed from stakeholders such as implementers of interventions, funders and policy-makers to ensure uptake of evidence. Recommendations from evaluations that were designed or validated with key stakeholders are highly likely to be used. Effective stakeholder engagement facilitates evidence use and uptake (3ie, 2018). In practice, however, engagements start late, are infrequent, and lack champions.
Quality of the evidence: it is important that evaluation is carried out by competent M&E experts. Credible evidence raises the quality of evaluations, which in turn influences the uptake of the evidence by decision-makers (provided they know what credible evidence looks like).
What do we need to do?
It is imperative that consistency in the use of evidence amidst competing priorities is maintained. Decision-makers want "quick evidence", or timely information, but the existing M&E structures do not answer these needs.
Governments should have robust and responsive national evaluation systems. A functional M&E system should facilitate evidence generation and use, and puts in place a platform that provides politicians with information for decision-making. The M&E agenda should be mapped to national strategic priorities, for instance national development plans, ruling-party priorities and the SDGs. The plan should be approved by top decision-makers and popularised by key stakeholders in the evidence ecosystem. In Uganda, however, this has been hampered by the inadequate implementation of the government M&E system.
Having established national priorities, it is crucial that the generators of evidence provide relevant evidence for decision-making. Decision-makers should also be informed of the evidence available for use. In addition, tools need to be available to facilitate the use of evidence, track actual use, and document the changes that result from the use of evidence. Experience from Uganda shows that high-level forums where evidence is shared are important avenues for immediate decision-making. For example, the Government Annual Performance retreats, the Presidential Investors Round Table and one-on-one meetings with Permanent Secretaries have supported the use of evidence in budgeting and the targeting of interventions in Uganda's public sector.
Indeed, evidence matters when you are faced with competing priorities, but what is important is understanding these priorities and using relevant evidence to make decisions that address development priorities.
By Kachero Benjamin
Monitoring and Evaluation Practitioner, Uganda
Pamoja Newsletter – June 2019
In March this year the African Evaluation Association (AfrEA) hosted its biennial conference, which allowed for knowledge sharing, collaboration and networking among a wide range of international organisations, including Twende Mbele. We were proud to witness the swearing-in of the new AfrEA president, Ms Rossetti Nabbumba Nayenga, who, along with the rest of her team, is tasked with growing AfrEA and keeping to the standard of knowledge sharing that has become synonymous with the organisation.
In March this year the African Evaluation Association’s (AfrEA) hosted its biannual conference, which allowed for knowledge sharing, collaboration and networking between a wide range of international organisations, including Twende Mbele. We were proud to witness the swearing in of the new AfrEA president in Ms Rossetti Nabbumba Nayenga, who along with the rest of her team, are tasked with growing AfrEA and keeping to the standard of knowledge sharing that has become synonymous with the organisation. Click here for newsletter…