In 2017 the Government of Ghana added a new tool to its development process by establishing the Ministry of Planning (MoP) and the Ministry of Monitoring and Evaluation (MoME). The MoME, the National Development Planning Commission (NDPC) and the local M&E association – the Ghana National Monitoring and Evaluation Forum (GMEF) – have developed a National Monitoring and Evaluation Policy. Once cabinet approves the Policy, it will be an important instrument in fostering a culture and practice of evidence-based decision-making in the public sector. The institutionalisation of M&E calls for better planning, use of resources, implementation and learning from outcomes about what worked and what did not.
With the establishment of the MoME and the recent launch of GIMPA's (Ghana Institute of Management and Public Administration) Master's programme for evaluators, a better coordinated ecosystem for evaluation appears to be forming. However, the government system still faces many challenges, such as:
- weak M&E capacities,
- low demand for, and utilisation of, M&E results,
- limited resources and budgetary allocations for M&E,
- non-compliance with M&E reporting timelines and formats by MDAs (Ministries, Departments and Agencies) and MMDAs (Metropolitan, Municipal and District Assemblies),
- inconsistent data quality,
- data gaps, and
- limited management information systems.
Given that building evaluative thinking and systems is a relatively new endeavour, research into the current M&E culture in the public sector would prove useful. It is for this reason that the Ghanaian government partnered with Twende Mbele to undertake a study to establish a baseline of the M&E culture in the public sector, which can be used to measure changes in practices and attitudes over the coming years. The study involved interviewing 43 senior management officials from 14 ministries and two agencies using a structured survey. The results below are based on the Report on the Ghana M&E Culture Baseline Survey.
All (100%) respondents to the survey indicated that evaluation reports are not structured to hide results. Furthermore, most (95%) reported that evaluation results showing poor performance are not ignored, and 92.5% that senior management does not reject evaluation reports with findings of poor performance.
The MoME is currently working on frameworks and policies to standardise and improve use of M&E across government. This includes the development of a mechanism to obtain information to understand the cause of poor performance. The above suggests a positive management attitude and environment within which to practice M&E in the public service.
According to the survey results, recommendations are implemented and learning outcomes are documented and used to improve future results – 92.5% of respondents stated this is always the case. The implementation of performance management and performance contracts with managers of the ministries has yielded some results, as performance measurement and results are now a basic requirement of senior-level managers, ministers and cabinet for decision-making. There was, however, a perception that officers responsible for poor performance are not sanctioned (32 of 38 respondents stated never), which needs to be addressed in order to motivate staff. The report therefore recommended that a rigorous sanctions and rewards regime be implemented to regulate staff performance in the Ghanaian public service.
An enduring M&E system cannot be developed overnight. However, the Ghanaian government has proven that it is committed to pursuing the desired change, building on the work of existing institutions including the NDPC, MoME and GIMPA. It has been a steady process, and one of the building blocks required was behaviour change. There is also a need for a paradigm shift towards incorporating M&E into the routine work of public service staff, and towards deepening the M&E processes by which public service activities are conducted, in order to build a stronger, learning-focused M&E culture in Ghana.
By: Rendani Manugu
Country Coordinator, Twende Mbele, South Africa
There is no doubt that evaluations can play an integral role in decision-making. On the one hand, when the information is accurate, detailed and insightful, evaluations can help make effective programme decisions, such as whether to continue, scale up, discontinue or even whether to intervene at all. On the other hand, it is well understood that producing reliable and insightful evaluations requires significant investment.
Besides the financial burden associated with collecting programme data, evaluations can require high levels of technical skill and long periods of time. However, there are situations where decisions have to be made under time constraints and with limited access to technical skills. Indeed, it has been argued that high pressure to deliver development imperatives, limited technical skills, constrained access to financial resources and weak information systems are common features across African public institutions. As such, producing good evaluation information timeously can be a challenge.
To counter expensive and complicated studies with long turn-around times, rapid evaluations have been strongly promoted as a more practical alternative. Rapid evaluations are meant to strike a balance between maintaining high levels of investigative rigour and containing the high cost and technical requirements of evaluation.
“The elephant in the room”, though, is the reminder that evaluations are a means to a greater end – that an evaluation report itself IS NOT the end. In other words, the real judgement of rapid evaluations is whether they truly accommodate the key functions of a good evaluation. You see, praxis teaches us that the value of an evaluation must be demonstrated by its role across the programme or policy cycle. And if we are going to rely on rapid evaluations to be in any way helpful in decision-making – I argue – it is only fair to assess their worth against the same functions. To facilitate my argument, I present the following five key evaluation functions:
1) The Improvement function, referring to enhancement of the efficiency and effectiveness of the chosen programme strategies and how they are implemented.
2) The Coordination function, meaning that the evaluation process assesses the roles of different collaborating actors, allowing partners to know what the others are doing, and how this works or links with their respective roles.
3) The Accountability function, assessing whether identified outcomes are actually being reached, and how wide the deviation is.
4) The Celebration function, celebrating the achievements of the underlying programme or policy.
5) The Legitimation function, which refers to the idea that an evaluation serves to provide persuasive and credible data to justify the underlying programme or intervention logic. This function is critical in theory-based evaluation, where it is the primary function of evaluation.
The common thread connecting and enabling the attainment of each of the five functions is the ability of rapid evaluations to adhere to basic quality elements such as reliability, credibility, accuracy and relevance. With that said, a key point of contention surrounding rapid evaluations is their ability to deliver exceptional levels of quality information within a shorter period of time and in a cost-effective manner. The matter in question is whether the ideal that evaluations are meant to be a systematic determination of a subject's merit, worth or significance (Canadian Evaluation Society, 2015) can sufficiently be met by rapid evaluations. Can the balance between a quick turn-around and comprehensive evaluative information actually be achieved in practice? Can evaluators provide rigorously researched answers within a limited amount of time and budget, and if so, in which situations can this be done?
The idea of reducing evaluation “turn-around time” and budget, while at the same time ensuring high levels of rigour by using multiple methods to collect data from multiple sources, implies very direct assumptions about capacity. First, it assumes the existence of adequate technical skills to manage such levels of triangulation. Second, it assumes the existence of reliable secondary data. And finally, it assumes good programme design with acceptable monitoring frameworks to facilitate the evaluation process.
It is a reality that, in Africa, we operate in contexts where systems of evidence are not necessarily supportive of evaluations; that is, we seldom have integrated systems with complete data, or logically designed social programmes with clear monitoring frameworks. In fact, it is almost always the case that evaluations have to include the collection of primary data, and can hardly rely on monitoring data (if it exists at all). While evaluation budgets are also usually constrained, our commitment to dedicating resources to capacity development is proof that our institutions have glaring gaps in the technical capacity needed to take on evaluation projects.
Any approach to rapid evaluation we are to employ should therefore take cognisance of our reality. It should be responsive to the levels of evaluation capacity as we experience them. My position, therefore, is that an appropriate guide to rapid evaluation should be one based on well-researched elements of what is needed, not one based on a theoretical idea that might not fit the context.
What is your position?
By Khotso Tsotsotso
Twende Mbele Master's Student
Basic economic theory holds that scarce resources should be optimally utilised, which in turn points to a need for evidence-informed decision-making. The availability of evidence should guide decision-makers in adopting government policies that will best meet development imperatives. Globally, there is a clarion call for using existing and new evidence to inform decision-making. In particular, attaining the Sustainable Development Goals (SDGs) now emphasises the use of evidence and data-driven action (UNDSN, 2018).
There is also ongoing debate as to which interventions strengthen the use of evidence for policy and decision-making. On this basis, several development partners have supported interventions that strengthen monitoring and evaluation (M&E) capacities in the use of evidence-translation tools that make research evidence available for decision-making. There is also a growing community of practice around M&E systems for evidence use, including actors such as the Africa Evidence Network and the Uganda Evaluation Association. These efforts have complemented the establishment of government-wide national monitoring and evaluation systems.
In Uganda, the integrated national monitoring and evaluation system is driven by the national M&E policy for the public sector. The M&E system comprises both a monitoring and an evaluation component. The monitoring component comprises government's self-assessment of development imperatives; community information-sharing and accountability forums; and local government performance assessments. The evaluation component is managed under the Government Evaluation Facility.
However, the key question is: in the context of competing priorities in budget decision-making and policy design, what determines evidence use, and how is it influenced by the national evaluation system?
Factors influencing the use of evidence in decision-making in the public sector
Various factors influence the use of monitoring and evaluation evidence, including institutional effectiveness, political interests, the level of engagement with relevant stakeholders, and the quality of the evidence.
Institutional effectiveness: human and financial resources, and policy and regulatory frameworks, all affect the ability of an institution to carry out its mandate effectively. Strong institutions have higher chances of yielding results. For example, where the structure provides for knowledgeable M&E experts, and where they actually exist, champions for evidence use will be available to support the process.
The policy and legal framework also plays an enabling role in evidence use; for example, the M&E policy specifies the roles of various stakeholders. It is imperative that the M&E policy is implemented. When key aspects of the policy are not implemented – such as adequate financing for evaluations, the recruitment of M&E staff in Ministries, Departments and Agencies, and the operationalisation of management information systems – it becomes difficult for the M&E system to enable the use of M&E evidence.
Institutions that work through sector-wide approaches tend to demand and apply evidence much more than institutions that work in silos, because more than one institution contributes to the results. For example, in Uganda, case disposal (access to justice) under the justice, law and order sector (JLOS) consists of contributions from the Police, the Prosecutions Office, the Judiciary, human rights agencies and legal aid networks. The scaling-up of alternative dispute resolution under the sector's Backlog Reduction Strategy can be seen as a collective effort informed by evidence.
Political interest: Political influence tends to take precedence in decision-making where the public sector is faced with competing priorities. Politicians (Cabinet and Parliament) continually place pressure on the allocation of resources to improve service delivery, and are interested in knowing how those resources were used, which can be an opportunity for informing and improving decision-making. For example, where there is evidence that “school feeding significantly increases enrolment rates and reduces absenteeism in developing countries” (Maijo, 2019), and it is in the interest of politicians to increase the number of children in school, decisions will be made to expand school feeding in order to raise enrolment rates.
Engagement or participation of stakeholders in the generation of evidence: Buy-in is needed from stakeholders such as implementers of interventions, funders and policy-makers to ensure the uptake of evidence. Recommendations from evaluations that were designed or validated with key stakeholders are highly likely to be used. Effective stakeholder engagement facilitates evidence use and uptake (3ie, 2018). In practice, however, engagements start late, are infrequent and lack champions.
Quality of the evidence: It is important that evaluations are conducted by competent M&E experts. Credible evidence positively influences the quality of evaluations, which in turn influences the uptake of the evidence by decision-makers (provided they know what credible evidence looks like).
What do we need to do?
It is imperative that consistency in the use of evidence is maintained amidst competing priorities. Decision-makers want “quick evidence”, or timely information, but the existing M&E structures do not meet these needs.
Governments should have robust and responsive national evaluation systems. A functional M&E system should facilitate both the generation and the use of evidence, and provide a platform that gives politicians information for decision-making. The M&E agenda should be mapped to national strategic priorities, for instance national development plans, ruling party priorities and the SDGs. The plan should be approved by top decision-makers and popularised by key stakeholders in the evidence ecosystem. In Uganda, however, this has been hampered by the inadequate implementation of the government M&E system.
Having established national priorities, it is crucial that the generators of evidence provide relevant evidence for decision-making. Decision-makers should also be informed of the evidence available for use. In addition, tools need to be available to facilitate the use of evidence, track actual use, and document the changes that result from it. Experience from Uganda shows that high-level forums where evidence is shared are important avenues for immediate decision-making. For example, the Government Annual Performance retreats, the Presidential Investors Round Table and one-on-one meetings with Permanent Secretaries have supported the use of evidence in budgeting and the targeting of interventions in Uganda's public sector.
Indeed, evidence matters when you are faced with competing priorities, but what is important is understanding these priorities and using relevant evidence to make decisions that address development priorities.
By Kachero Benjamin
Monitoring and Evaluation Practitioner, Uganda
In March this year the African Evaluation Association (AfrEA) hosted its biennial conference, which allowed for knowledge sharing, collaboration and networking among a wide range of international organisations, including Twende Mbele. We were proud to witness the swearing in of Ms Rossetti Nabbumba Nayenga as the new AfrEA president; she and her team are tasked with growing AfrEA and maintaining the standard of knowledge sharing that has become synonymous with the organisation.
Forming part of a side event at the African Evaluation Association (AfrEA) conference 2019, the Centre for Learning on Evaluation and Results Anglophone Africa (CLEAR-AA), and Twende Mbele hosted a peer-learning Symposium on ‘Strengthening Systems of Evidence in Parliaments’.
The symposium reflected on a number of key issues affecting evidence use within parliaments. The discussions centred on what participants had learnt during the past 12 months and what should be taken forward to strengthen monitoring and evaluation (M&E) systems and capacity for evidence use in parliaments. It was based on discussions from a broader peer-learning programme undertaken in 2018.
This broader peer-learning programme facilitated peer learning between parliamentarians and parliamentary support staff from various parliaments in the African region. It was delivered collaboratively by the African Centre for Parliamentary Affairs (ACEPA), the African Institute for Development Policy (AFIDEP), the African Parliamentarians' Network on Development Evaluation (APNODE), UN Women and the two symposium hosts, CLEAR-AA and Twende Mbele.
Taking a ‘world café’ approach, the symposium traversed six key issues affecting evidence use within parliaments: oversight; legislation; representation; communities of practice and peer learning; institutional components; and incentives. Also discussed were the strengths, opportunities and challenges of evidence use, and what should be taken forward in the efforts to strengthen evidence use in parliaments.
This blog is the first of three addressing the topics discussed above. This first blog will focus on the institutional components and incentives for evidence use.
Institutional components and evidence use
Participants noted that there are a variety of institutional components where M&E functions are located, depending on the type of parliamentary system. The research department, library, civic education unit and audit committees are just a few of the institutional structures within parliaments that support parliamentarians in discussions and debates. Linkages between these institutional components and parliamentarians' sources of evidence are necessary to strengthen evidence use in parliaments.
The committee system is a key institutional component in most parliaments as it scrutinises legislation and monitors the work of the executive. Committees are considered a good starting point for strengthening evidence systems, particularly if evidence use is integrated into the strategic plans of the committees.
According to participants, determining which unit within parliament will drive the agenda of strengthening the evidence system is paramount. This unit should consider the role of the capacity-building/training unit within parliament in the broader system-strengthening process.
The M&E function within some African parliaments plays a strategic role in supporting and strengthening evidence use. The significance of this institutional component is that it should ideally play a central and dual function: M&E by parliaments (oversight of government performance) and M&E of parliaments (internally focused M&E of parliament's own performance). Its positioning within the institutional structure needs to be carefully considered, since the M&E function will take on a different character depending on where in parliament it sits. For example, if it sits in an auditing unit it may become a compliance tool, while if it sits in a strategy unit it may take on a broader, overarching nature. While the decision on where to situate the M&E function should be based on the needs and context of each individual parliament, participants at the symposium all agreed that placing the M&E function in a strategic unit (i.e. not auditing) will have a more positive outcome for parliaments.
Another important consideration is the resourcing of M&E units within parliaments, as these require specialised staff and budgets but are often under-resourced. One symposium participant suggested ways to strengthen the M&E function, proposing that it should be guided by strategic plans and better aligned with other institutional structures within parliament. Further exploration is needed of each parliament's unique structure.
Incentives for evidence use
This issue received as much attention at the symposium as it did during the 2018 peer-learning workshops, and it was particularly sensitive to address. Understanding the politics of evidence use is key to understanding its systemic drivers, e.g. political power and patronage. The ideal scenario is one where the incentives for evidence use are focused on achieving outcomes for the national good. Ultimately, the incentive for evidence use in parliament should assist parliamentarians to fulfil their mandates.
The key role of a parliamentarian is to contribute towards better development outcomes and to hold the executive accountable for its work. Parliamentarians need to make sure the right evidence is used for the right arguments to be advanced, to ensure better service delivery by government. In other words, evidence can be used to identify the problems that hinder service delivery and to come up with solutions to address them. Parliamentarians can draw on a range of evidence sources – government, civil society, international organisations and even the media – to gain a better understanding and to ask more evaluative questions.
Conclusion and ideas on a way forward
Passionate discussions were followed by ideas on the way forward, as described by participants:
- To mobilise resources and budgets for capacity building and infrastructure which will strengthen institutional components supporting evidence use.
- To follow up with further technical support and learning exchanges for the development and strengthening of M&E frameworks for parliaments, and for understanding how the location of the M&E function affects its objectives.
- To pilot the oversight app – a mobile tool designed to enhance oversight research processes.