In a recent panel discussion at the #Evidence2018 conference in Pretoria, South Africa, panellists from Benin, Uganda and South Africa discussed how government institutions make use of evidence for better-informed policy-making. The panel, titled "Cross-Governmental Panel Discussion: Sharing Institutional Insights into Evidence-Informed Policy-Making Approaches in Africa", also delved deeper into the governmental landscapes unique to these countries.
Decision-making happens daily in public sector programmes and policy-making in these countries. All three countries use systematic mechanisms to bring the evidence generated to the policy-makers who need it, repairing the disconnect between the people who need evidence and those who produce it. As a result, government departments in each country have made significant efforts to use evaluation results to improve public services.
Data is collected through various processes of research, assessment, analysis and enquiry. Through these processes, all three countries are able to compile evidence that is then used to identify what works, what doesn't, and what needs to be done differently to increase impact. Below are summaries of the presentations.
Benin
Benin has two levels of government: 22 ministries at the national level and 77 municipalities at the local or district level. In 2007, the Bureau de l'Évaluation des Politiques Publiques et de l'Analyse de l'Action Gouvernementale (BEPPAAG) was established with two mandates: (1) refine and implement the National Evaluation Policy; and (2) monitor the performance of departments and municipalities to improve service delivery. Located in the Presidency, this office has commissioned 24 national public policy evaluations in various sectors including health, finance, agriculture, education, energy and water.
In order to analyse how the results and recommendations of nine public policy evaluations conducted between 2010 and 2013 were used, BEPPAAG undertook a study on the use of the evaluation results. The general objective of this study was to ascertain the steps taken by line ministries to implement evaluation recommendations and so ensure the efficiency of public services. Significant efforts have been made since 2010 by the departments to use evaluation results in the improvement of public services. The study showed that between 2010 and 2013, eighty (80) recommendations were made from evaluations, of which seventy (70) formed part of the concerned ministries' action plans. The ownership of the recommendations was as follows:
- 39 (56%) were fully implemented at the time of the monitoring mission.
- 31 (44%) were partially implemented at the time of the monitoring mission; some are planned for the short and medium term.
- As for the 20 recommendations that were not implemented, the reasons given were a lack of financial resources or of an institutional framework.
The study also notes that 40% of the recommendations led to the revision or formulation of new public policies. These policies relate to technical education and vocational training, agriculture and handicrafts. In the energy sector, for example, this led to the development of the 2016 rural electrification policy. The study further notes that 10% of the recommendations led to new institutional frameworks at two line ministries, and another 8% led to the formulation of new projects and programmes in the agricultural, water, and technical education and vocational training sectors.
The main lesson learned from this study is that the quality of an evaluation, measured in terms of the relevance of its recommendations, is critical to the use of its results. The study also revealed that some evaluation recommendations are very wide-ranging and require multiple steps or reforms, which makes them harder to implement.
South Africa
The Department of Planning, Monitoring and Evaluation (DPME) in the Presidency has put in place a framework for looking at evidence and knowledge systems. Data is collected through processes of research, analysis and enquiry. Through visualising and analysing data, evidence is compiled and what emerges is knowledge about what needs to be done differently to increase programmatic impact. Programme managers and policy-makers can then use this information in planning, implementation and budgeting/resource allocation. Decisions are taken regularly by those who implement policies on the ground, and that shapes what is actually implemented. If policy-makers fail to raise awareness of, or share their views on, the evidence that informed a policy, the implementers often do what works for them, and that is still the norm today.
There are regulations in the public administration that are not sector-specific but determine much of the behaviour of public institutions. These include complying with auditors, National Treasury requirements, court orders and so on. The DPME therefore took a serious look at the incentives that shape decisions within the public sector and worked within them: it collaborated extensively with public institutions, the implementers of evidence, to enhance the use of evidence, while still trying to find appropriate compliance measures.
Government plays a very important role in shaping policy, but it is not the only actor making policy. It is often assumed that government is the place where policy is made and that it is relatively homogeneous. But policy is also contested within government, where different public institutions raise questions about which outcomes matter most, the quality of the evidence generated, and which evidence should take political preference.
Researchers can sometimes crowd out other forms of knowledge when politics and cultural thinking are ignored, thereby creating a monopoly over ways of knowing and of communicating that knowledge. This affects whose voice is heard in EIDM/EIPM, and it raises the question of whether spaces can be created for equal sharing and learning, where power is distributed more evenly and where different views are heard and appreciated as much as the voices of those who conduct evaluations and publish the evidence.
Uganda
In Uganda, decision-making happens daily in public sector programmes and in the implementation of government projects. A lot of evidence goes into the long processes of policy-making, and Uganda's process is very consultative. But the question of how evidence is actually used to inform policies remains a point of discussion.
Uganda instituted a number of reforms to establish a robust National Monitoring and Evaluation System (NMES) to improve efficiency and deliver positive development results. In this process it was discovered that there was little use of evidence in Uganda's policy process. A noted example was that the agricultural sector's annual plan remained unchanged from 2007 to 2009. Systematic mechanisms were needed to take the evidence generated to those who needed it, repairing the disconnect between the people who need evidence and those who produce it.
Between 1997 and 2007, Uganda was preoccupied with poverty reduction projects. At the time, there were no tools to assess whether projects were successful; in other words, there was no monitoring of projects. Another norm of the time was a focus on stabilising the economy and putting in place measures for economic growth. After 10 years of stability, the country developed a five-year national development plan. This plan allocates a significant amount of money to each sector, and with that, many questions are being asked, such as:
- Is the government implementing the right policies?
- Are they doing the right activities?
- Are the right projects in place to lead the country to where it wants to go?
Answering those questions required better evidence, better evaluation systems and monitoring of the performance of government projects. Out of the nationwide implementation of reforms and the ensuing demand for evidence that government projects were delivering results, the NMES was born.
The Ugandan NMES detailed robust assessments of selected programmes' performance in the previous year, to ensure that the following year's programme plans were informed by that performance. All sectors were also required to demonstrate strong use of evidence about what was needed to achieve the outcomes they had set.
Of note is that most Ugandan research is done by local universities, with little engagement between government and researchers on priority research areas or agenda setting. This translates into low exposure of policy-makers to new evidence. To combat this, the national research institute was formed; it has been particularly successful in the agricultural sector, where evidence on disease-resistant crops was widely accepted by farmers.
Going forward, Uganda will need to work out how best to advance a culture of evidence use in the relevant departments, so as to avoid a situation where only auditors or evaluation institutes are the ones using evidence.