Representatives from Kenya, Rwanda, Tanzania, Uganda, the East African Legislative Assembly, South Africa, Malawi and Ghana have called for improved research and evidence use in African parliaments. They were hosted by the Parliament of Uganda for a workshop on research and evidence use in the parliamentary context.
Members of the African Evaluation Association (AfrEA) have launched a Global Campaign for Gender Equality and Social Equity in Evaluation following the association’s annual event held in Abidjan, Côte d’Ivoire during March.
Participants at the event backed a new 12-point declaration, committing to strengthening evaluation culture and practice on the continent and, in particular, enhancing efforts aligned with the Sustainable Development Goals and the “leave no one behind” principle. “Any time we evaluators ignore the inequities that permeate societies, evaluation fails to achieve its purpose,” the statement said.
“We expect each evaluator and organisation representing a constituency, including AfrEA and other national and regional Voluntary Organisations for Professional Evaluation [VOPEs], to define and commit to specific actions to advance gender and equity in evaluation. We as an evaluation community are calling on all evaluators to integrate gender and equity dimensions into all evaluations, regardless of sector, scope or purpose.”
AfrEA has recommitted to supporting individual evaluators, national evaluation associations, institutions, networks and partners to debate, analyse, disseminate and make constructive use of evaluation information, products and services for the betterment of Africa and the global community. The 12-point declaration further entrenches these values.
The event, which brought together representatives of governments, parliaments, development partners, associations, networks and civil society, saw the sharing of the AfrEA vision: An Africa rooted in a culture of evaluation for equitable and sustainable development.
Twende Mbele, one of the event’s supporters, used the occasion to bring parliamentarians together for a peer learning symposium. The parliamentarians were encouraged to share ideas between and within parliaments on how to better use evidence in parliamentary spaces. Participants discussed the contextual issues that may affect M&E and evidence systems within their parliaments, as well as potential approaches to reform.
Twende Mbele has partnered with the Centre for Learning on Evaluation and Results Anglophone Africa (CLEAR-AA) since 2016 on regional peer learning programmes, which included a workshop on the use of evidence in the parliamentary context hosted by the Parliament of Uganda.
Over the years there has been a shift in the use of evaluations, from a donor-dominated space to, in recent years, increased use by both governments and parliaments. Parliaments represent the bridge between the state and its citizens, so the use of evaluations by parliamentarians to increase accountability can be seen as the next frontier, as evaluation systems bring together monitoring data and information to provide deeper insight into what works and what does not. To enhance the use of evaluations by parliamentarians, an enabling environment needs to be created through learning exchange platforms and networks such as Twende Mbele, the CLEAR-AA initiative and APNODE.
The Global Parliamentarians Forum for Evaluation (GPFE) together with EvalPartners, the Sri Lanka Parliamentarians Forum for Evaluation, Prime Minister’s Office of Sri Lanka, Parliament of Sri Lanka, and the Sri Lanka Evaluation Association, hosted EvalColombo2018, a three-day forum from 17-19 September 2018 in Colombo, Sri Lanka, to promote demand and use of evaluation by parliamentarians through dialogue and exchange, and to generate innovative approaches to tackling challenges facing Parliamentarians at a global level.
Participants at the forum included parliamentarians from across the globe committed to evaluation, evaluation experts, and other international delegates, ensuring a rich discussion on developing stronger monitoring and evaluation frameworks for evidence-based decision-making and accountability in government. Some highlights of these discussions are captured below, including insights into the use of evaluations for development goals; the benefits of a National Evaluation Policy (NEP), which is important though not sufficient on its own; the professionalisation of evaluation practice through standardising and building the capacity of evaluators; and African countries’ commitments to the Colombo Declaration.
Evaluations can help countries become masters of development outcomes
Evaluation is recognised as a crucial component of realising the Sustainable Development Goals (SDGs). Parliamentarians can address the SDGs by driving oversight processes forward to ensure that nobody is left behind, especially the most vulnerable. Solutions to the SDGs are required at country level, so partnerships between different sectors are important. As parliamentarians are among the leading catalysts in crafting national policy and exercising oversight over government, they can be the leading voices for citizens.
National Evaluation Policies are important but not essential or universally applicable
Very few countries have policies on evaluation; however, this does not mean that parliamentarians and the executive cannot engage with evaluations. National Evaluation Policies (NEPs) are neither necessary nor sufficient, but they are useful for political buy-in, especially in countries that are struggling to set up a foundation for learning and improvement. More important, however, are the systems countries put in place and the resources allocated to evaluations. Not having a NEP can sometimes set a country back in terms of improving its system, as Dr Mulu, from the Kenyan Budget Appropriations Committee, says:
“The lack of a National Evaluation Policy (NEP) leaves gaps and challenges in terms of strategic direction and buy-in from all of government, as well as adequate budget allocation and commitment.”
Evaluations must be technically sound and relevant
The need for professionalisation of evaluation practice was also highlighted at the conference. CLEAR-AA’s Collaborative Curriculum programme is one such initiative, which seeks to harmonise M&E competencies and curricula in Africa. The creation of a common language for evaluation can enhance understanding and use.
Capacity development of both evaluators and commissioners is important to enhance the credibility of evaluations within parliaments, so evaluators will need to work hard at building professional skills and ensuring that evaluations follow ethical practices. Having more evaluators who produce useful, ethical and credible evaluations will address both the supply and demand sides of evaluation.
African countries’ commitments to the Colombo Declaration
Delegates representing various African countries made commitments to the Colombo Declaration at the conference. The general commitment by these delegates was to promote and increase APNODE membership, as APNODE was seen as a key instrument for learning and advocacy. The following points highlight the commitments by some countries:
- Kenya committed to enacting a law on M&E and to involving more MPs in National Evaluation Week, which took place in November 2018;
- Uganda committed to creating a parliamentary caucus linked to the national evaluation association and APNODE, and to raising awareness among MPs of holding government accountable for using more evidence;
- Tanzania committed to taking the NEP agenda to Parliament, while UN Women will support women’s caucuses to use evaluation;
- Nigeria made a commitment to sensitise members of parliament and build their capacities for M&E;
- Zambia made a commitment to evaluate indigenous communities as a follow-up to the commitments made in parliament in previous sittings;
- South Africa, Ghana and Zimbabwe made commitments to promote the use of evaluations by their parliaments through their Speakers of Parliament.
The closing ceremony was held at the Parliament of Sri Lanka and included a panel discussion and inputs by various delegates, referred to as “the voices of global parliamentarians”, followed by a vote of thanks to the organisers and participants. The conference achieved its objectives: raising awareness of the role of parliaments in driving the SDGs agenda; reaffirming the importance of using evidence as part of good governance; promoting dialogue between parliamentarians, government, evaluation practitioners and civil society to encourage the joint use of evaluations for decision-making; and, finally, agreeing on a way forward, compiled in the Colombo Declaration, which included commitments from many countries, including those from Africa.
In a recent panel discussion at the #Evidence2018 conference in Pretoria, South Africa, panellists from Benin, Uganda and South Africa discussed how government institutions make use of evidence for better-informed policy-making. The panel discussion, titled “Cross-Governmental Panel Discussion: Sharing Institutional Insights into Evidence-Informed Policy-Making Approaches in Africa”, also delved deeper into the governmental landscapes unique to these countries.
Decision-making happens daily in public sector programmes and policy-making in these countries. All three countries use systematic mechanisms to take the evidence generated to the policy-makers who need it, repairing the disconnect between the people who need evidence and those who produce it. As a result, government departments in each country have made significant efforts to use evaluation results to improve public services.
Data is collected through the various processes of researching, assessing, analysing and enquiring. Through these processes all three countries are able to compile evidence which is then used to know what needs to be done differently to increase impact, and to identify what works and what doesn’t work. Below are summaries of the presentations.
Benin has two levels of government: 22 ministries at the national level and 77 municipalities at the local or district level. In 2007, the Bureau de l’Évaluation des Politiques Publiques et de l’Analyse de l’Action Gouvernementale (BEPPAAG) was established with two mandates: (1) refine and implement the National Evaluation Policy; and (2) monitor the performance of departments and municipalities to improve service delivery. Located in the Presidency, this office has commissioned 24 national public policy evaluations in various sectors, including health, finance, agriculture, education, energy and water.
In order to analyse the processes for using the results and recommendations of nine public policy evaluations conducted between 2010 and 2013, BEPPAAG undertook a study on the use of the evaluation results. The general objective of this study was to ascertain the steps taken by line ministries to implement evaluation recommendations and ensure the efficiency of public services. Significant efforts have been made since 2010 by the departments to use evaluation results in improving public services. The study showed that from 2010 to 2013, eighty (80) recommendations were made from evaluations, of which seventy (70) formed part of the concerned ministries’ action plans. The ownership of the recommendations was as follows:
- 39 (56%) were fully implemented at the time of the monitoring mission.
- 31 (44%) were partially implemented at the time of the monitoring mission. Some are planned for the short and medium term.
- As for the 20 recommendations that were not implemented, the reasons given were a lack of financial resources or of an institutional framework.
The study also notes that 40% of the recommendations led to the revision or formulation of new public policies. These policies relate to technical education and vocational training, agriculture and handicrafts. In the energy sector, for example, they led to the development of a rural electrification policy in 2016. The study further notes that 10% of the recommendations led to new institutional frameworks at two line ministries, and another 8% led to the formulation of new projects and programmes in the agricultural, water, and technical education and vocational training sectors.
The main lesson learned from this study relates to the quality of evaluation, which is critical to the use of evaluation results and can be measured by the relevance of the evaluations’ recommendations. The study also revealed that some evaluation recommendations are very broad and require multiple steps or reforms, which makes them harder to implement.
In South Africa, the Department of Planning, Monitoring and Evaluation (DPME) in the Presidency has put in place a framework for looking at evidence and knowledge systems. Data is collected through processes of researching, analysing and enquiring. Through visualising and analysing data, the evidence is compiled, and what emerges is knowledge on what needs to be done differently to increase programmatic impact. Programme managers and policy-makers can then use this information in planning, implementation and budgeting or resource allocation. Decisions are taken regularly by those who implement policies on the ground, and that shapes what is actually implemented. When policy-makers fail to raise awareness of, or share their views on, the evidence that informed a policy, the implementers often do what works for them, and that is still the norm today.
There are regulations in the public administration that are not sector-specific but determine much of the behaviour of public institutions, such as complying with auditors, National Treasury and court orders. The DPME therefore took a serious look at the incentives that shape decisions within the public sector and worked within them: it collaborated extensively with public institutions and implementers to enhance the use of evidence, while still trying to find some compliance measures.
Government plays a very important role in shaping policy, but it is not the only actor making policy. It is often assumed that government is the place where policy is made and that it is relatively homogeneous. But policy is also contested within government, where different public institutions raise questions about which outcomes matter most, the quality of the evidence generated, and which priorities should take political preference.
Researchers can sometimes crowd out other forms of knowledge when politics and cultural thinking are ignored, creating a monopoly over ways of knowing and of communicating that knowledge. This affects whose voice is heard in evidence-informed decision- and policy-making (EIDM/EIPM), and raises the question of whether spaces can be created for equal sharing and learning, where power is distributed more equally and where different views are heard and appreciated as much as the voices of those who conduct evaluations and publish the evidence.
In Uganda, decision-making happens daily in public sector programmes and in the implementation of government projects. A lot of evidence goes into the long process of policy-making, and Uganda’s process is highly consultative. But the question of how evidence is actually used to inform policies remains a point for discussion.
Uganda instituted a number of reforms to establish a robust National Monitoring and Evaluation System (NMES) to improve efficiency and deliver positive development results. In this process it was discovered that there was little use of evidence in Uganda’s policy process. One noted example was that the agricultural sector’s annual plan remained unchanged from 2007 to 2009. Systematic mechanisms are needed to take the evidence generated to those who need it, repairing the disconnect between the people who need evidence and those who produce it.
Between 1997 and 2007, Uganda was preoccupied with poverty reduction projects. At the time, there were no tools to assess whether the projects were successful; in other words, there was no monitoring of projects. Another norm of the time was a focus on stabilising the economy and putting measures in place for economic growth. After 10 years of stability, Uganda developed a five-year national development plan. This plan allocates a significant amount of money to each sector, and with that, many questions are being asked, such as:
- Is the government implementing the right policies?
- Are they doing the right activities?
- Are the right projects in place to lead the country to where it wants to go?
To answer those questions, what was required was better evidence, better evaluation systems and monitoring the performance of government projects. Due to the nationwide implementation of reform and the ensuing demand for evidence to show that government projects were delivering results, NMES was born.
The Ugandan NMES detailed robust assessments of the performance of selected programmes in the previous year, to ensure that the following year’s programme plans were informed by that performance. All sectors were also required to demonstrate strong use of evidence on what was needed to achieve the outcomes they had set out to achieve.
Of note is that most Ugandan research is done by local universities, with little engagement between government and researchers on priority research areas or agenda setting. This translates into low exposure of policy-makers to new evidence. To combat this, the national research institute was formed; it has been particularly successful in the agricultural sector, where evidence on disease-resistant crops was widely accepted by farmers.
Going forward, Uganda will need to work out how best to advance the culture of evidence use in the relevant departments, so as to avoid a situation where only auditors or evaluation institutes use evidence.
A number of evaluation recommendations in government are not fully implemented due to a host of constraints, and thus opportunities for learning and improvement are lost. These constraints relate to time (delays in completing evaluations), finances (evaluations are costly) and human capacity (a general lack of experienced evaluators in the country). Additionally, beyond challenges in programme and policy implementation and monitoring, country governments continuously face emergencies requiring timeous and informed intervention strategies.
To this end, Twende Mbele and the Department of Planning, Monitoring and Evaluation (DPME) in South Africa are looking for a consultant or consultants to work with DPME to explore current practice in rapid evaluation tools for the public sector.
The following are expected outcomes from the project:
- A desktop comparative analysis which explores existing rapid evaluation tools across sectors and compares them, to enable a judgement on which would be most amenable to the South African Public Service.
- Two rapid evaluation tools, including relevant templates and guidelines for the application of the rapid evaluation approach for piloting by DPME.
Please read the full terms of reference here.
Applications close on Friday, 7 December 2018.