The training course on evidence-based policy formulation and implementation, held in Benin last December, attracted the most senior government officials. It was the first time we adapted the course to a francophone environment, and we were delighted by the warm reception from senior government officials in Benin and Niger.
Over the years there has been a shift in the use of evaluations, from a donor-dominated space to, in recent years, increased use by both governments and parliaments. Parliaments represent the bridge between the state and its citizens, so the use of evaluations by parliamentarians to increase accountability can be seen as the next frontier, as evaluation systems bring together monitoring data and information to provide deeper insight into what works and what does not. To enhance the use of evaluations by parliamentarians, an enabling environment needed to be created through learning-exchange platforms and networks such as Twende Mbele, the CLEAR-AA initiative and APNODE.
The Global Parliamentarians Forum for Evaluation (GPFE) together with EvalPartners, the Sri Lanka Parliamentarians Forum for Evaluation, Prime Minister’s Office of Sri Lanka, Parliament of Sri Lanka, and the Sri Lanka Evaluation Association, hosted EvalColombo2018, a three-day forum from 17-19 September 2018 in Colombo, Sri Lanka, to promote demand and use of evaluation by parliamentarians through dialogue and exchange, and to generate innovative approaches to tackling challenges facing Parliamentarians at a global level.
Participants at the forum included parliamentarians from across the globe committed to evaluation, evaluation experts, and other international delegates, ensuring a rich discussion on developing stronger monitoring and evaluation frameworks for evidence-based decision-making and accountability in government. Some highlights of these discussions are captured below, including insights into the use of evaluations for development goals; the benefits of a National Evaluation Policy (NEP), which is important though not sufficient; the professionalisation of evaluation practice through standardising and building the capacity of evaluators; and African countries' commitments to the Colombo Declaration.
Evaluations can help countries become masters of development outcomes
Evaluation is recognised as a crucial component of realising the Sustainable Development Goals (SDGs). Parliamentarians can address the SDGs by driving oversight processes forward to ensure that nobody is left behind, especially the most vulnerable. Solutions to the SDGs are required at country level, so partnerships between different sectors are important. As parliamentarians are among the leading catalysts in crafting national policy and exercising oversight over government, they can be the leading voices for citizens.
National Evaluation Policies are important but not essential or universally applicable
Very few countries have policies on evaluation; however, this does not mean that parliamentarians and the executive cannot engage with evaluations. National Evaluation Policies (NEPs) are neither necessary nor sufficient, but they are useful for political buy-in, especially in countries that are struggling to set up a foundation for learning and improvement. More important, however, are the systems countries put in place and the resources allocated to evaluations. Not having a NEP can sometimes create setbacks for a country in terms of improving a system, as Dr Mulu, from the Kenyan Budget Appropriations Committee, says:
“The lack of a National Evaluation Policy (NEP) leaves gaps and challenges in terms of strategic direction and buy-in from all of government, as well as adequate budget allocation and commitment.”
Evaluations must be technically sound and relevant
The need for professionalisation of evaluation practice was also highlighted at the conference. CLEAR-AA's Collaborative Curriculum programme is one such initiative, seeking to harmonise M&E competencies and curricula in Africa. The creation of a common language for evaluation can enhance understanding and use.
Capacity development of both evaluators and commissioners is important to enhance the credibility of evaluations within parliaments, so evaluators will need to work hard at building professional skills and ensuring that evaluations follow ethical practices. Having more evaluators who produce useful, ethical evaluations that are credible will address both supply and demand side of evaluations.
African countries' commitments to the Colombo Declaration
Delegates representing various African countries made commitments to the Colombo Declaration at the conference. The common thread among these delegates was a commitment to promote and increase APNODE membership, as APNODE was seen as a key instrument for learning and advocacy. The following points highlight the commitments by some countries:
- Kenya committed to enacting a law on M&E and to involving more MPs in the National Evaluation Week, which took place in November 2018;
- Uganda committed to creating a parliamentary caucus linked to the national evaluation association and APNODE, and to raising awareness among MPs about holding government accountable for using more evidence;
- Tanzania committed to taking the NEP agenda to Parliament, while UNWomen will support women caucuses to use evaluation;
- Nigeria made a commitment to sensitise members of parliament and build their capacities for M&E;
- Zambia made a commitment to evaluate indigenous communities as a follow-up to the commitments made in parliament in previous sittings;
- South Africa, Ghana and Zimbabwe made commitments to promote the use of evaluations by their parliaments through their Speakers of Parliament.
The closing ceremony was held at the Parliament of Sri Lanka and included a panel discussion and inputs from various delegates, referred to as “the voices of global parliamentarians”, followed by a vote of thanks to the organisers and participants. The conference achieved its objectives: raising awareness of the role of parliaments in driving the SDGs agenda; reaffirming the importance of using evidence as part of good governance; promoting dialogue between parliamentarians, government, evaluation practitioners and civil society to encourage their joint use of evaluations for decision-making; and, last but not least, agreeing on a way forward, compiled in the Colombo Declaration, which included commitments from many countries, including those from Africa.
In a recent panel discussion at the #Evidence2018 conference in Pretoria, South Africa, panellists from Benin, Uganda and South Africa discussed how government institutions make use of evidence for better-informed policy-making. The panel discussion, titled “Cross-Governmental Panel Discussion: Sharing Institutional Insights into Evidence Informed Policy-Making Approaches in Africa”, also delved deeper into the governmental landscapes unique to these countries.
Decision-making happens daily in public sector programmes and policy-making in these countries. All three countries use systematic mechanisms to take the evidence generated to the policy-makers who need it, repairing the disconnect between the people who need evidence and those who produce it. As a result, country departments have made significant efforts to use evaluation results to improve public services.
Data is collected through the various processes of researching, assessing, analysing and enquiring. Through these processes all three countries are able to compile evidence which is then used to know what needs to be done differently to increase impact, and to identify what works and what doesn’t work. Below are summaries of the presentations.
Benin has two levels of government: 22 ministries at the national level and 77 municipalities at the local or district level. In 2007, the Bureau de l'Évaluation des Politiques Publiques et de l'Analyse de l'Action Gouvernementale (BEPPAAG) was established with two mandates: (1) to refine and implement the National Evaluation Policy; and (2) to monitor the performance of departments and municipalities to improve service delivery. Located in the Presidency, this office has commissioned 24 national public policy evaluations in sectors including health, finance, agriculture, education, energy and water.
To analyse how the results and recommendations of nine public policy evaluations conducted between 2010 and 2013 were used, BEPPAAG undertook a study on the use of evaluation results. The general objective of this study was to ascertain the steps taken by line ministries to implement evaluation recommendations and so ensure the efficiency of public services. Significant efforts have been made since 2010 by the departments to use evaluation results to improve public services. The study showed that from 2010 to 2013, eighty (80) recommendations were made from evaluations, of which seventy (70) formed part of the concerned ministries' action plans. The ownership of the recommendations looks as follows:
- 39 (56%) were fully implemented at the time of the monitoring mission.
- 31 (44%) were partially implemented at the time of the monitoring mission; some were planned for the short and medium term.
- As for the 20 recommendations that were not implemented, the reasons given were a lack of financial resources or an institutional framework.
The study also notes that 40% of the recommendations led to the revision or formulation of new public policies, relating to technical education and vocational training, agriculture and handicrafts. In the energy sector, for example, they led to the development of the 2016 rural electrification policy. The study also found that 10% of the recommendations led to new institutional frameworks at two line ministries, and another 8% led to the formulation of new projects and programmes in the agricultural, water, and technical education and vocational training sectors.
The main lesson learned from this study concerns the quality of evaluation, which is critical to the use of evaluation results and can be measured in terms of the relevance of the evaluations' recommendations. The study also revealed that some evaluation recommendations are very wide-ranging and require multiple steps or reforms, which makes them harder to implement.
The Department of Planning, Monitoring and Evaluation (DPME) in the South African Presidency has put in place a framework for looking at evidence and knowledge systems. Data is collected through processes of researching, analysing and enquiring. Through visualising and analysing the data, the evidence is compiled, and what emerges is knowledge of what needs to be done differently to increase programmatic impact. Programme managers and policy-makers can then use this information in planning, implementation and budgeting/resource allocation. Decisions are taken regularly by those who implement policies on the ground, and that shapes what is actually implemented. If policy-makers fail to raise awareness or share their views on the evidence that informed a policy, the implementers will often do what works for them, and that is still the norm today.
There are regulations in the public administration that are not sector-specific but determine much of the behaviour of public institutions: complying with auditors, National Treasury, court orders and so on. The DPME therefore took a serious look at the incentives that shape decisions within the public sector and worked within them. That is, it collaborated extensively with public institutions, the implementers of evidence, to enhance the use of evidence, while still trying to find some compliance measures.
Government plays a very important role in shaping policy, but it is not the only actor making policy. It is often assumed that government is the place where policy is made and that it is relatively homogeneous. But policy is also contested within government, where different public institutions raise questions about which outcomes matter most, the quality of the evidence generated, and which evidence should take political preference.
Researchers can sometimes crowd out other forms of knowledge, ignoring politics and cultural thinking and thereby creating a monopoly over ways of knowing and communicating that knowledge. This affects whose voice is heard in evidence-informed decision- and policy-making (EIDM/EIPM), and raises the question of whether spaces can be created for equal sharing and learning, where power is distributed more equally and different views are heard and appreciated as much as the voices of those who conduct evaluations and publish the evidence.
In Uganda, decision-making happens daily in public sector programmes and in the implementation of government projects. A lot of evidence goes into the long processes of policy-making, and Uganda has a very consultative process. But how evidence is actually used to inform policies remains a point for discussion.
Uganda instituted a number of reforms to establish a robust National Monitoring and Evaluation System (NMES) to improve efficiency and deliver positive development results. In the process, it was discovered that there was little use of evidence in Uganda's policy process; for example, the agricultural sector's annual plan was unchanged from 2007 to 2009. Systematic mechanisms are needed to take the evidence generated to those who need it, repairing the disconnect between the people who need evidence and those who produce it.
Between 1997 and 2007, Uganda was preoccupied with poverty reduction projects. At the time, there were no tools to assess whether the projects were successful; in other words, there was no monitoring of projects. Another norm of the time was a focus on stabilising the economy and putting measures in place for economic growth. After 10 years of stability, Uganda developed a five-year national development plan. This plan allocates a significant amount of money to each sector, which raises many questions, such as:
- Is the government implementing the right policies?
- Are they doing the right activities?
- Are the right projects in place to lead the country to where it wants to go?
Answering those questions required better evidence, better evaluation systems and monitoring of the performance of government projects. The NMES was born of the nationwide implementation of reform and the ensuing demand for evidence to show that government projects were delivering results.
The Ugandan NMES included robust assessments of selected programmes' performance in the previous year, to ensure that the following year's programme plans were informed by that performance. All sectors were also required to demonstrate strong use of evidence on what was needed to achieve their set outcomes.
Of note is that most Ugandan research is done by local universities, with little engagement between government and researchers on priority research areas or agenda setting. This translates into low exposure of policy-makers to new evidence. To combat this, the national research institute was formed; it has been particularly successful in the agricultural sector, where evidence on disease-resistant crops was widely accepted by farmers.
Going forward, Uganda will need to work out how best to extend the culture of evidence use across the relevant departments, so as to avoid a situation where only auditors or evaluation institutes use evidence.
A number of evaluation recommendations in government are not fully implemented due to a host of constraints, and thus the opportunity for learning and improvement is lost. These constraints relate to time (delays in completing evaluations), finances (evaluations are costly) and human capacity (a general lack of experienced evaluators in the country). In addition to these challenges in programme/policy implementation and monitoring, country governments continuously face emergencies requiring timeous and informed intervention strategies.
To this end, Twende Mbele and the Department of Planning, Monitoring and Evaluation (DPME) in South Africa are looking for a consultant or consultants to work with DPME to explore current practice in rapid evaluation tools for the public sector.
The following are expected outcomes from the project:
- A desktop comparative analysis that explores existing rapid evaluation tools across sectors and compares them, to enable a judgement on which would be most amenable to the South African Public Service.
- Two rapid evaluation tools, including relevant templates and guidelines for the application of the rapid evaluation approach for piloting by DPME.
Please read the full terms of reference here.
Applications close on Friday December 7th 2018.
By: Mokgophana Ramasobana and Nozipho Ngwabi
The blog titled “Made in Africa Evaluation: Africa's novel approach towards its developmental paths (Part 1)” provided a historical overview of some of the initiatives proposed by various African scholars and evaluation practitioners to pioneer the MAE concept. These include Prof. Zenda Ofir, Prof. Bagele Chilisa, Dr. Sukai Prom Jackson and Dr. Sully Gariba, to name a few. As a follow-up, Part 2 of the blog explores some of the factors that influence the maturing of the MAE concept beyond rhetoric and into practice, and raises a question about its uptake within the broader evaluation discourse.
The seed for Nozipho and me to collaborate on this blog germinated quickly. Two international events provoked our thinking and expedited the opportunity to co-write it. The first was a seminar titled “Decolonising the Evaluation Curriculum”, hosted by CLEAR-AA during the delivery of the Development Evaluation Training Programme in Africa (DETPA). The panellists comprised national and international evaluation experts: Prof. Bagele Chilisa (University of Botswana), Dr. Nombeko Mbava (University of Cape Town), Dr. Kambidima Wotela (University of the Witwatersrand) and Ms. Adeline Sibanda (AfrEA President), with Ms. Candice Morkel (CLEAR-AA) as moderator. The second was a panel discussion titled “There is no Resilience without Equity: When will our Profession Finally Act to Reverse Asymmetries in Global Evaluation?”, chaired by Ms. Adeline Sibanda (AfrEA President) at the 13th European Evaluation Society (EES) Biennial Conference. Both events were characterised by heated debates among panellists and participants, and from them Nozipho and I identified four key themes that emerged as common threads. These four themes inhibit the deepening of the discourse around MAE, both conceptually and in practice: (i) over-reliance on Western worldviews or paradigms; (ii) dominance of donors as commissioners of African evaluations; (iii) supply-chain practices that crowd out African evaluators; and (iv) the perceived infancy of the evaluation profession in Africa.
(i) Worldviews or paradigms
The colonisation of African people in the 19th century had the dire consequence of desecrating their traditional knowledge systems, cultural practices, values and beliefs (Kaya and Seleti, 2013). Scholars argue that Eurocentric or Western worldviews of “knowledge” are yet to appreciate alternative, non-Western ways of knowing and producing knowledge. As a consequence of this lack of appreciation, African or indigenous knowledge systems are less documented and evidenced in the broader academic discourse (ibid.). The evaluation profession is not immune to this influence of Western paradigms: the theories informing evaluation practice in Africa are dominated by them (Cloete, 2016).
Various African scholars (Chilisa, 2012; Chilisa and Tsheko, 2017; Shiza, 2013; Ofir, 2018) have embarked on numerous initiatives aimed at championing indigenous or localised African knowledge systems in the evaluation sector. These initiatives seek to ensure that Afrocentric approaches, inter alia methodologies, ways of knowing and philosophies, are embedded in evaluation praxis. Studies elevating Afrocentric paradigms include work on indigenous knowledge systems (IKS) (Keane, 2008; Geber and Keane, 2013; Keane, Khupe and Seehawe, 2017; Khupe and Keane, 2017); the decolonisation and indigenisation of evaluation (le Grange, 2016; Chilisa, Major, Gaotlhobogwe and Mokgolodi, 2016); and MAE, or African-led and African-“rooted” evaluations (Cloete, Rabie and de Coning, 2014; Chilisa, 2017; Ofir, 2018). These authors acknowledge that African voices and their ways of knowing should be integrated into the discourse of development. In spite of these commendable initiatives, African knowledge systems and paradigms remain insufficiently used, specifically in evaluation practice on the continent. We have to ask ourselves why this is the case.
To avoid the risk of providing a simplistic solution to a complex phenomenon, we recommend that opportunities should be created for collaboration between young and experienced African scholars to proactively pursue a research agenda around MAE and the translation of the findings into evaluation practice. However, this issue requires deeper conversations within the evaluation community around ways in which this shift in approach can be attained.
(ii) Dominance of donors as commissioners of African evaluations
Accountability for the financial investments injected into Africa by donor communities elevated the demand for evaluation and has played a significant role in the institutionalisation of evaluation practices (Tirivanhu, Robertson, Waller and Chirau, 2018). This is corroborated by the African Evaluation Database (AfrED) report (2017), commissioned by CLEAR-AA in collaboration with CREST for the period 2005-2015, which shows that donors commissioned 69% of evaluations, with the remaining 31% split between NGOs and governments. Notably, in these reports non-African evaluators have been appointed as project leads responsible for technical and strategic activities during evaluation assignments, while African experts have been relegated to supporting activities such as administrative and logistical duties (Mouton and Wildschut, 2017). These disparities in roles and authority to some extent validate the widely held view that African scholars are less skilled at executing credible evaluations (Tirivanhu, Robertson, Waller and Chirau, 2018, p. 230). Once again, a trite solution cannot be offered for such a complex problem, but commissioners of evaluations (particularly donors) could consider revising procurement regulations to facilitate equally shared responsibilities between African and Western experts. In addition to capacity-building initiatives focused on building African evaluation expertise, it is time to also look at the legal-technical and administrative levers (such as procurement) that could catalyse changes in the existing patterns of supply and demand on the continent.
(iii) Supply-chain Practices Crowd out African Evaluators
Building on the second theme, the evaluation field in Africa has been, and remains, dominated by the Global North. Cloete (2016, p. 55) states that “Evaluations in Africa are still largely commissioned by non-African stakeholders who mostly comprise international donor or development agencies that run or fund development programmes on the continent”. In addition, current supply chain frameworks insist that evaluation expertise be sourced from the development agencies' countries of origin. This observation coincides with Phillips's (2018) findings from a study of four major donors who commission evaluations in South Africa: the majority of international donor evaluation contracts in South Africa are obtained by international companies, who often sub-contract local expertise to help them understand the local context. This means that evaluation criteria, methods and approaches are designed from a Global North orientation, and that minimal effort is made to contextualise or ‘indigenise’ evaluations.
This situation raises concerns around the cultural competency of evaluators to conduct evaluations in African contexts, particularly if they are led by donor or development organisations who do not recognise the importance of this aspect of evaluation practice (AEA, 2011; Hopson, 2003; Rebien, 1997). We acknowledge that more work needs to be done in developing a body of knowledge of Afrocentric paradigms, ways of knowing and methodologies for conducting and commissioning evaluations in Africa. Once this is available, a rich database of African methods could be made available globally. This will contribute towards the incremental documentation of Africa's ways of knowing, elevating the indigenisation of evaluation practice as well as the prominence of African knowledge systems.
(iv) Perceived infancy of the evaluation profession in Africa
The slow progress of professionalisation of the evaluation discipline is common globally, as only a few countries have formally professionalised evaluation (Podems, 2015). M&E has not been professionalised in any African country, and this may be one of the main gaps behind the slow progress of the Made in Africa concept. It is only in fairly recent years that monitoring and evaluation capacity-building programmes, such as the CLEAR Initiative, the International Programme on Development Evaluation Training (IPDET), trainings offered by Voluntary Organisations for Professional Evaluation (VOPEs) such as the African Evaluation Association (AfrEA) and the South African Monitoring and Evaluation Association (SAMEA), as well as university programmes, have been developed to contribute to the growth of evaluation in Africa (Stockdill, Baizerman and Compton, 2002; Stewart, 2015; Denney and Mallett, 2017).
Scholars generally concur that professionalising evaluation should be a priority (Montrosse-Moorhead and Griffith, 2017; Podems and Cloete, 2014; Lavelle, 2014). The idea of professionalisation appeals to those looking to improve quality control in the practice of evaluation and to address the lack of uniformity in the field and in the roles of evaluation practitioners. Thus, without the standardisation of evaluator competencies on the continent (or, one could argue, globally), it is difficult to fit the ‘Made in Africa’ concept into the several other standardisation issues we already face.
In summary, addressing the four constraints highlighted above to bring the MAE concept to maturity requires greater cohesion and more intensive championing among practitioners and scholars. As a way forward, we propose that a few disruptions be introduced into the system to stimulate change in the well-entrenched patterns of evaluation practice in Africa. These include: intensified research collaboration between experienced and young African scholars to establish a body of knowledge for MAE; adjustments to procurement practices, which could, for example, include a compulsory split between African and Western experts with equally shared responsibilities in evaluation assignments; the commissioning and conduct of inter-disciplinary evaluations; and expedited momentum towards the professionalisation of evaluation practice in Africa.