I was a team leader on two rapid evaluations in Uganda. These evaluations included:
- assessing the extent to which the remitted 20% of national park entry fees has been effective in improving the livelihoods of communities in the Local Government districts surrounding the wildlife protected areas;
- exploring the challenges in Local Government staffing, focusing on the filling of critical positions in Local Government structures in Uganda.
This blog highlights my experiences of successfully undertaking two rapid evaluations, paying special attention to: benefits of the use of a hybrid/facilitated model for doing rapid evaluation; developing rapid evaluation topics; ensuring buy-in for rapid evaluation and use; and lastly, ensuring the quality assurance of the rapid evaluation in Uganda.
Benefits of the use of a hybrid model for doing rapid evaluations
As outlined in the Rapid Evaluation Guidelines, there are a few options for governments wanting to start doing rapid evaluations. In Uganda’s case, the government has moderate internal team capacity and wanted to use an external facilitator to assist the internal team with the technical aspects of the evaluations. This is what we call a ‘hybrid model’.
There have been notable benefits to using this capacity-development approach to undertaking rapid evaluations. Some of these include:
i) a mix of ideas and expertise from the mainstream government technical teams and the independent consultant, which is a good practice in evaluations;
ii) a quick consensus on the clarity of the study scope, methodologies and expected evaluation results;
iii) visible knowledge sharing and transfer;
iv) increased impetus for buy-in, internal clearances and uptake of the evaluation findings to inform policy change, as well as improved implementation of government programmes/projects.
Developing rapid evaluation topics
Developing rapid evaluation topics and questions requires demand-driven approaches. In most cases, it begins with identification of the knowledge gap/evaluation problem that calls for quick solutions to improve service delivery. These gaps are often identified through quarterly, semi-annual and annual reviews of various government programmes and projects, or from Cabinet and Ministers. The Office of the Prime Minister is then tasked with drafting the terms of reference (ToR), securing funding, and setting up internal and/or external evaluation teams, including the quality assurance procedures.
In this case, the internal team had selected the evaluation topics through consultation with relevant departments and ministers, and designed the evaluations during a Twende Mbele-led training on how to do rapid evaluations. As the team leader, my job was to ensure the design was feasible and rigorous, and that it could be completed within the desired timelines.
Ensuring rapid evaluation buy-in and use
Throughout the rapid evaluation process, it is absolutely necessary that the commissioners seek clearance from both the technical and policy organs of government. All the processes from evaluation design to dissemination should include participation of the relevant stakeholders/consumers of the evaluation findings. It is important that several forms of dissemination are used, such as reports, policy briefs, press conferences, symposia, website blogs, etc.
There should be follow-up meetings after the dissemination of rapid evaluation reports to ensure that there is not only uptake but also use of the evaluation results to improve service delivery. As team leader, I was responsible for ensuring there was a plan for dissemination and for quality assuring some of the written outputs produced by the team.
Ensuring quality assurance of rapid evaluations
Ensuring high-quality rapid evaluations is about managing the risks and errors along the process. A clear multi-sectoral quality assurance framework was used to keep track of the evaluation processes. In Uganda, the quality assurance mechanisms are premised on the national M&E strategy framework of the country’s M&E eco-system. Within the quality assurance framework are principles and standards to follow in conducting rapid evaluations. The rapid evaluations in Uganda are effectively coordinated by the national M&E multi-sectoral working group, guided by the national M&E strategy. In some cases, reference groups are set up to provide unique technical input to enrich the evaluation processes and findings.
As we can see, many elements come together to ensure the success of a rapid evaluation – many of which are not dissimilar to those of traditional evaluations. In this blog I have highlighted some of the elements which were critical to Uganda successfully completing two rapid evaluations using a facilitated/hybrid model.
Observations from Aloyce Ratemo and Timothy Lubanga
When COVID-19 hit, it took us all by surprise. In response to the pandemic, many governments were unsure of which measures to take and for how long. Governments’ already limited resources were stretched, and the economic activities which governments rely on for service delivery were increasingly hampered.
Government demand for M&E
Kenya’s government prepared a Post-COVID-19 Economic Recovery Strategy, a two-year strategy to put the Kenyan economy on a recovery path. This Strategy included M&E indicators to track the progress of the different initiatives. When preparing the recent 2019/20 and 2020/21 Annual Progress Reports, it was clear that economic activity had reduced significantly, and that further monitoring information was needed to guide next steps. Surveys from the Kenya National Bureau of Statistics painted a bleak picture of the economy, particularly around job losses and business closures, further indicating the negative impact of COVID-19 on the economy. This evidence has aided resource allocation and mobilisation efforts.
In Uganda there was an increased demand for evidence, particularly health and economic statistics and M&E data. The government of Uganda, facing social pressures, came up with a strategy to increase its social welfare programmes for the hardest-hit citizens. However, it became clear that the Bureau of Statistics did not have the necessary data to find and understand who was hardest hit. As a result, the government issued a directive to invest more in statistics and M&E systems so as to have the information necessary to better target service delivery to vulnerable groups.
In Uganda and Kenya there has been a reduction in development assistance funding over the last couple of years. Increased demand for M&E has been driven mainly by governments’ desire for prudent spending and budgeting: ensuring they put resources where they will yield the greatest benefits. Governments have been doing this through better prioritisation and maximisation of resources, particularly in the last 18 months.
Impact on citizens’ participation in holding government accountable
Juggling the demands of social distancing and lockdowns on one hand with the desire for citizen participation and feedback to government on the other has been a challenge. Existing fora like Barazas continue to exist, but with limited numbers and a greater focus on COVID-19 activities, which may mean other services have suffered.
In Kenya, one of the government’s containment measures was to ban all forms of in-person meetings and trainings. Unfortunately, this has stalled many engagement points with communities, and progress reports from the various government agencies show that most targets were not achieved due to COVID-19 containment measures.
Expanding Information and Communications Technologies (ICTs)
The move to working from home has not always been easy, but the evaluation community has adapted well. At Kenya’s recent 9th National M&E Conference, over 600 participants engaged via online platforms – the highest attendance of the event since its inception. Another positive outcome of the online migration has been a more streamlined annual progress report, which tracks the implementation of Kenya’s Medium Term Plan. Online methods of data collection and validation facilitated completion of the annual progress report in record time – three months earlier than usual.
Uganda saw a slower uptake of online engagements and tools; however, since embracing the transition, online migration has increased participation and attendance at meetings, workshops and conferences. It has also encouraged essential upgrades to information and communications technologies, while saving money on international and local travel.
Expectations for the future post COVID
The COVID-19 pandemic brought to light the lack of the adequate systems, data and statistics needed to guide decisions and learning in a more nimble way. Investing in ICT will provide opportunities for greater collaboration, communication and cost-effective monitoring. However, governments must continue to prioritise their own evidence needs and invest in quality data systems, rapid evaluations and long-term goals. There is a need for capacity building on how to conduct M&E in the context of pandemics, M&E plans need to be revised to address recovery strategies, and new data collection methods and methodologies for conducting M&E should be explored.
With the ten-year countdown to the Sustainable Development Goals underway, systems to evaluate the impact of policies and monitor their progress are more important than ever. To sustain the momentum of global efforts to promote monitoring and evaluation capacity, the CLEAR Initiative will convene the second annual gLOCAL Evaluation Week from June 1st to 5th of this year. To take part in this year’s gLOCAL Evaluation Week, simply register and submit your proposal. Applications close 6th March 2020.
Instituted in 2017, the Directorate of Monitoring, Evaluation and Inspection (DMEI) in the Office of the Prime Minister (OPM) is mandated to monitor and evaluate government policies, programmes and projects across Ministries, Departments and other Public Institutions. This mandate is fulfilled by generating evidence (e.g. data) on government interventions, good practices, challenges and lessons learnt, and disseminating these to the relevant stakeholders.
However, to fully meet the objectives of DMEI, a strategic communication function needed to be formulated. DMEI therefore realised the need to deliberately form a team of specialists to be in charge of reporting and disseminating monitoring, evaluation and inspection results. The design of a country-level Communications Strategy was supported by Twende Mbele – a multi-country peer-learning initiative aimed at strengthening the use of monitoring & evaluation systems, processes and results to improve the performance and accountability of African governments.
A journey towards the communication strategy
Under the DMEI, the Evaluation Communications Team (ECT) has been working with Twende Mbele to discern different stakeholder needs, and strategies for engagement, so as to best build this results-orientation. The first result of this work has been the development of a Communication Strategy that outlines:
- objectives of communication (broadly for the Directorate and for each campaign),
- target audience and key messages,
- channels of communication for effective delivery of the intended messages to each given audience,
- modes of communication,
- personnel responsible for implementing the communications strategy.
Further, we want to work with other government Ministries, Departments and Agencies and the public to:
- increase awareness of the evaluation process, the results and the actions being taken,
- gain support for government changes resulting from the evaluation
- facilitate stakeholder input into the evaluation process.
By achieving our objectives through implementing the Communications Strategy, we will be able to further DMEI’s work of guiding policy formulation and implementation, improving service delivery and ensuring appropriate resource allocation.
Finding the Audience
The focus of communication actions depends on the nature of the targeted audience and their influence in bringing about the desired changes. The OPM has a wide range of audiences to communicate information and results with, including citizens, policy makers, legislators, institutions of learning, researchers, civil society organisations and other partners. Messaging and communication channels/formats for the different types of stakeholders are tailored to audience needs based on their orientation, perception and influence towards government business.
Communications functions are integral to improving engagement with non-traditional actors – for example, reaching out to CSOs to review the National Public Policy on M&E to ascertain its responsiveness to the equity and capacity needs of non-government actors. Other, more participatory approaches are already underway, such as Barazas: community engagement fora which have been found to be effective in generating feedback on service delivery for improvement.
In the same way, the audience is the basis for decisions on the communication channels and tools used to transmit the intended messages. Again, the diversity of stakeholders requires the ECT to be flexible and adaptable to differing needs, and to take a learning approach to communications.
The ECT takes the lead in the evaluation communication process and, at the end of the year, will embark on evaluating the communication approaches used. Evaluation of the communication strategy is done retrospectively by reflecting on the objectives set during the design of the strategy and measuring performance against them. For instance, OPM will assess changes in policy formulation, service delivery, political support and resource allocation, among others, against the initial communication objectives.
Lessons for future success
It is central that the initiators of an effective communication strategy are well capacitated to further communication efforts. Additionally, building the capacity of multiple stakeholder groups (particularly working with the media on evaluation) to strengthen their understanding and skills will also be required. It can be observed that limited participation in capacity-building activities has resulted in weak ownership of some interventions.
For more information on the work of the DMEI or communications in the OPM M&E, please contact:
- Acting Director M&E – Mr. Timothy Lubanga
- Acting Assistant Commissioner M&E – Mr. Abdul Muwanika
- Information Scientist M&E – Mr. Joseph Muserero
- Twende Mbele National Coordinator M&E – Ms. Doris Kembabazi
- Information Systems Officer M&E – Ms. Florence Mbabazi
The need to understand Evaluation use by NGOs
This blog draws from research conducted by Elizabeth Asiimwe, a Twende Mbele PhD Fellow at the Mbarara University of Science and Technology, Uganda, in 2017. The study is premised on the assertion that, although there is a growing discourse on evaluation utilisation for evidence-based policy making in the public sector, very few studies have focused on evaluation utilisation in the NGO sector in Uganda. This is despite considerable investments by funding agencies such as the Department for International Development (DFID) and the United States Agency for International Development (USAID) in Uganda. More information on the USAID evaluation policy can be found at: https://www.usaid.gov/evaluation, and for DFID, in the Uganda Operational Plan 2011-2016 https://assets.publishing.service.gov.uk/government/uploads/system/uploads/attachment_data/file/389293/Uganda.pdf.
In essence, NGOs do important social change work that should inform policy making. In this regard, continuous improvement of NGOs’ work through monitoring and evaluation use is fundamental for enhancing evidence-based policy making – provided the system is set up to receive this information. However, in Uganda there is a large gap in studies that explore the utilisation of evaluation results. Asiimwe’s study aimed to contribute towards addressing this empirical gap.
Focus of the Study
The study assessed evaluation utilisation among non-governmental organisations in Uganda, aiming to generate evidence to guide policy makers, project planners, evaluation planners, commissioners and implementers on the determinants of evaluation utilisation. A mixed-methods approach was used to collect data from a random sample of 404 respondents across 101 NGOs, using an online tool (SurveyCTO), focus group discussions and key informant interviews. The study adopted a conceptual framework developed by Fleischer and Christie (2009) http://www.managingforimpact.org/sites/default/files/resource/evaluation_use_results_from_a_survey.pdf. The framework identifies three major factors that influence evaluation utilisation: evaluation implementation characteristics, organisational characteristics and evaluator’s characteristics.
What were the key research findings?
The respondents were from national, community-based and international NGOs working in the areas of health, governance, agriculture, water & sanitation, education and child support. National and international NGOs reported having a monitoring and evaluation department, M&E staff and an M&E strategy, and having conducted at least one evaluation in the past five years. The study established that 80% of the respondents had conducted an evaluation in the last five years (Figure 1) and, of these, 68% reported that they had utilised evaluation findings (Figure 2). Evaluation implementation factors that strongly determined evaluation utilisation were: the relevance of that particular evaluation to the organisation’s current strategy, and the credibility and quality of the evaluation. Although the timeliness and quality of communicating evaluation results were seen as determinants, they were not reported as very crucial. Organisational factors identified included: the organisation’s receptiveness to new ways of doing things, the need for evaluation information to support decision-making, the organisation’s trust in the results presented from a particular evaluation, and the perceived implications of the results for the continuity of projects/programmes. Evaluators’ characteristics included the evaluators’ experience and methodological expertise. The evaluators’ relationship or communication with staff did not have a strong influence on whether their evaluation results were utilised.
Figure 1: Proportion of organisations that have done evaluations
Figure 2: Utilisation and non-use of evaluation findings
A proposed way forward
The study clearly illustrated that evaluation use is determined by an interplay of different factors. Not all factors that influence evaluation utilisation elsewhere (including the interest of other stakeholders in the evaluations, the relationship of staff with the evaluators, and the evaluator’s communication skills) determine utilisation among non-governmental organisations in Uganda. It is therefore worthwhile for commissioners and/or evaluators among NGOs in Uganda to keenly consider those factors that would influence utilisation as they plan and execute evaluations. It is imperative that intended users are considered and are receptive to evaluation findings for utilisation to happen. Importantly, planners, donors and policy-makers ought to mainstream evaluations for policy and programming, and to provide sufficient budgets for evaluation to allow the generation of evidence that supports decision-making as well as informing programming in general. A full version of the study will be available on the Twende Mbele website soon.