The need to understand evaluation use by NGOs
This blog draws from research conducted by Elizabeth Asiimwe, a Twende Mbele PhD Fellow at the Mbarara University of Science and Technology, Uganda, in 2017. The study is premised on the assertion that, although there is a growing discourse on evaluation utilisation for evidence-based policy making in the public sector, very few studies have focused on evaluation utilisation in the NGO sector in Uganda. This is despite considerable investments by funding agencies such as the Department for International Development (DFID) and the United States Agency for International Development (USAID) in Uganda. More information on the USAID evaluation policy can be found at: https://www.usaid.gov/evaluation, and for DFID, in the Uganda Operational Plan 2011-2016 https://assets.publishing.service.gov.uk/government/uploads/system/uploads/attachment_data/file/389293/Uganda.pdf.
In essence, NGOs carry out important social change work that should inform policy making. Continuous improvement of NGOs’ work through monitoring and evaluation use is therefore fundamental for enhancing evidence-based policy making – provided the system is set up to receive this information. However, in Uganda there is a significant gap in studies that explore the utilisation of evaluation results. Asiimwe’s study aimed to contribute towards addressing this empirical gap.
Focus of the Study
The study assessed evaluation utilisation among non-governmental organisations in Uganda, aiming to generate evidence to guide policy makers, project planners, evaluation planners, commissioners and implementers on the determinants of evaluation utilisation. A mixed methods approach was used to collect data from a random sample of 404 respondents across 101 NGOs, using an online tool (SurveyCTO), focus group discussions and key informant interviews. The study adopted the conceptual framework developed by Fleischer and Christie (2009) http://www.managingforimpact.org/sites/default/files/resource/evaluation_use_results_from_a_survey.pdf. The framework identifies three major factors that influence evaluation utilisation: evaluation implementation characteristics, organisational characteristics, and evaluators’ characteristics.
What were the key research findings?
The respondents were from national, community-based and international NGOs working in the areas of health, governance, agriculture, water & sanitation, education and child support. National and international NGOs reported having a monitoring and evaluation department, M&E staff, an M&E strategy, and having conducted at least one evaluation in the past five years. The study established that 80% of the respondents had conducted an evaluation in the last five years (Figure 1) and of these, 68% reported that they had utilised evaluation findings (Figure 2). The evaluation implementation factors that most strongly determined evaluation utilisation were the relevance of a particular evaluation to the organisation’s current strategy, and the credibility and quality of the evaluation. Although the timeliness and quality of communicating evaluation results were seen as determinants, they were not reported as very crucial. Organisational factors identified included: the organisation’s receptiveness to new ways of doing things, the need for evaluation information to support decision-making, the organisation’s trust in the results presented from a particular evaluation, and the perceived implications of the results for the continuity of projects/programmes. Evaluators’ characteristics included evaluators’ experience and methodological expertise. The evaluators’ relationship or communication with staff did not have a strong influence on whether their evaluation results were utilised.
Figure 1: Proportion of organisations that have done evaluations
Figure 2: Utilisation and non-use of evaluation findings
A proposed way forward
The study clearly illustrated that evaluation use is determined by the interplay of different factors. Not all factors that influence evaluation utilisation elsewhere (including the interest of other stakeholders in the evaluations, the relationship of staff with the evaluators, and evaluators’ communication skills) hold the same influence among non-governmental organisations in Uganda. It is therefore worthwhile for commissioners and/or evaluators among NGOs in Uganda to keenly consider the factors that would influence utilisation as they plan and execute evaluations. It is imperative that intended users are considered and are receptive to evaluation findings for utilisation to happen. Importantly, planners, donors and policy-makers ought to mainstream evaluations for policy and programming, and to budget sufficiently for evaluation to allow the generation of evidence that supports decision-making as well as informing programming in general. A full version of the study will be available on the Twende Mbele website soon.
Twende Mbele is searching for a Communications and Media Consultant to provide communications support to Twende Mbele core countries (Benin, Uganda, South Africa) to help them build a foundation for effectively communicating on M&E at the national level.
All interested applicants should send their CV, motivation letter addressing the selection criteria and a sample of writing to firstname.lastname@example.org by 23rd July 2018.
Only short-listed candidates will be contacted.
Twende Mbele is searching for a Knowledge Management (KM) Consultant to, firstly, support the Programme Secretariat in establishing a foundation for knowledge management on M&E topics and experience sharing within the initiative and beyond, and, secondly, to compile the Twende learnings for stakeholders of M&E systems in Africa who seek to learn from the experience of others.
All interested applicants should send their CV, motivation letter addressing the selection criteria and a sample of writing to email@example.com by 16th July 2018.
Only short-listed candidates will be contacted.
By Dr Nana Opare-Djan
Over the past year the M&E landscape in Ghana has seen significant progress towards institutionalising development evaluation across the entire public sector. Contributions from several stakeholders at different fora held across the country indicated a high degree of interest in M&E, especially the need to draft a National Evaluation Policy (NEP) to guide the conduct of evaluation for evidence-based decision making at all levels of government.
M&E architecture in Ghana – NDPC/Ministry of Planning and Ministry of M&E
Under its mandate in articles 86 and 87 of the 1992 Constitution, the National Development Planning Commission (NDPC) continues to prepare the Annual Progress Report (APR) within the framework of the Cross-Sectoral Planning Groups (CSPGs). The process involves engaging technical experts with the relevant background and knowledge to review, analyse, and report on progress in the implementation of policies, strategies and programmes using agreed sets of national indicators. The Ministries of Planning and of Monitoring & Evaluation (MME) have been created through an amendment of the Civil Service Act (PNDCL 327) by Executive Instrument E.I. 38 to support government results delivery.
Official Launch of the Postgraduate Diploma in Monitoring and Evaluation Programme
The Center for Learning on Evaluation and Results (CLEAR) at the Ghana Institute of Management has been playing a pioneering role in the professionalisation of Monitoring and Evaluation in the West African sub-region. As one of our key partner institutions in the M&E professionalisation journey, it successfully hosted our first official Postgraduate Diploma in Monitoring and Evaluation Programme in February 2018.
National Public Sector Reform Strategy (NPSRS), 2018-2023
The NPSRS is a five-year strategy conceptualised to improve public sector performance, especially the delivery of services to citizens and the private sector. A stakeholders’ consultation meeting was held to finalise work on the Public Sector Reform for Results Project (PSRRP) 2018-2023. The objectives of the meeting were to: review and finalise project objectives, scope, design features and results framework with key public sector stakeholders; review the proposed components of the PSRRP and agree on details of coverage; discuss financial and procurement arrangements, including necessary assessments; and define and agree on a preparation programme, key dates, and the budget.
National Evaluation Policy (NEP)
The Steering Committee members met recently to update the roadmap for drafting the national evaluation policy. The committee recognised the need for other experts to comment on the draft NEP, provide policy guidance, and identify lessons learnt from other contexts for inclusion. A Reference Group responsible for providing technical and strategic guidance is to be established to support the Steering Committee.