The evaluation will cover the full programme period (Nov 2021–Nov 2026) and all four expected results. It will address both programme-level performance and a purposive sample of research consortia, examined as case studies to trace pathways from research outputs to uptake and outcome-level change.
The geographic scope will be the COMIFAC/CEEAC region and other countries covered by RESSAC-funded research activities. The evaluation team will propose a feasible sampling plan during inception, balancing country coverage with depth.
Cross-cutting dimensions that the evaluation must address include:
Indicative key evaluation questions (organized by evaluation criteria)
The evaluation will be guided by criteria commonly used for research programme evaluations, including relevance, scientific quality, efficiency, effectiveness, impact (with an emphasis on outcome-level influence), and sustainability. The questions below are a reduced, indicative list, to be finalized during inception, that maintains balanced coverage of themes and key evaluation priorities.
Relevance and coherence
Scientific quality, interdisciplinarity and knowledge production
Effectiveness, results and uptake/use
Capacities, post-docs and unexpected outcomes
Governance, efficiency and implementation learning (including MTE)
1. Governance & management: To what extent did governance and management arrangements (programme and consortia) enable timely, high-quality implementation, as well as effective partner involvement in knowledge co-production and use of results?
2. Bottlenecks & MTE: What were the main bottlenecks (mobility/visas, administrative capacities, transfers, reporting), how were they managed, and to what extent were lessons and recommendations from the mid-term evaluation (MTE) taken up?
Impact, sustainability and forward-looking perspectives (RESSAC 2)
Methodology and evaluation approach
The evaluation will use a theory-based, mixed-methods approach suited to research programmes, where outcomes may emerge through multiple contribution pathways and with time lags. The team is expected to triangulate evidence across sources and stakeholder perspectives and to be explicit about the limits of attribution and contribution claims.
Overall design
Sampling strategy (to be finalized during inception)
The evaluation team will propose a sampling strategy that is feasible and defensible, balancing breadth and depth. At minimum, the sample should:
Analysis and synthesis
Limitations and mitigation
Data availability and collection
The evaluation will draw on programme documentation and existing monitoring information, complemented by primary data collection with key stakeholders.
Data sources and methods (indicative)
Data management and ethics
The evaluation team will apply informed consent procedures, ensure the confidentiality of interviewees, and comply with applicable safeguarding and data protection requirements.
A set of key documents will be made available to the evaluation team. The team may request additional materials, including consortium final reports, consolidated monitoring data, and evidence of uptake.