Insight #2: Monitoring & Evaluating Impact
Could M&E be a valued tool rather than a ticked box?
When our ISO 56000 Innovation Management Systems workshops get to the monitoring and evaluation guidelines, one or more of the following tends to happen: groans, grimaces, and glassy-eyed stares.
The Standards promote embedding M&E into the DNA of an innovation system, yet the R&D space seems to grapple with how to do that effectively. Curious to understand this reaction better, we've asked some uncomfortable questions to find out why.
This brief insight shares what we’re hearing from the research community and what some standout organisations are doing differently. It outlines where the constraints lie and what funders, grant managers, and research providers can do to remove them. It’s offered in the spirit of collaboration, with the aim of helping everyone who works in and around research to lift the overall measurability and appreciation of R&D impact.
What’s the central issue?
It's a misalignment between what the metrics mean, why they matter, and when they're actually useful to decision-makers.
Although M&E frameworks include KPIs, it's often unclear what "improved capability" or "increased adoption" looks like. M&E frameworks designed by someone else become onerous, meaningless add-ons when there's no shared understanding of what researchers should be looking for as they collect and interpret the data.
When it’s vague, it has no value.
5 common misconceptions that don’t help decision-makers
- KPI checklists will reveal the ROI and what shape it takes. M&E designers who don't understand R&D processes like iteration and validation put the emphasis on the evaluation instead of the monitoring: "Was that a good investment?" instead of "Is this investment still a viable solution for the problem it seeks to solve?"
- Research teams only need high-level M&E frameworks. Many high-level frameworks look good in theory, but they don't provide practical advice on how to collect and analyse the data that interests investors.
- Because research outcomes are not guaranteed, we can't specify outcome targets. Without outcome targets there's no "north star" for the project to follow. KPIs that use vague wording like "enhanced" or "improved", with no targets attached, can let researcher behaviour default back to a concern with activities and outputs alone.
- M&E frameworks are just mandatory measuring sticks for the end of the project. Good ones are actually management tools that help you make better decisions about optimising the project's potential as a valued solution, from day one and throughout the life of the project.
- Well-managed research projects don't need to invest in M&E. But without M&E, even a project that seems effective offers no way to gauge progress towards outcome and impact delivery.
In our Principles for R&D Impact (2024), we explain that impact doesn’t just refer to long-term outcomes. It’s also about how an organisation, industry or society can derive value from more efficient use of R&D resources – and that includes time, expertise and facilities as well as money.
We've shared the principles, including numbers 4 and 5, which relate to M&E, with more than 500 R&D professionals to date. When we ask, "What if M&E could be integrated instead of tacked on? What difference would that make?", here's what they tell us.
What if M&E tracked incremental impact at the same time as activities and outputs?
What if outcome progression were measured throughout the project, not just at the end?
Some argue this doesn't reduce the M&E "burden"; it just spreads it out. Others suggest it might help researchers manage expectations as well as their workload.
From an investment perspective, what would support more confident and timely decision-making: measurable differences reported at milestones, or a single "was it worth it?" verdict at the end of the project?
M&E-focused research organisations don’t measure more; they design differently, resource differently, and use findings differently.
Here’s what’s different about R&D organisations that integrate outcome and impact tracking
When we talk to clients about what they want to gain from doing M&E differently, we share what we've learned from 18+ years of designing and reviewing frameworks across different industries. These are the common traits of organisations that do M&E well:
- Clear purpose: They decide what they want the process to do – support accountability, learning, improvement, impact delivery, etc. – then design M&E frameworks accordingly.
- Early integration: Frameworks are built into their project and program design. M&E isn’t “done to” researchers or managers, but is a shared responsibility that is “done with” them.
- Collaboration: Input is sought from evaluation specialists for methodological rigour, program managers for operational feasibility, researchers for technical accuracy, and industry stakeholders for relevance.
- Communication and clarity: KPIs include sufficient detail for researchers to know what impact looks like and how it will be measured, in clearly defined rather than abstract terms.
- Appropriate investment: They allocate ~5–10% of the project budget to M&E because it's considered integral to research excellence, not an overhead.
- Capacity building: Program managers and researchers are trained to use M&E tools meaningfully.
- Good governance: Boards or panels regularly review results AND whether the M&E framework remains fit for purpose. They welcome “lessons learned” reports, not just success stories, because M&E is a strategic lever.
The bigger opportunity
We believe that transforming M&E from a tacked-on compliance function into an integrated management tool builds trust and confidence in a program's potential to generate measurable ROI. That, in turn, makes the R&D project more attractive to 'next investors'. It also accelerates learning and strengthens an organisation's reputation for accountable, robust, high-impact research.
Have you noticed similar patterns in your experience? What’s working well in your field or industry that we should be documenting? Let us know what you think the R&D landscape needs for impact to be recognised and prioritised: community.mgr@impactinnovation.com
Read more: Insight #1 – RFP Design and its Influence on Impact