In the past three weeks I have been working quite a lot on monitoring again, as it is one of my focus areas (together with knowledge management/learning and communications): processing and analysing the results of RiPPLE monitoring for the first time, developing the WASHCost monitoring and learning framework, and generally thinking about how to improve monitoring, in line with the recent interest in impact assessment (IRC is about to launch a thematic overview paper about this), complexity theory and even the general networks/learning alliance angle.
I think monitoring is growing in the right direction, following an organic development curve, and for me it is one of the areas that can really improve in the future, which perhaps explains the current enthusiasm for impact assessments. As mentioned in a previous blog post, I think the work we carry out on process documentation will later be integrated as part of monitoring: an intelligent way to monitor, which makes sense for donors, implementers (of a given initiative) and beneficiaries.
So what would, or could, be the characteristics of good monitoring in the future? I can come up with the following:
- Integrated: in many cases, monitoring is a separate activity from the rest of the intervention, giving an impression of additional work with no added value. But if monitoring were linked with intervention activities, particularly planning and reporting, it would help a lot and seem much more useful. In the work on the WASHCost monitoring and learning framework, the key was to focus M&L on the ongoing reporting exercise, and that worked wonderfully. In addition, monitoring should be linked with (mid-term and final) evaluations so that the evaluation team – usually external to the project – can develop a more consistent methodology while keeping its distance and a certain degree of objectivity. Evaluations are a different matter and I am not explicitly dealing with them here, even though they share a number of points with monitoring.
- Informed: if monitoring is integrated with planning, there should be an analysis, before the project intervention, of the issue at hand and the potential best area of intervention. In line with this, a baseline should be established for the processes and outputs that will be monitored. This helps prepare monitoring activities that make sense, and interventions that really focus on improving what doesn't work (but could help tremendously if it did).
- Conscious: of what is at stake and therefore of what should be monitored. The intervention should be guided by a certain vision of development, a certain 'hypothesis of change' that probably includes a focus on behaviour changes by certain actors, on some systems, processes and products/services, and more generally on the system as a whole in which the development intervention takes place. A conscious approach would thus be well informed enough not to focus exclusively on hardware aspects (how many systems were built) nor exclusively on software issues (how much the municipality and private contractors love each other now).
- Transparent and as objective as possible: now that's a tricky one. But a rule of thumb is that good monitoring should be carried out with the intention to report both to donors (upward accountability) and to intended beneficiaries (downward accountability) – this guarantees some degree of transparency – and should be partly carried out by external parties to ensure a more objective take on monitoring (with no bias towards only positive changes). Current attempts to involve journalists in monitoring development projects are a sound way forward, and many more options exist.
- Versatile: because monitoring should focus on a number of specific areas, it shouldn't use only quantitative or only qualitative approaches and tools but a mixture of both. This would help make monitoring more acceptable (in the accountability vs. learning discussion, for instance) and would provide a good way to triangulate monitoring results, ensuring more objectivity in turn.
- Inclusive: if monitoring involves external parties, it should focus on establishing a common understanding, a common vision of what is required to monitor the intervention, and it should also include training activities for those who will be monitoring the intervention. Monitoring should thus include activities for communities as well as for donors; it should bring them together and persuade them that they all have a role to play in proving the value of the intervention and, especially, in improving it.
- Flexible: a project intervention rarely follows the course it originally intended to follow. Equally, monitoring should remain flexible enough to adapt to the evolution of the intervention: flexible in its design, in the areas that are monitored and in the methods used to monitor those areas. That is the value of process documentation and of approaches such as Most Significant Change: revealing deeper patterns that have a bearing on the intervention but were not identified or recognised as important.
- Long-term: assuming that development is, among other things, about behaviour and social changes, these changes are long-term; they don't happen overnight. Monitoring should therefore also take a long-term perspective and indeed build in ex-post evaluations to revisit intervention sites and see what the later outcomes of a given intervention are.
Finally, with all of the above said, monitoring would gain from being simpler: planned according to what is necessary to monitor and what is good to monitor, in line with existing resources, and perhaps following a certain donor's maxim of monitoring only what is necessary.
Hopefully that kind of monitoring will not feel like an intrusion by external parties into the way people carry out their job, nor like just an additional burden to carry without expecting anything in return. Hopefully that kind of monitoring will put the emphasis on learning, on getting value for the action, and on connecting people to improve the way development work is done.
- (Im)Proving the value of knowledge work: A KMers chat on monitoring / assessing knowledge management
- M&E of KM: the phoenix of KM is showing its head again – how to tackle it?