M&E of KM: the phoenix of KM is showing its head again – how to tackle it?

I’ve started working on a summary of two papers commissioned by the IKM-Emergent programme to unpack the delicate topic of monitoring (and evaluation) of knowledge management (1). This could be just about the driest, un-sexiest topic related to KM. Yet, it seems precisely one of the most popular topics and one that keeps resurfacing on a regular basis.

On the KM4DEV community alone, since the beginning of 2009, nine discussions (2) have focused on various aspects of monitoring knowledge management, some of them generating over 30 emails each! Are we masochistic? Or just thirsty for more questions?

M&E, the phoenix of KM? (photo credits: Onion)

Anyway, this summary piece of work is a good opportunity to delve again into the buzz, basics, bells and whistles of monitoring knowledge management (that is, the practice of observing, assessing and learning inherent to both M and E, rather than the different conditions in which M or E generally occur).

In attempting to monitor knowledge and/or knowledge management, one can look at an incredible number of issues. This is probably the reason why there is so much confusion and questioning around this topic (see this good blog post by Kim Sbarcea of ‘ThinkingShift’, highlighting some of these challenges and confusions).

In this early work – luckily supported by colleagues from IKM working group 3 – I am trying to tidy things up a bit and to come up with a framework that helps us understand the various approaches to M&E of KM (in development) and the gaps within them. I would like to introduce here a very preliminary, half-baked framework consisting of:

  • Components,
  • Levels,
  • Perspectives.

And I would love to hear your views on these, to improve this if it makes sense, or to stop me at once if this is utter gibberish.

First, there could be various components to look at as items to monitor. These items could be influenced by a certain strategic direction or could happen in a completely ad hoc manner – a sort of pre-put. The items themselves could be roughly sorted as inputs, throughputs or outputs (understood here as results of the former two):

Pre-put:

  • None (purely ad hoc)
  • Intent or objective
  • Structured assessment of needs (e.g. baseline / benchmarking)
  • Strategy (overall and KM-focused)

Input (resources and starting point):

  • People (capacities and values)
  • Culture (shared values)
  • Leadership
  • Environment
  • Systems to be used
  • Money / budget

Throughput (work processes & activities):

  • Methods / approaches followed to work on KM objectives
  • (Co-)Creation of knowledge artefacts
  • Use of information systems
  • Relationships involved
  • Development of a learning/innovation space
  • Attitudes displayed by actors involved or concerned
  • Rules, regulations, governance of KM

Output (results):

  • Creation of products & services
  • Appreciation of products & services
  • Use/application of products & services
  • Behaviour changes: doing different things, doing things differently or with a different attitude
  • Application of learning (learning is fed back to the system)
  • Reinforcement of capacities

All these components are then affected by the various levels at which a KM intervention (or strategy) is monitored, which could be:

  • Individual level;
  • Team level;
  • Organisational level;
  • Inter-organisational level, i.e. communities of practice, multi-stakeholder processes, potentially verging on the sectoral level – though with the problem of defining ‘a sector’;
  • Societal level, affecting a society entirely.

(Different levels at which M&E of KM could take place)

And then of course comes perhaps the most crucial – yet implicit – element: the worldview that motivates the approach that will be followed with monitoring of knowledge management.

Because this is often an implicit aspect of knowledge-focused activities, it is largely a grey area in the way knowledge management is monitored. Yet on a spectrum of grey shades I would distinguish three worldviews that lead to three types of approaches to monitoring of knowledge (management). These approaches can potentially be combined in innumerable ways. The three strands would be:

  1. Linear approaches to monitoring of KM, with a genuine belief in cause and effect and planned intervention;
  2. Pragmatic approaches to monitoring of KM, promoting trial and error and a mixed attention to planning and observing – I would argue this is perhaps the dominant model in the development sector, judging from the available literature anyhow (more on this soon);
  3. Emergent approaches to M&E of KM, stressing natural combinations of factors, relational and contextual elements, conversations and transformations.

In the comparative table below I have tried to sketch the differences between the three groups as I see them now, even though I am not convinced that the third category in particular gives a convincing and consistent picture.

| Worldview | Linear approaches to M&E of KM | Pragmatic approaches to M&E of KM | Emergent approaches to M&E of KM |
|---|---|---|---|
| Attitude towards monitoring | Measuring to prove | Learning to improve | Letting go of control to explore natural relations and context |
| Logic | What you planned → what you did → what is the difference? | What you need → what you do → what comes out? | What you do → how and who you do it with → what comes out? |
| Chain of key elements | Inputs – activities – outputs – outcomes – impact | Activities – outcomes – reflections | Conversations – co-creations – innovations – transformations – capacities and attitudes |
| Key question | How well? | What then? | Why, what and how? |
| Outcome expected | Efficiency | Effectiveness | Emergence |
| Key approach | Logical framework and planning | Trial and error | Experimentation and discourse |
| Attitude towards knowledge | Capture and store knowledge (stock) | Share knowledge (flow) | Co-create knowledge and apply it to a specific context |
| Component focus | Information systems and their delivery | Knowledge sharing approaches / processes | Discussions and their transformative potential |
| I, K or …? What matters? | Information | Knowledge and learning | Innovation, relevance and wisdom |
| Starting point of monitoring cycle | Expect as planned | Plan and see what happens | Let it be and learn from it |
| End point of monitoring cycle | Readjust the same elements to the sharpest measure (single-loop learning) | Readjust different elements depending on what is most relevant (double-loop learning) | Keep exploring to make more sense, explore your own learning logic (triple-loop learning) |

The very practical issue of budgeting does not come into the picture here, but it definitely influences the M&E approach chosen and the intensity of M&E activities.

Aside from all these factors, many challenges of course plague an effective practice of monitoring knowledge management – but perhaps this framework offers a more comprehensive approach to M&E of KM?

Again, I am inviting you to improve this half-baked cake or to reject it as plainly indigestible. So feel free to shoot!


(1)    Knowledge management is understood here as “encompassing any processes and practices concerned with the creation, acquisition, capture, sharing and use of knowledge, skills and expertise (Quintas et al., 1996) whether these are explicitly labelled as ‘KM’ or not (Swan et al., 1999)”. This definition is extracted from the first IKM-Emergent working paper. Even though I don’t entirely agree with this definition, let’s consider that it creates enough clarity for the sake of understanding this blog post.

(2)    Previous discussions related to M&E of KM on KM4DEV:

  • Managing community of practice: creative entrepreneurs (22/11/2009) with a specific message on the impact of communities of practice
  • Value and impact of KS & collaboration (11/10/2009)
  • Evaluation of KM and IL at SDC (08/07/2009)
  • KM self-assessment (18/03/2009)
  • Organisational learning indicators (13/12/2009)
  • Monitoring and evaluating online information (05/02/2009)
  • Monitoring and evaluating online information portals (03/02/2009)
  • Evaluation of KM processes (30/01/2009)
  • Evidence of sector learning leading to enhanced capacities and performances (05/01/2009)

Published by Ewen Le Borgne

Collaboration and change process optimist motivated by ‘Fun, focus and feedback’. Nearly 20 years of experience in group facilitation and collaboration, learning and Knowledge Management, communication, innovation and change in development cooperation. Be the change you want to see, help others be their own version of the same.

Join the Conversation



  1. Hello Joitske,

    Thank you for your comment!

    I like (and agree with) the idea of inherent and extractive monitoring. In my starting framework, this relates to the governance of monitoring and to who orders/benefits from it: if an external party (especially a donor) sits on top of it, the monitoring exercise is likely to be more of an accountability-driven / extractive piece of work, while if carried out by/for local constituents it is more likely to be an inherent piece of monitoring.
    This is certainly a critical area and one that should be fleshed out more, picking up on the ideas and references in your paper.

    In the elements I proposed in this post I haven’t made a value judgment (yet) and I agree that in the current climate of ‘complexity hype’ there is a certain trend in promoting emergent approaches and refuting linear approaches. I agree with you that it is not a matter of either/or but rather one of understanding who sits on top of monitoring and what world view dictates their approach. However I do see implications in the way monitoring activities are designed and the areas they focus on. And roughly there it seems to me that you find the two ends of the spectrum: linear / emergent and a kind of grey area in the middle (what I call the pragmatic approach) which perhaps corresponds to the mainstream approach to M&E of KM.
    The objective of this summary piece (indeed drawing upon your paper and that of Apin, as well as on IKM evaluation papers by Chris Mowles) is to offer a more comprehensive view on how M&E is designed and elaborated, the useful areas that current M&E approaches are assessing but also their caveats and other areas that have not been sufficiently studied, such as emergent approaches. The latter are interesting because they are likely to affect M&E of KM in years to come.

    So in summary: I will definitely re-emphasise aspects of ownership and the relation to inherent or extractive monitoring, and will try to provide some pointers on how M&E of KM is built and the implications this has.

    Thanks again and feel free to engage again, we will keep everyone posted on the progress with the summary paper on the Giraffe blog.

  2. Hi Ewen, I think one of the papers is ours :).
    I don’t think the distinction between linear, pragmatic and emergent approaches is a useful one. How does this distinction help you? I find the distinction between inherent and extractive monitoring, for instance, much more useful, because it gives a different dynamic and direction to the whole exercise. Even though in reality it is not either/or, but often a mix.

    I think the problem with IKM-Emergent is that they use this distinction and, on top of that, label linear as the wrong approach and emergent as the right approach. Then you get stuck…
