At the IKM table: linearity, participation, accountability and individual agency on the practice-based change menu (1)


On 20 and 21 February 2012, the London-based Wellcome Collection is the stage for the final workshop organised by the Information Knowledge Management Emergent (IKM-Emergent or ‘IKM-E’) programme. Ten IKM-E members are looking at the body of work completed over the past five years in this DGIS-funded research programme and trying to unpack four key themes that interweave insights from the three working groups that have been active in the programme:

  1. Linearity and predictability;
  2. Participation and engagement;
  3. Individual agency and organisational remit;
  4. Accountability.

This very rich workshop is also an intermediate step towards a suggested extension of the programme (“IKM 2”).

In this post I’m summarising quite a few of the issues tackled during the first day of the workshop, covering the first two points on the list above.

On linearity and predictability:

Linear approaches to development – suggesting that planning is a useful exercise to map out and follow a predictable causal series of events – are delusional and ineffective. We would be better advised to use emergent perspectives: they are more realistic, if not more certain.

The linearity and predictability theme puts a strong emphasis on the planning tools (current, and desired alternatives) that we have at our disposal or are sometimes forced to use, and on the relationship we entertain with the actors promoting those specific planning tools.

Planning tools

After trying out so many ineffective approaches for so long, it seems clear that aspirational intent might act as a crucial element to mitigate some of the negative effects of linearity and predictability. Planning tools can be positivist, urging a fixed and causal course of events focused on one highlighted path – as is too often the case with the practice around logical frameworks – or they can have an aspirational nature, in which case they focus on the end destination or the objective hoped for, and strive to test the assumptions underlying a certain pathway to impact (at a certain time).

Different situations require different planning approaches. Following the Cynefin framework, we might be facing simple, complicated, complex or chaotic situations, and we will not respond the same way to each of them. A complex social change process may require planning that entails regular and thorough consultation with various stakeholder groups; a simpler undertaking, such as an inoculation campaign, may just require ‘getting on with the job’ without a heavy consultation process.
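As a toy illustration only (mine, not the workshop’s), this contingency between situation type and planning stance can be caricatured as a simple lookup – the Python sketch below assumes the four Cynefin categories and paraphrases the responses discussed above:

```python
# A toy sketch (not from the workshop): caricaturing the Cynefin-style
# contingency between a situation type and the planning stance it calls for.
PLANNING_STANCE = {
    "simple": "get on with the job: apply known practice, light consultation",
    "complicated": "analyse and plan: bring in relevant expertise",
    "complex": "probe and iterate: regular, thorough stakeholder consultation",
    "chaotic": "act first to stabilise, then step back and reassess",
}

def suggest_stance(situation: str) -> str:
    """Return a (caricatured) planning stance for a Cynefin situation type."""
    if situation not in PLANNING_STANCE:
        raise ValueError(f"unknown situation type: {situation!r}")
    return PLANNING_STANCE[situation]

print(suggest_stance("complex"))   # e.g. a social change process
print(suggest_stance("simple"))    # e.g. an inoculation campaign
```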

At any rate, planning mechanisms are one thing, but the reality on the ground is often different, and keeping a careful eye on co-creating that reality on the ground is perhaps the best approach to ensure stronger and more realistic development, reflecting opportunities and embracing natural feedback mechanisms (the reality call).

There are strong power lobbies that might go against this intention. Against such remote-control mechanisms – sometimes following a tokenistic approach to participation while really hoarding discretionary decision-making power – we need distanced checks and balances, hinting at accountability.

Managing the relationship leading to planning mechanisms

Planning tools are one side of the coin. The other side of the coin is the relationship that you maintain with the funding or managing agency that requires you to use these planning tools.

Although donor agencies might seem like ‘laggards’ in some ways, managing the relationship with them implies that we should not stigmatise their lack of flexibility or insufficient will to change. More optimistically, managing our relationship with them may also mean that we need to move away from the contractual nature of the relations that characterise much of development work.

Ways to influence that relationship include, among others, seeking and using the evidence we have (e.g. stories of change, counter-examples from one’s own or others’ past practice) and advocating with it. Process documentation is crucial here to demonstrate the evidence around the value of process work and the general conditions under which development interventions have been designed and implemented. It is our duty to negotiate smart monitoring and evaluation into the intervention, including e.g. process documentation, the use of a theory of change, and an agreement that these will not be instrumentalised (in the way logical frameworks have been in the past). In this sense, tools do not matter much as such; the practice behind the tools matters a lot more.

Finally, there is much value in changing the relationship with the donor to make the plan more effective: trust is central to effective relationships. And we can build trust with donors by reaching out to them: if they need some degree of predictability, we cannot necessarily offer it, but we can try, and talk about our intent to reduce uncertainty. Most importantly, in the process we are exposing them to uncertainty and forcing them to deal with it, which helps them feel more comfortable with uncertainty and paradox and find ways to handle both. Convincing donors and managers of this may seem like a major challenge at first, but then again, every CEO or manager knows that their management practice does not come from a strict application of ‘the golden book of management’. We all know that reality is more complex than we would like it to be. It is safe and sound management practice to recognise that complexity and the impossibility of predicting a certain course of events.

Perhaps, also, the best way to manage our relationship with our donors in a not-so-linear, not-so-predictable way is to lead by example: by being a shining, living example of our experience and comfort with a certain level of uncertainty, and by showing that recognising complexity and the limits of prediction is a sound and realistic management approach to development. Getting that window of opportunity to influence based on our own example depends much on the trust developed with our donors.

Trust is not only a result of time spent working and discussing together but also the result of surfacing the deeper values and principles that bind and unite us (or not). The conception of development as being results-based or relationship-based influences this, and so does the ‘funding time span’ in which we implement our initiatives.

Time and space, moderating and maintaining the process

The default development cooperation and funding mechanism is the project, with its typically limited lifetime and unrealistic level of endowment (in terms of resources, capacities etc. available). In the past, a better approach aimed at funding institutions, allowing those organisations to afford the luxury of learning, critical thinking and other original activities. An even more ideal funding mechanism would favour endemic (e.g. civic-driven) social movements, where local capacities to self-organise are encouraged and supported over a period that may extend beyond a project lifetime. If this were the default approach, trust would become a common currency and we would indeed have to engage in longer-term partnerships, a better guarantee of stronger development results.

A final way to develop tolerance for multiple knowledges and uncertainty is to bring various actors together and to use facilitation in workshops, so as to allow all participants to reveal their personal (knowledge culture) perspectives and let those perspectives cohabit. Facilitation becomes de facto a powerful approach to plant new ideas, verging on the idea of ‘facipulation’ (facilitation-manipulation).

Beyond a given development intervention, a way to make its legacy live on is to plug those ideas onto networks that will keep exploring the learning capital of that intervention.

What is the value proposition of all this to donors? Cynically perhaps, the innovativeness of working in those ways; much more importantly, the promise of sustainable results – better guaranteed through embedded, local work. Metaphors can be enlightening here, as they suggest different ideas about what one can invest in projects and short-term relationships: gardening, for instance, hints at planting new initiatives in an existing bed of soil or applying fertiliser to existing plants…


On participation and engagement:

Sustainable, effective development interventions are informed by careful and consistent participation and engagement, recognising the value of multiple knowledges and cherishing respect for different perspectives, as part of a general scientific curiosity and humility about what we know about what works and what doesn’t, in development and more generally.

The second strand we explored on day 1 was participation and engagement with multiple knowledges. This boils down to the question: how to value different knowledges, particularly ‘local knowledge’, bearing in mind that local knowledge is not a synonym for Southern knowledge, because we all possess some local knowledge, regardless of where we live.

A sound approach to valuing participation and engagement is to recognise the importance of creating the bigger picture in our complex social initiatives. The concept of cognitive dissonance is particularly helpful here. As communities of people we (should) value some of our practices and document them, so that we create and recognise a bigger collective whole. But then we have to realise that something might be missing from that collective narrative, and that we might have to play the devil’s advocate to challenge our thinking – this is cognitive dissonance at play. It is more likely to happen by bringing in external or alternative points of view, but also e.g. by using facilitation methods that put the onus on participants to adopt a different perspective (e.g. De Bono’s six thinking hats). Development work has to include cognitive dissonance to create better conditions for combining different knowledges.

Participation and engagement are conditioned by power play, of course, but also by our comfort zones; e.g. as raised in a recent KM4Dev discussion, we are usually not keen on hiring people with different perspectives, who might challenge the current situation. We also don’t like the frictions that come with bringing different people to the table: we don’t like to re-discuss the obvious or to renegotiate meaning, but that is exactly what is necessary for multiple knowledges to create a trustworthy space. The tension between deepening the field and expanding it laterally with new people is an important one, in workshops as in development initiatives.

We may also have to adopt different approaches and responses in the face of multi-faceted resistance to change: some people need to become aware of the gaps; others are aware but not willing, because they don’t see the value of inviting multiple perspectives or feel threatened by it; others still are aware and don’t feel threatened, but need to be challenged beyond their comfort zone. Some will need ideas, others principles, others yet actions.

At any rate, inviting participation calls for inviting related accountability mechanisms. Accountability (which will come back on the menu on day 2) is not just towards donors but also towards the people whose participation we invite; otherwise we run the risk of ‘tokenising’ participation (pretending that we are participatory while not changing the decision-making process). When one interviews a person, one has to make sure that the transcription faithfully reflects what the interviewee said. So with participation: participants have to be made aware that their inputs are valued and reflected in the wider engagement process, not just interpreted as ‘a tick in the participatory box’.

Participation and engagement open up the reflective and conversational space to collective engagement, which is a very complex process, as highlighted in Charles Dhewa’s model of collective sense-making in his work on traducture. A prerequisite for that collective engagement and sense-making is the self-confidence that you develop in your own knowledge. For ‘local knowledge’ this is a very difficult requirement, not least because, even in their own context, proponents of local knowledge might be discriminated against and rejected by others for the lack of rigour they are perceived to display.

So how to invite participation and engagement?

Values and principles are guiding pointers. Respect (for oneself and others) and humility or curiosity are great lights on the complex path to collective sense-making (as illustrated by Charles Dhewa’s graph below). They guide our initiatives by preserving a learning attitude in each and every one of us. Perhaps development should grow up to be more about ‘ignorance management’, an insatiable thirst for new knowledge. Humility about our own ignorance, and curiosity, might lead us to unravel ever sharper questions, on the dialectical and critical thinking path, rather than the off-the-shelf (and upscaling-friendly) answers we tend to favour in the development sector. What matters here is the development of shared meaning.

A collective sensemaking framework (by Charles Dhewa)


As highlighted in the previous conversation, not every step of a development initiative requires multi-stakeholder participation, but a useful principle to invite participation and engagement is iteration. By revisiting our assumptions at regular intervals, together with various actors, we can perhaps more easily ensure that key elements of the bigger picture are not thrown away in the process. This comes back to the idea of assessing the level of complexity we are facing, which is certainly affected by a) the number of people affected by (or with a crucial stake in) the initiative at hand and b) the degree of inter-relatedness of the changes that affect and connect them.

Iteration and multi-stakeholder engagement and participation are at the heart of the ‘inception phase’ approach. This is only one model on a spectrum of participation and non-linear planning:

  • On one end of the spectrum, a fully planned process with no room for (meaningful) engagement because the pathway traced is not up for renegotiation;
  • Somewhere in the middle, a project approach using an inception period to renegotiate the objectives, reassess the context and understand the motivations of the stakeholders;
  • At the other end of the spectrum, a totally emergent approach where one keeps organising new processes as they show up along the way, renegotiating with a variety of actors.

Seed money helps here for ‘safe-fail’ approaches: to try things out, draw early lessons and perhaps then properly budget for activities that expand the seed initiative. Examples from the corporate sector also give away some interesting pointers and approaches (see Mintzberg’s books and the strategy safari under ‘related resources’). The blog post by Robert Chambers on ‘whose paradigm counts’ and his stark comparison between positivist and adaptive pluralism perspectives are also very helpful resources to map out the issues we are facing here.

Adaptive pluralism – a useful map to navigate complexity? (Credits: Robert Chambers)

At any rate, and this can never be emphasised enough, in complex environments – as is the case in development work more often than not – a solid context analysis is in order if one is to hope for any valuable result, in the short or long run.

Related resources:

These have been our musings on day 1, perhaps not ground-breaking observations but pieces of an IKM-E collage that brings together important pointers to the legacy of IKM-Emergent. Day 2 is promising…



Communication, KM, monitoring, learning – The happy families of engagement


Many people seem to be struggling to understand the differences between communication, knowledge management, monitoring, learning etc.

Finding the happy families (Photo: 1st art gallery)


Let’s consider that all of them are part of a vast family – the ‘engagement’ family. Oh, let’s be clear: engagement can happen in many other ways, but for the sake of simplicity, let’s focus on these four and say that all of these family members have in common the desire – or necessity – to engage people with one another, to socialise, for one reason or another. And let’s try to unpack this complex family tree, to discover the happy families of engagement.

The engagement family is big; it contains different branches and various members in each of them. The main branches are, roughly, Communication (Comms), Knowledge Management (KM) and Monitoring & Evaluation (M&E).

Communicating


The comms branch is large and old. Among the many siblings, the most prominent ones are perhaps Public Relations and Marketing. For a seemingly endless time, they used to be the only ones around in that branch. All members of this branch like to talk about messages, though of late their horizon has been expanding to other concepts and approaches.

  • Public relations has always made the point that it’s all about how you come across to other folks, and he very much enjoys the sheen and the idea of looking smart. But some accuse him of being quite superficial and a little too self-centred.
  • His older sibling marketing has adopted a more subtle approach. Marketing loves to draw people into a friendly conversation, make them feel at ease and get them to do things that perhaps they didn’t want to do in the first place. Marketing impresses everyone in the family with his results, but he has also upset quite a few people in the past. He doesn’t always care, as he thinks he can always find new friends, or victims.
  • Another of their siblings has been around for a while too: advocacy is very vocal and always comes up with a serious message. Some of his family members would like him to adopt a less aggressive approach. Advocacy’s not silly, though: he’s been observing how his brother marketing operates and he’s getting increasingly subtle, but his image remains very much attached to that of an ‘angry and hungry revolutionary loudmouth’.
  • Their sister communication is just as chatty, but she stays a bit behind the scenes. Communication doesn’t care about promoting her family, selling its treasures or claiming a message; she just wants people to engage with one another, in and out of the family. She is everywhere. In a way she might be the mother of this branch.
  • Their youngest sister, internal communication, has been increasingly present over the past few years and she really cares for what happens among all members of her family. She wants people to know about each other and to work together better. She has been getting closer and closer to the second main branch of the engagement family tree: knowledge management, but she differs from that branch in focusing on the internal side of things only.
Knowledge management


The Knowledge management branch also comprises many different members and in some way is very heterogeneous. This branch doesn’t care so much for messages as for (strategic) information and conversations. For them it’s all about how you can use information and communication to improve your approach.

  • The old uncle is information management. He has been around for a while and he is still a pillar of the family. He collects and organises all kinds of documents, publications and reports, and puts them neatly on shelves and online in ways that help people find information. His brothers and sisters mock his focus on information: without people engaging with it, information does little.
  • His younger sister knowledge sharing was long overshadowed in the KM branch, but she’s been sticking her head out a lot more, taking credit for the more human face of the KM branch. She wants people to share, share and share, engage and engage. She’s very close to her cousin communication from the Comms branch, but what she really wants is to get people’s knowledge out and about, to mingle with one another. She has close ties with her colourful cousins facilitation, storytelling and a few more.
  • They have another brother called organisational learning, who was very active for a while. He wanted everyone to follow him and his principles, but he has lost a lot of visibility and momentum over the years, as many people found out that the way he showed was not as straightforward as he claimed.
  • The little brother PKM (personal knowledge management) was not taken seriously for a long time, but he is really a whiz kid and has given a lot of people confidence that perhaps his branch of the family is better off betting on him, at least partly. He says that every one of us can do much to improve the way we keep our expertise sharp and connect with kindred spirits. To persuade his peeps, PKM often calls upon his friends from social media and social networks (though these fellas are in demand by most family members mentioned above).
  • A very smart cousin of the KM branch, innovation, is marching up to the limelight. She’s drop-dead gorgeous and keeps changing, never settling on one facet of her identity. Her beauty, class and obvious common sense strike everyone who sees her, but she disappears quickly if she’s not entertained. In fact, many in the KM family would like to get her on their side, but she’s elusive. Perhaps if many family members got together they would manage to keep her at their side.
Monitoring


The M&E branch has always been the odd group out. They are collectors and reporters. Throughout their history they have mostly focused on indicators, reports, promises made, results and lessons learnt. Other family members consider this branch to be little fun and very procedural, even though of late they have bent their approach – but not everyone around seems to have realised that.

  • Planning is not the oldest, but perhaps the most responsible one of this branch. He tries to coordinate his family in a concerted manner. But he is also quite idealistic, and sometimes he tends to ignore his siblings and stick to his own ideas, for better or (usually) for worse. Still, he should be praised for his efforts to give some direction, and he does so very well when he brings people to work with him;
  • Reporting, the formal oldest brother, is perhaps the least likely to change soon. He takes his job very seriously and indeed talks to all kinds of important people. He really expects everyone to work with him, as requested by those important contacts of his. He doesn’t always realise that pretty much everyone considers him rather stuffy and old-fashioned, but he knows – and they sometimes forget – that he matters a lot as a connector between this whole funky family and the wider world.
  • Data collection is the next sister, who tends to wander everywhere; she lacks a sense of prioritisation, which is why planning really has to keep an eye on her. She is indeed very good at collecting a lot of stuff, but she doesn’t always help her siblings make sense of it. Everyone in the family agrees she has an important role to play, but they don’t quite know how.
  • Her other sister reflection is therefore always lagging behind, absorbing what data collection brought forward and making sense of it. She is supposedly very astute, but occasionally she does her job too quickly and misses crucial lessons or patterns. Or perhaps she is overwhelmed by what data collection brought her and settles for comfort. But she usually has great ideas.
  • They have a young sister called process documentation. She’s a bit obscure to her own kin, but she seems to have built a nice rapport with the other branches of the wider family and seems more agile than her own brothers and sisters. She goes around and observes what’s going on, picking up the bizarre and unexpected, the details of how people do things and how that helps their wider work.
Learning is patient


The wise godmother (1) of them all is learning. Learning generously brings her good advice to all her family, for them to improve over time. She wants her Comms branch offspring to engage in ways that benefit everyone; she encourages their KM siblings to nurture more genuine and deeper conversations that lead to more profound insights and more effective activities; she invites the sidetracked M&E branch to find their place, not be obtuse, and use their sharp wits to bring common benefits and help understand what is going well or not, and why. More than anything, she encourages all her godchildren to get along with one another, because she sees a lot of potential for them to join hands and play together.

Learning could do it all on her own, but she prefers to socialise – she loves socialising, in fact – and that’s how she keeps on top of the game and keeps bringing the light over to other parts of the family. It’s not an easy game for her to bring all her flock to play together: there are a lot of strong egos in there. But she is patient and versatile, and she knows that eventually people will come to seek her wisdom…

Do you recognise your work in those happy families? Who am I missing and where in the tree should they fit?


What is good in a project?


Where to start again on this blog after such a long interruption? Not with a digression (*) straight away!

Anyways, I’ll start again with a question that has been tickling me for a long time:

What are the good parts of a project to keep and use?

Development – oops, rather aid (i.e. donor-driven development) – is largely structured around projects. This is how many of us out there work. We end up cooperating for three, four, five years in a given place, with a group of people and institutions, following semi-random streams of activities sometimes called ‘work packages’. And throughout the project years we come up with ideas, principles, plans, activities, approaches, tools, reports, templates, lessons and publications to do what we think we have to do. And then one day the party is over: alliances fade, activities stop, the flow of knowledge and information dries up. And then comes the question: what really makes sense to keep track of at the end of the day, other than the great moments spent together and the nuggets that have pleased project beneficiaries, staff and/or donors?

Much as there are various cuts of beef that can be taken from a cow (sorry for any vegetarian or vegan reader out there), what are the parts of a project that can be used again because they could be useful?

Beef cuts and project nu(gge)ts - what should we keep? (Photo credits: global wildlife warriors)


What is there to ‘capitalise’ on afterwards? This question is becoming crucial for me as I am involved in a project that will soon end, and am puzzled as to what to do with all the process information we have collected through the years.

We have of course, like many projects, the official documents – the emerged side of the iceberg: the papers, newsletters, websites and the upcoming book we’re writing… the flashy documents we have happily committed to produce, as agreed in the contract.

But hidden all around, are the guidelines, templates, checklists, information sheets, how-to’s, process reports etc. that we have developed in the past five years.

Usually these documents do not make it into the official ‘documentation’ of any given project. And yet what might be most useful to others, perhaps even more than the results of the project, is precisely that process information describing how a project has looked at certain activities and proposed to go about them. This is what can be re-used, learned from, integrated – so that next time a team starts similar work, they focus on a slightly better set of questions and issues…

What do you think? What is good to keep? Does it make sense to keep track of all the ‘process’ outputs of a project? Is it worth investing time to polish them so they can be understood by an external audience? How shareable are they compared to the project outputs?

I hope you can shed your light on this, as this may be an important KM question for development projects… And that specific project I mentioned is about to be cooked up so it might as well be useful and inspire others…

(*) It probably doesn’t matter much where I start blogging again, since I suspect only a few people are checking this page after several very quiet months. Besides, those who do visit this blog sometimes tell me they don’t always understand what I am saying. So, for you, puzzled reader: read my profile. And by the way, I always welcome questions, so share your puzzles!


What the *tweet* do we know (about monitoring/assessing KM)?


This Tuesday I moderated my first ever Twitter chat, thanks to the opportunity provided by KMers (as mentioned in a recent blog post). A very rich and at times overwhelming experience in terms of moderating – more on this in the process notes at the bottom of this post.

KMers provides a great opportunity to host Twitter chats! Tweet on! (photo credits: ~ilse)

The broad topic was about ‘monitoring / assessing KM’ and I had prepared four questions to prompt Tweeters to engage with the topic:

  1. What do you see as the biggest challenge in monitoring KM at the moment?
  2. Who to involve and who to convince when monitoring KM?
  3. What have been useful tools and approaches to monitor KM initiatives?
  4. Where is M&E of KM headed? What are the most promising trends (hot issues) on the horizon?

After a couple of minutes waiting for all participants to arrive, we started listing a number of key challenges in terms of monitoring / assessing KM:

  • Understanding what we are trying to assess and how we qualify success – and jointly agreeing on this from originally different perspectives and interests;
  • The disconnect between monitoring and the overall strategy and perhaps its corollary of (wrongly) obsessing on KM rather than on the contribution of KM to overall objectives;
  • The crucial problem of contribution / attribution of KM: how can we show that KM has played a role when we are dealing with behaviour changes and improved personal/organisational/inter-institutional effectiveness?;
  • The dichotomy between what was described as ‘positive’ monitoring (learning how we are doing) and ‘negative’ monitoring (about censoring and controlling peoples’ activities);
  • The occasional hobby horses of management and donors to benchmark KM, social media, M&E of KM etc.;
  • The problem of focusing on either quantitative data (as a short-sighted way of assessing KM – “Most quantitative measures are arbitrary and abstract. …adoption rate doesn’t really equate to value generation” – Jeff Hester) or rather qualitative data (leaving a vague feeling and a risk of subjective biases);
  • The challenge of demonstrating the added value of KM;
  • The much needed leadership buy-in, which can make or break assessment activities.

The challenges were also felt as opportunities to ‘reverse engineer successful projects and see where KM played a role and start a model’.

An interesting perspective from Mark Neff – that I share – was about monitoring from the community perspective, not from that of the business/organisation.

This last issue hinted at the second part of the chat, which was dedicated to what turned out to be a crux of the discussion: who do you need to involve and who to convince (about the value of KM) when monitoring KM.

Who to involve? Customers / beneficiaries, communities (for their capacity to help connect), even non-aligned communities, users / providers and sponsors of KM, and employees (with their capacity to vote with their feet). Working in teams was suggested (by Boris Pluskowski) as a useful way to get knowledge to flow, which eventually helps the business.

Who to convince? Sponsors / donors (holding the purse strings) and leaders (who, unlike managers, are not convinced by measurement but instead like outputs and systems thinking).

What is the purpose of your monitoring activities? Management? Business? Productivity? Reuse? Learning? Application? Membership? Mark Neff rated them as all interesting (another challenge there: choose!). Rob Swanwick made the interesting point of measuring within each unit and having KM (and social media at that) mainstreamed in each unit, rather than dedicated to a small group.

Raj Datta shared his interesting perspective that it is key to explore and expand from the work of communities that are not aligned with business objectives.

The third part continued with some tools and approaches used to assess KM.

The key question came back: what are we looking at? Increasing profits, sales and the engagement of customers? Participation in CoPs? Answers provided in 48 hours? Adoption rates (with the related issue of de-adoption of something else, which Rob Swanwick pointed out)? Project profile contributions? Percentage of re-use in new projects? Stan Garfield suggested setting three goals and measuring progress on each (as described in his masterclass paper about identifying objectives). Mark Neff also stressed that it all depends on the level of maturity of your KM journey: better to build a case when you begin with KM, and to look at implementing something or at the adoption rate when you’re a bit more advanced… At his own stage, the man himself sees “efforts to measure the value we provide to clients and hope to extend that to measures of value they provide”.

And storytelling wins again! The most universal and memorable way to share knowledge? (photo credits: Kodomut)

In spite of these blueskying considerations, the KMers’ group nonetheless offered various perspectives on and experiences with tools and approaches: social network analysis (to measure community interaction), Collison and Parcell’s KS self-assessment, outcome mapping (to assess behaviour change), comparative analysis (of call centre agents using the KM system or not), and a mix of IT tools and face-to-face meetings to create conversations.
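For readers curious about what the first of those tools can look like in practice, here is a minimal sketch of social network analysis in Python using the networkx library; the member names and interaction counts are invented purely for illustration:

```python
# Minimal, illustrative social network analysis (SNA) of community
# interaction, using the networkx library. Names and weights are invented;
# a real analysis would load actual interaction data (e.g. who replied to
# whom in a discussion forum) instead of hard-coding it.
import networkx as nx

G = nx.Graph()
interactions = [  # (member, member, number of interactions)
    ("Amina", "Bert", 5),
    ("Bert", "Chen", 2),
    ("Amina", "Chen", 1),
    ("Dana", "Bert", 4),
]
for a, b, n in interactions:
    G.add_edge(a, b, weight=n)

# Degree centrality hints at who holds the community together...
centrality = nx.degree_centrality(G)
print(sorted(centrality.items(), key=lambda kv: -kv[1]))

# ...while density gives a rough sense of overall interconnection.
print(f"network density: {nx.density(G):.2f}")
```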

But really, what stole the show were success stories. Jeff Hester mentioned that “they put the abstract into concrete terms that everyone can relate to”. Stories could also take the form of testimonials and thank-you messages extracted from threaded discussions. At any rate, they complement other measurements, they sell, and they are memorable.

Rob Swanwick pondered: “Should stories be enough to convince leaders?” Roxana Samii suggested that “leaders will be convinced if they hear the story from their peers or if they really believe in the value of KM – no lip service”, and Boris Pluskowski finished this thread with a dose of scepticism, doubting that leaders would find stories enough to be convinced. In that respect, Mark Neff recommended assessing activities on our own and leading by example, even without the approval of managers or leaders, because they might not be convinced by stories or even numbers.

Of course the discussion bounced off to other dimensions… starting with the gaming issue – a new term to me, anyway, but indeed: how to reduce biases induced by expectations on the part of the people who are either monitoring or being monitored? Should we hide the measurements to avoid gaming (“security by obscurity”, as mentioned by Lee Romero), or should we instead explain them, revealing some of the variables to get buy-in and confidence (as suggested by Raj Datta) and embracing the transparency that is important for authentic behaviours (as professed by Mark Neff)?

Finally, the question about where M&E of KM is headed (the fourth part) didn’t really take off, in spite of some propositions:

A Twitter chat can also mean a lot of tweets running in parallel (photo credits: petesimon)

  • Focusing more on activities and flows in place of explicit knowledge stock (Raj Datta)
  • Mobile buzzing for permanent monitoring (Peter Bury)
  • Some sort of measurement for all projects to determine success (Boris Pluskowski)
  • Providing more ways for users to provide direct feedback (e.g., through recommendations, interactions, tagging, etc.) (Stan Garfield)

After these initial efforts, the group instead happily continued discussing the gaming issue, coming to the conclusion that a) most KMers present seemed to go for a transparent system rather than a hidden one that aims at preventing gaming, and b) gaming can also encourage (positive) behaviours that reveal the flaws of the system and can be useful in that respect (e.g. Mark’s example: “people were rushing through calls to get their numbers up. People weren’t happy. Changed to number of satisfied customers.”).

With the arrival of V Mary Abraham, the thorny question of KM metrics was revived: how to prove the positive value of KM? Raj Datta nailed an earlier point by mentioning that anyway “some quantitative (right measures at right time in KM rollout) and qualitative, some subjective is good mix”. On the question raised by V Mary Abraham, he also offered his perspective of simplicity: “take traditional known measures – and show how they improve through correlation with KM activity measures”. This seemed to echo an earlier comment by Rob Swanwick: “Guy at Bellevue Univ has been doing work to try to isolate ROI benefits from learning. Could be applied to general KM”.

In the meantime, Mark Neff mentioned that to him customer delight was an essential measure, and other tweeters suggested that this could be assessed through shared enthusiasm and through returning and multiplying customers (via word of mouth with friends).

Boris Pluskowski pushed the debate towards innovation as well, as an easier way than KM to show the value of intangibles. V Mary Abraham approved, saying “Collab Innov draws on KM principles, but ends up with more solid value delivery to the org”. To which Raj Datta replied: “to me KM is about collaboration and innovation – through highly social means, supported by technology”. And the initial tweeter on this thread went on about the advantages of innovation as being, at its heart, a problem-solving exercise, including a before and an after / result – and it is possible to measure results. V Mary Abraham: “So #KM should focus on problem-solving. Have a baseline (for before) and measure results after”, because solving problems buys trust. But next to short-term problem-solving, Mark Neff also pointed at the other side of the coin, long-term capacity building: “Focus people on real solutioning and it will help focus their efforts. Expose them to different techniques so they can build longterm”.

And in parallel, with the eternal problem of proving the value of KM, Raj Datta (correctly) stated: “exact attribution is like alchemy anyway – consumers of data need to be mature”.

It was already well past the chat’s closing time, and after a handful of final tweets this first KMers’ page on monitoring / assessing KM was turned.

At any rate it was a useful and fresh experience to moderate this chat and I hope to get at it a second time, probably in April and probably on a sub-set of issues related to this vast topic. So watch the KMers’ space: http://www.kmers.org/chatevents!

Process Notes:

As mentioned earlier in this post, the Twitter chat moderation was a rather uncanny experience. With the machine gun-like speed of our group of 25 or so Tweeters, facilitating, synthesising / reformulating and answering others as a participant, all at once, was a hectic experience – and I’m a fast touch-typist!

This is how I felt sometimes during the moderation: banging too many instruments at the same time (photo credits: rod_bv)

But beyond the mundane, I think what struck me was this: the KMers’ group is a very diverse gang of folks from various walks of life, from the US and the rest of the world, from the business perspective and the development cooperation side. This has major implications for the wording each of us uses – which cannot be taken for granted (such as this gaming issue that puzzled me at first) – but also for the kind of approaches we seem to favour, the people we see as the main stumbling block or, on the contrary, as the champions and aspirational forces, and the type of challenges we are facing… More in a later post about this.

Finally, there is the back-office side of organising such a Twitter event: preparing / framing the discussion, inviting people to check out your framing post, preparing a list of relevant links to share, sharing the correct chat link when the event starts (and sending related instructions to new Tweeters), generating the full chat transcript (using http://wthashtag.com/KMers – thank you @Swanwick 😉), all the way down to this blog post and the infographic summary that I’m still planning to prepare… It’s a whole lot of work, but an exciting one. And as web 2.0 follows a ‘share the love / pay it forward’ mentality, why not give it back to the community out there? This was my first attempt, and I hope many more will follow…

Related blog posts (thank you Christian Kreutz for giving me this idea):

The full transcript for this KMers twitter chat is available here.

(Im)Proving the value of knowledge work: A KMers chat on monitoring / assessing knowledge management


KMers chat on 16/02/2010 on monitoring/assessing KM

On 16 February 2010, I will be hosting a KMers chat about the topic of ‘monitoring / assessing knowledge management’ (1).

When Johan Lammers (one of the founders of KMers and of WeKnowMore.org) invited KMers (the people, not the platform) to host a discussion, I jumped at the occasion. It’s new, it’s fresh, it’s fun, it’s useful: what else can you dream of? And talking about useful discussions, it fitted my work on this topic of monitoring knowledge management very well.

So here you go, if you are interested, this is the pitch for this KMers chat:

Knowledge management is ill-defined but even more crucially ill-assessed. The inaccuracy and inadequacy of monitoring (2) approaches for KM has left behind a trail of tensions, heated debates, frustrations and disillusions. Differing perspectives on the value of KM and on ways to conduct monitoring have further entrenched these reactions.

How to reconcile the expectations of managers / donors on the one hand, and of the teams in charge of monitoring knowledge management and of clients / beneficiaries on the other? How to combine passion for and belief in knowledge-focused work with business realism and sound management practice?

What are approaches, methods, tools and metrics that seem to provide a useful perspective on monitoring the intangible assets that KM pretends to cherish (and/or manage)? What are promising trends and upcoming hot issues to turn monitoring of KM into a powerful practice to prove the value of knowledge management and to improve KM initiatives?

Join this Twitter chat to hear the buzz and share your perspective…

In this particular KMers chat we will grapple with four key questions, i.e.:

  1. What do you see as the biggest challenge in monitoring KM at the moment?
  2. Who to involve and who to convince when monitoring KM?
  3. What have been useful tools and approaches to monitor KM initiatives?
  4. Where is M&E of KM headed? What are the most promising trends (hot issues) on the horizon?

This discussion ties in closely with a couple of posts on this topic on this blog (see for instance this and that post) and three on IKM-Emergent programme’s The Giraffe blog (see 1, 2 and 3). Simon Hearn, Valerie Brown, Harry Jones and I are on the case.

Back to this KMers’ chat, here is an outlook on some of the issues at stake – I think:

Fig. 1 The starting model we are using for monitoring KM (credits: S. Hearn)

  • KM is not well defined and the very idea of ‘monitoring’ knowledge (related to the M in KM) is fallacious – this is partly covered in this post. What does this mean in terms of priorities defined behind a KM approach? What is the epistemology (knowledge system) guiding KM work in a given context?
  • KM is often monitored or assessed from the perspective of using intangible assets to create value. Is this the real deal? Perhaps monitoring may look at various dimensions: knowledge processes and initiatives (inputs & activities), intangible assets (outputs), behaviour changes and ultimately valuable results (outcomes and impact). See fig. 1 for a representation of this model.
  • In this, where should we monitor/assess knowledge, knowledge management, knowledge sharing and possibly all knowledge-focused processes – from the knowledge value chain or another reference system?
  • Monitoring is itself a contested practice, sometimes associated only with simple ‘progress monitoring’, i.e. establishing the difference between the original plan and the reality, to prove whether the plan is accomplished or not. Where is the learning in this? What is more valuable: to prove or to improve? And could we not consider that monitoring of KM should arguably look at other valuable monitoring purposes (like capacity strengthening, self-auditing for transparency, sensitisation, advocacy etc.) (3)?
  • With respect to the different epistemologies and ontologies (world views), isn’t it sensible to explore the different knowledge communities (see slide 8 on Valerie Brown’s presentation on collective social learning) and expectations of the parties involved in monitoring/ assessing KM? After all, the monitoring commissioner, implementer and ultimate beneficiary (client) may have a totally different view point on the why, what and how of monitoring KM.
  • If we take it that monitoring moves beyond simple progress monitoring and does not simply rest upon SMART indicators and a shopping basket for meaningless numbers, what are useful approaches – both quantitative and qualitative – that can help us understand the four dimensions of KM monitoring mentioned above and do this with due consideration for the context of our knowledge activities?
  • And finally what can we expect will be the future pointers of this discussion? I am thinking here both in terms of broadening the conceptual debate, looking at promising new approaches (such as the semantic web and its possibilities to map contextualised information, Dave Snowden’s Sense Maker, Rick Davies’s most recent work on the basis of his old Most Significant Change method) or developing a more practical approach to make sense of knowledge and to support the work of KMers (us), our patrons, our partners and our beneficiaries / clients?
  • Do you have case studies or stories about the issues sketched above?

Hopefully, further down the line, we may have a clearer idea as to turning what is too often a costly and tiresome exercise into an exciting opportunity to prove the value of knowledge-focused work and to improve our practices around it…

If you are interested in this topic or want to find out more about KMers’ chats, please check in on 16 February and join the chat; oh, and spread the word!

Notes:

(1)    KMers is an initiative that was started in late 2009 and has already generated a few excellent discussions (the last one was about knowledge for innovation), usually hosted on Tuesdays around 1800 CET (Central European Time). The chats are Twitter-based and always involve a group of dedicated KM heads who are really passionate and savvy about the broad topic of knowledge management.

(2)    By monitoring we mean here the ‘follow-up of the implementation of programme activities AND periodic assessment of the relevance, performance, efficiency and impact of a piece of work with respect to its stated objectives’, as regularly carried out in the development sector. In this respect we include the purposes of evaluation in monitoring as well. In the corporate world, I guess you would translate this as regular assessment. Monitoring / assessment may happen by means of measurement and other methods.

(3)    A forthcoming IKM-E paper by Joitske Hulsebosch, Sibrenne Wagenaar and Mark Turpin refers to the nine different purposes for monitoring that Irene Guijt proposed in her PhD ‘Seeking Surprise’ (2008). These purposes are: financial accountability, operational improvement, strategic readjustment, capacity strengthening, contextual understanding, deepening understanding, self-auditing, advocacy and sensitisation.


M&E of KM: the phoenix of KM is showing its head again – how to tackle it?


I’ve started working on a summary of two papers commissioned by the IKM-Emergent programme to unpack the delicate topic of monitoring (and evaluation) of knowledge management (1). This could be just about the driest, un-sexiest topic related to KM. Yet, it seems precisely one of the most popular topics and one that keeps resurfacing on a regular basis.

On the KM4DEV community alone, since the beginning of 2009, nine discussions (2) have focused on various aspects of monitoring of knowledge management, some of them generating traffic of over 30 emails!! Are we masochistic? Or just thirsty for more questions?

M&E the phoenix of KM? (photo credits: Onion)

Anyway, this summary piece of work is a good opportunity to delve again into the buzz, basics, bells and whistles of monitoring knowledge management (as in the practice of observing / assessing / learning inherent to both M and E, rather than the different conditions in which M or E generally occur).

In attempting to monitor knowledge and/or knowledge management, one can look at an incredible amount of issues. This is probably the reason why there is so much confusion and questioning around this topic (see this good blog post by Kim Sbarcea of ‘ThinkingShift’, highlighting some of these challenges and confusion).

In this starting work – luckily supported by colleagues from IKM working group 3 – I am trying to tidy things up a bit and to come up with a kind of framework that helps us understand the various approaches to M&E of KM (in development) and the gaps in them. I would like to introduce here a very preliminary, half-baked framework that consists of:

  • Components,
  • Levels,
  • Perspectives.

And I would love to hear your views on these, to improve this if it makes sense, or to stop me at once if this is utter gibberish.

First, there are various components to look at as items to monitor. These items could be influenced by a certain strategic direction or could happen in a completely ad hoc manner – a sort of ‘pre-put’. The items themselves could be roughly sorted as inputs, throughputs or outputs (understood here as results of the former two):

Pre-put:

  • None (purely ad hoc);
  • Intent or objective;
  • Structured assessment of needs (e.g. baseline / benchmarking);
  • Strategy (overall and KM-focused).

Input (resources and starting point):

  • People (capacities and values);
  • Culture (shared values);
  • Leadership;
  • Environment;
  • Systems to be used;
  • Money / budget.

Throughput (work processes & activities):

  • Methods / approaches followed to work on KM objectives;
  • (Co-)creation of knowledge artefacts;
  • Use of information systems;
  • Relationships involved;
  • Development of a learning/innovation space;
  • Attitudes displayed by actors involved or concerned;
  • Rules, regulations, governance of KM.

Output (results):

  • Creation of products & services;
  • Appreciation of products & services;
  • Use/application of products & services;
  • Behaviour changes: doing different things, doing things differently or with a different attitude;
  • Application of learning (learning is fed back to the system);
  • Reinforcement of capacities.

All these components are then affected by the various levels at which a KM intervention (or strategy) is monitored, which could be:

  • Individual level;
  • Team level;
  • Organisational level;
  • Inter-organisational level, i.e. communities of practice, multi-stakeholder processes, potentially verging on the sectoral level – though with the problem of defining ‘a sector’;
  • Societal level, affecting a society entirely.

And then of course comes perhaps the most crucial – yet implicit – element: the worldview that motivates the approach that will be followed with monitoring of knowledge management.

Because this is often an implicit aspect of knowledge-focused activities, it is largely a grey area in the way knowledge management is monitored. Yet on a spectrum of grey shades I would distinguish three worldviews that lead to three types of approaches to monitoring of knowledge (management). These approaches can potentially be combined in innumerable ways. The three strands would be:

  1. Linear approaches to monitoring of KM with a genuine belief in cause and effect and planned intervention;
  2. Pragmatic approaches to monitoring of KM, promoting trial and error and a mixed attention to planning and observing. I would argue this is perhaps the dominant model in the development sector, judging from the literature available anyhow (more on this soon).
  3. Emergent approaches to M&E of KM, stressing natural combinations of factors, relational and contextual elements, conversations and transformations.

In the comparative table below I have tried to sketch the differences between the three groups as I see them now, even though I am not convinced that the third category in particular gives a convincing and consistent picture.

| Worldview | Linear approaches to M&E of KM | Pragmatic approaches to M&E of KM | Emergent approaches to M&E of KM |
|---|---|---|---|
| Attitude towards monitoring | Measuring to prove | Learning to improve | Letting go of control to explore natural relations and context |
| Logic | What you planned → what you did → what is the difference? | What you need → what you do → what comes out? | What you do → how and who you do it with → what comes out? |
| Chain of key elements | Inputs – activities – outputs – outcomes – impact | Activities – outcomes – reflections | Conversations – co-creations – innovations – transformations – capacities and attitudes |
| Key question | How well? | What then? | Why, what and how? |
| Outcome expected | Efficiency | Effectiveness | Emergence |
| Key approach | Logical framework and planning | Trial and error | Experimentation and discourse |
| Attitude towards knowledge | Capture and store knowledge (stock) | Share knowledge (flow) | Co-create knowledge and apply it to a specific context |
| Component focus | Information systems and their delivery | Knowledge sharing approaches / processes | Discussions and their transformative potential |
| I, K or…? What matters? | Information | Knowledge and learning | Innovation, relevance and wisdom |
| Starting point of monitoring cycle | Expect as planned | Plan and see what happens | Let it be and learn from it |
| End point of monitoring cycle | Readjust the same elements to the sharpest measure (single-loop learning) | Readjust different elements depending on what is most relevant (double-loop learning) | Keep exploring to make more sense, explore your own learning logic (triple-loop learning) |

The very practical issue of budgeting does not come into the picture here, but it definitely influences the M&E approach chosen and the intensity of M&E activities.

Aside from all these factors, there are of course many challenges plaguing an effective practice of monitoring knowledge management, but perhaps this framework offers a more comprehensive approach to M&E of KM?
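To make the three dimensions slightly more tangible, here is a minimal sketch in Python of how a monitoring item might be tagged along the components, levels and worldviews of this half-baked framework; all names and the example item are mine, purely for illustration:

```python
# Illustrative sketch only: tagging monitoring items along the three
# dimensions of the draft framework (components, levels, worldviews).
# The example item and all names are invented for this sketch.
from dataclasses import dataclass

COMPONENTS = ("pre-put", "input", "throughput", "output")
LEVELS = ("individual", "team", "organisational",
          "inter-organisational", "societal")
WORLDVIEWS = ("linear", "pragmatic", "emergent")

@dataclass
class MonitoringItem:
    description: str
    component: str  # one of COMPONENTS
    level: str      # one of LEVELS
    worldview: str  # one of WORLDVIEWS

    def __post_init__(self):
        # Reject tags that fall outside the framework's three dimensions.
        if self.component not in COMPONENTS:
            raise ValueError(f"unknown component: {self.component!r}")
        if self.level not in LEVELS:
            raise ValueError(f"unknown level: {self.level!r}")
        if self.worldview not in WORLDVIEWS:
            raise ValueError(f"unknown worldview: {self.worldview!r}")

# Example: collecting stories of changed practice in a community of practice.
item = MonitoringItem(
    description="Collect stories of changed practice in a regional CoP",
    component="output",
    level="inter-organisational",
    worldview="emergent",
)
print(item)
```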

Again, I am inviting you to improve this half-baked cake or to reject it as plainly indigestible. So feel free to shoot!

Notes:

(1)    Knowledge management is understood here as ‘encompassing any processes and practices concerned with the creation, acquisition, capture, sharing and use of knowledge, skills and expertise (Quintas et al., 1996), whether these are explicitly labelled as “KM” or not (Swan et al., 1999)’. This definition is extracted from the first IKM-Emergent working paper. Even though I don’t entirely agree with it, let’s consider that it creates enough clarity for the sake of understanding this blog post.

(2)    Previous discussions related to M&E of KM on KM4DEV:

  • Managing community of practice: creative entrepreneurs (22/11/2009) with a specific message on the impact of communities of practice
  • Value and impact of KS & collaboration (11/10/2009)
  • Evaluation of KM and IL at SDC (08/07/2009)
  • KM self-assessment (18/03/2009)
  • Organisational learning indicators (13/12/2009)
  • Monitoring and evaluating online information (05/02/2009)
  • Monitoring and evaluating online information portals (03/02/2009)
  • Evaluation of KM processes (30/01/2009)
  • Evidence of sector learning leading to enhanced capacities and performances (05/01/2009)

Network monitoring & evaluation: Taking stock


Another stock-taking post: not DVDs but network M&E (Photo credits: Hooverdust)

It was about time to prepare another of those stock-taking blog posts, don’t you think?

This time the topic is monitoring and evaluation (M&E) for networks, not least because a number of networks I am involved in will need to develop a solid M&E framework, both for themselves and for their respective donors, so this post could help come up with a better approach. And, who knows, perhaps you will also find something useful in here. If this is all rubbish, please put me out of my misery and point me to some quality references on the topic, ok?

When it comes to M&E of networks, documents are a lot more scattered than for the capacity development stock-taking post I wrote earlier. And to spice things up, Google returns a hell of a lot of misleading resources pointing to LAN/WAN network monitoring – clearly the web is still the stronghold of a self-serving (IT) community.

Fair enough! But luckily there are also relevant resources among my documents, of which I would like to mention the following:

Guides, tools and methods for evaluating networks (direct link to a Word document)

(Amy Etherington – 2005)

As the title indicates, this paper focuses on evaluation rather than monitoring of networks – as a means for networks to remain relevant and adapt if need be. Three major considerations are taken into account here:

  • measuring intangible assets (related to characteristics of networks such as social arrangements, adding value, creating forums for social exchange and joint opportunities);
  • issues of attribution (linked to issues of geographic and asynchronous complexity of networks, joint execution of activities, broad and long term goals of networks);
  • looking at internal processes: the very nature of networks renders internal processes – of mobilisation, interlinking, value-adding – very interesting. The further effects of the network on each individual member are also useful to look into.

And then follows a selection of nine evaluation methods (all dating from 1999 to 2005, though), very well documented, including checklists of questions, tables with dimensions of networks, interesting (or sometimes scary) models, and innumerable steps referring to various maturity stages of communities. This seems to be one of the most relevant references for finding practical methods to tackle network M&E.

Evaluating International Social Change Networks: A Conceptual Framework for a Participatory Approach (PDF)

(Ricardo Wilson-Grau and Martha Nuñez – 2006)

Among the most influential authors on the topic of M&E and networks, Wilson-Grau and Nuñez have written many of the documents referred to in the other papers mentioned here. This paper – which also focuses on the evaluation of networks – introduces the 8 or so functions that networks perform and considers four qualities and three operational dimensions. The result is a table of 56 criteria – shaped as questions – to be answered by members of the network, with a careful eye for the justification behind each criterion, because each network is different. The authors continue with the four types of achievements one can hope for from social change networks: operational outputs, organic outcomes, political outcomes (judged most useful by the authors themselves) and impact. Again the table is of great help, and this document is a useful introduction to the authors’ body of work.
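To give an idea of how such a question-shaped criteria table could be processed once members have answered it, here is a minimal Python sketch – my own illustration, with invented criteria, names and scores, not material from the paper:

```python
# Hypothetical sketch (mine, not the authors'): aggregating member answers
# to a criteria table shaped as questions. The criteria and scores are
# invented; the real 56 criteria are in the Wilson-Grau / Nunez paper.
from statistics import mean

# Each criterion is a question, scored 1 (weak) to 5 (strong) by each member.
criteria = [
    "Does the network facilitate joint action among members?",
    "Does the network amplify members' voices politically?",
]

# member -> one score per criterion (made-up answers)
answers = {
    "member_a": [4, 2],
    "member_b": [5, 3],
    "member_c": [3, 3],
}

for i, question in enumerate(criteria):
    scores = [member_scores[i] for member_scores in answers.values()]
    print(f"{question} -> average {mean(scores):.1f}")
```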

A Strategic Evaluation of IDRC-Support to Networks (Word)

(Sarah Earl – 2004)

Epitomising the long-term experience of the Canadian International Development Research Centre (IDRC) with monitoring and evaluation of networks, Sarah Earl presents, in this seven-page briefing note, a questioning process to evaluate the function of IDRC in supporting networks. In doing so, she stresses a series of questions pertaining to the coordination, sustainability and intended results / development outcomes of networks. She further explains the methodology used (a literature review, key informant interviews and an electronic survey of network coordinators, and lesson-learning sessions leading to stories written by IDRC staff). This paper can be useful for actually setting up a methodology to collect evidence about the functioning of a network.

Network evaluation paper (Word)

(June Holley – 2007)

June Holley has been working for over 20 years on economic networks. This five-page paper introduces a method that focuses on network maps and metrics, network indicators and outcomes. The paper suggests using scores and looking at awareness (of the network as a whole), influence, connectors, integration, resilience, diversity and core/periphery.

Network mapping and core-periphery (Image credits: Ross Dawson)

In terms of indicators, Ms. Holley recommends a series of questions that point to the self-organising and outcome-producing characteristics of the network, but also to questions of culture (as in shared norms and values) and to evidence of skills that allow the network to change.
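As an aside, some of these metrics become directly computable once the network’s relationships have been mapped. Here is a minimal sketch – my own illustration, not from Holley’s paper – using the Python networkx library on a made-up edge list:

```python
# Minimal sketch (not from Holley's paper): approximating a few network
# indicators - connectors, core/periphery, a crude resilience check -
# with networkx on an invented collaboration network.
import networkx as nx

# Made-up edge list: who collaborates with whom in the network
G = nx.Graph([("Ana", "Ben"), ("Ana", "Carla"), ("Ben", "Carla"),
              ("Carla", "Dede"), ("Dede", "Emeka"), ("Emeka", "Fatima")])

# 'Connectors': members who bridge otherwise separate parts of the network
betweenness = nx.betweenness_centrality(G)
top_connector = max(betweenness, key=betweenness.get)

# Core/periphery: k-core numbers give a rough sense of who sits in the core
core = nx.core_number(G)

# Crude resilience check: does the network stay connected if the
# top connector drops out?
H = G.copy()
H.remove_node(top_connector)

print("Top connector:", top_connector)  # Carla, in this toy network
print("Core numbers:", core)
print("Connected without top connector?", nx.is_connected(H))
```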

There are more (*) papers specifically focused on networks and their evaluation, but I found them less relevant, mostly because they are a bit dated.

Of course there are many other references on monitoring and evaluation in publications and resource sites about networks. Here is another, shorter, selection:

While on the topic of network M&E and its link with the specific monitoring of knowledge management, I would like to point to the summary of a 2008 discussion on the KM4Dev mailing list about M&E of KM: http://wiki.km4dev.org/wiki/index.php/Impact_and_M%26E_of_KM. This topic will probably remain interesting: it has been explored various times on the KM4DEV mailing list, it was recently touched upon in the francophone KM4DEV CoP SA-GE, and it is likely to reappear as a topic of choice in 2010 on various platforms, not least because IKM-Emergent is planning to work more on this issue after having released the first of two commissioned papers on M&E of KM (this working paper on monitoring and evaluation of knowledge was written by Serafin Talisayon). I will certainly report on this in the coming weeks and months.

As ever with this series of stock-taking posts, I will try to keep this overview updated with any other interesting resource I get my hands on. So feel free to enlighten me with additional resources that go deeper, provide synthetic clarity or offer a refreshing perspective on the topic of network monitoring. What has worked for you in your work with networks? What ways have you found useful to measure their effectiveness and other dimensions? What would be your words of caution when assessing networks?

Networks are here to stay for a while so this discussion goes on…

(*)

I came across a number of other papers that all have something to say but are a bit out of date and I decided not to reference them here.

G(r)o(w)ing organically and the future of monitoring


In the past three weeks I have been working quite a lot on monitoring again, as one of my focus areas (together with knowledge management/learning and communications): processing and analysing the results of RiPPLE monitoring for the first time, developing the WASHCost monitoring and learning framework, and generally thinking about how to improve monitoring, in line with the recent interest in impact assessment (IRC is about to launch a thematic overview paper on this), complexity theory and the general networks/learning alliance angle.

Monitoring growing organically

I think monitoring is going and growing the right way – following an organic development curve – and for me it is one of the areas that can really improve in the future, which perhaps explains the current enthusiasm for impact assessments. As mentioned in a previous blog post, I think the work we carry out on process documentation will later be integrated as part of monitoring – an intelligent way to monitor, which makes sense for donors, implementers (of a given initiative) and beneficiaries.

So what would/could be the characteristics of good monitoring in the future? I can come up with the following:

Integrated: in many cases, monitoring is a separate activity from the rest of the intervention, giving an impression of additional work and no added value. But if monitoring were linked with intervention activities, particularly planning and reporting, it would help a lot and seem more useful. In the work on the WASHCost monitoring and learning framework, the key trick was to focus M&L on the ongoing reporting exercise, and it worked wonders. In addition, monitoring should also be linked with (mid-term and final) evaluations so that the evaluation team – usually external to the project – can come up with a more consistent methodology while keeping distance and a certain degree of objectivity. Evaluations are a different matter and I’m not explicitly dealing with them here, even though they share a number of points with monitoring.

Informed: if monitoring is integrated with planning, there should be an analysis, before the project intervention, of the issue at hand and the potential best area of intervention. In line with this, a baseline should be established for the processes and outputs that will be monitored. This helps prepare monitoring activities that make sense, and interventions that really focus on how to improve what doesn’t work (but could help tremendously if it did);

Conscious: of what is at stake and therefore what should be monitored. The intervention should be guided by a certain vision of development, a certain ‘hypothesis of change’ that probably includes a focus on behaviour changes by certain actors, on some systems, processes and products/services, and more generally on the system as a whole in which the development intervention takes place. This conscious approach would therefore be careful not to focus exclusively on hardware aspects (how many systems were built) nor exclusively on software issues (how much the municipality and private contractors love each other now);

Transparent and as objective as possible: now that’s a tricky one. But a rule of thumb is that good monitoring should be carried out with the intention to report to donors (upward accountability) and to intended beneficiaries (downward accountability) – this guarantees some degree of transparency – and should be partly carried out by external parties to ensure a more objective take on monitoring (with no bias towards only positive changes). Current attempts to involve journalists in monitoring development projects are a sound way forward, and many more options exist.

Versatile: because monitoring should focus on a number of specific areas, it shouldn’t just use quantitative or qualitative approaches and tools but a mixture of them. This would help make monitoring more acceptable (in the accountability vs. learning discussion, for instance) and would provide a good way to triangulate monitoring results, ensuring more objectivity in turn.

Inclusive: if monitoring includes external parties, it should focus on establishing a common understanding, a common vision of what is required to monitor the intervention, and it should also involve training activities for those who will be monitoring the intervention. Monitoring should thus include activities for communities as well as for donors; it should bring them together and persuade them that they all have a role to play in proving the value of the intervention and, especially, in improving it.

Flexible: a project intervention rarely follows the course it originally intended to follow; equally, monitoring should remain flexible to adapt to the evolution of the intervention – in its design, in the areas that are monitored and in the methods that help monitor those specific areas. That is the value of process documentation and of, for example, the Most Significant Change approach: revealing deeper patterns that have a bearing on the intervention but were not identified or recognised as important.

Long-term: assuming that development is, among other things, about behaviour and social changes, these changes are long-term; they don’t happen overnight. Consequently, monitoring should also take a long-term perspective and indeed build in ex-post evaluations to revisit intervention sites and see what the later outcomes of a given intervention are.

Finally, and with all else said before, monitoring would gain from being simpler: planned according to what is necessary and what is good to monitor, in line with existing resources, and perhaps following a certain donor’s perspective – to monitor only what is necessary. (A small illustrative sketch of what such a framework could look like follows below.)
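To make this less abstract, here is a purely illustrative sketch of what such a monitoring framework could look like as a simple data structure. The field names and example indicators are my own assumptions, not an existing WASHCost or IRC format; the point is only to show how ‘informed’ (a baseline), ‘versatile’ (mixed methods), ‘transparent’ (explicit audiences) and ‘flexible’ (easily revised) could translate into practice:

```python
# Hypothetical sketch: structuring a monitoring framework so that it is
# informed (baseline), versatile (mixed methods), transparent (audiences)
# and flexible (indicators can be revised as the intervention evolves).
# Field names and examples are invented, not an existing WASHCost/IRC format.
from dataclasses import dataclass

@dataclass
class Indicator:
    what: str        # what is monitored (process, output, behaviour change)
    baseline: str    # 'informed': the starting point, established up front
    methods: list    # 'versatile': a mix of quantitative and qualitative
    frequency: str   # 'integrated': tied to the regular reporting cycle
    audiences: list  # 'transparent': upward and downward accountability

framework = [
    Indicator(
        what="Use of unit-cost data by district planners",
        baseline="No districts using unit-cost data at project start",
        methods=["document review", "key informant interviews"],
        frequency="every reporting period",
        audiences=["donor", "learning alliance partners"],
    ),
]

# 'Flexible': the framework is just data, so indicators can be added,
# reworded or dropped as the intervention evolves.
framework.append(Indicator(
    what="Quality of dialogue between municipality and contractors",
    baseline="Ad hoc contact only",
    methods=["Most Significant Change stories"],
    frequency="twice a year",
    audiences=["project team", "communities"],
))

print(f"{len(framework)} indicators currently monitored")
```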

Hopefully that kind of monitoring will not feel like an intrusion by external parties into the way people carry out their job, nor like just an additional burden to carry without expecting anything in return. Hopefully that kind of monitoring will put the emphasis on learning, on getting value from the action, and on connecting people to improve the way development work is going.

That PD thing again


And here we go again! This is the second major process documentation workshop after the Lodz workshop in July 2007, a workshop where IRC and partners tried not so much to settle on a definition of the concept as to allow participants to play around with three media: text, video and photography. This time, the workshop is sponsored by the WASHCost project and includes participants from other backgrounds (see my latest blog post about this).

On this first day, we have covered the why (aims of process documentation), the principles of PD, the basics of interviewing and the initial steps towards a process documentation plan.

First observations from the field – more like a hotel room if you ask me:

  • A definition may emerge. The exercise of prioritising the aims – from a list of over 20 aims that our facilitator Peter McIntyre collected from five different projects using process documentation – went amazingly well and placed a few objectives high up. Does this mean that agreement comes naturally, or that certain messages have been crafted well enough, or repeated often enough, to influence our participants? Either way, this is a very encouraging result.
  • The lines between communication and monitoring still run right through process documentation work. As my colleague Nick Dickinson put it, process documentation helps identify interesting areas to document – leading to crafting communication messages – and it helps again at the end of the loop, to monitor how stakeholders have responded to our interactions.
  • Principles of process documentation are emerging, and the realm of information integrity is getting unpacked: one needs to check that outputs are correct (either directly with the stakeholders concerned or at least within the team if the output is not made public); it is clear that some of your partners will not accept your (partial) vision; and inside the team, constructive criticism should be encouraged: if the process documentation specialist is roughly a 75% team member, s/he should also play a 25% external ‘journalist’ role, feeling free to provide constructive feedback.
  • Short feedback loops are essential! Regardless of the final process documentation outputs, key insights from process documentation work should quickly inform the operating team. This is part of the constructive feedback mentioned above.
  • The name, however unsexy, has made it into the common language – granted, in certain circles only. The India team didn’t want to change the term ‘process documentation’ because it is known by their learning alliance partners, and changing names would create more confusion.
  • In spite of all these very encouraging signs, it is remarkable to see that when it comes to process documentation planning (perhaps an oxymoron?), most teams quickly jump to outputs/products, reinforcing the quick-consumption culture of the development sector. Slow food (read: learning) is not on the menu quite yet, and adopting a learning culture is not yet an easy reality to implement. According to one of the external (non-WASHCost) participants (in charge of communication activities in her organisation), this kind of process documentation activity was not on the agenda because it takes too much time. Ooh, that battle is far from won, but hey, one starts somewhere… and still, improvements are noticeable.
Process documentation as a reflection on and of reality

Anyway, with an approach (process documentation) that’s increasingly meaningful, I personally think there’s never been a better moment to name this thing differently. No one has come up with alternative names yet, in spite of our repeated urging to devise new ones.

My personal brainstorm outcomes: process enquirers (booo), rapid reporters (duh), effective(ness) detectives, action investigators, agents provocateurs (revealing the invisible), change rangers (scouting for and identifying trails), trail hunters… the list could go on and on, I guess. It would be fun to do an exercise on the kind of figure (hero, character or even animal) that process documentation specialists think of when considering their function.

At any rate, of all three key PD actions (observe, analyse, disseminate), I would say observe/intervene is the key one. And for that reason, detective or ranger sounds like the closest match.

I can’t wait for tomorrow… see what our productive detectives come up with…

Capitalising on process documentation – and changing names please!


Next week, a group of 4-5 of us from IRC will be in Accra with all the country teams from the WASHCost project to work together on ‘process documentation’. What started as a training workshop has gradually become more of a combined orientation and training workshop.

The objectives are manifold: a) agree on a working definition of process documentation (what the heck is it?); b) train all staff in the use of photography, video, interviews, etc.; and c) decide what we are going to document in WASHCost, starting from the ‘hypothesis of change’ of the project.

There’s a few very interesting sides to this workshop:

  • It will be the largest workshop dedicated to process documentation since the one we organised in Poland in July 2007 – which resulted in a very nice blog.
  • It will not only be about the practice but also a little bit about the theory of process documentation, which really needs some agreement. That’s really one of the problems with new trends and buzzwords: everyone uses them in a slightly different way. In Accra, we hope to come up with a common understanding.
  • Leading on from that, we should be able to capitalise a bit on all kinds of experiences with process documentation from the RiPPLE project, WASPA Asia, EMPOWERS and SWITCH. We have accumulated quite some ideas and insights from this ‘soft monitoring’ work, and IRC is dedicated to documenting process documentation this year (multiple-loop learning here 😉) – perhaps to make available a toolbox, some case studies and many examples of outputs…
  • We hope to come up with a better name for ‘process documentation’ and particularly for the person in charge. ‘Process documentalist’ seems to refer to a very scientific entomologist studying hot air, so it’s time to jazz this up a bit and end the embarrassment when mentioning the PD words…
  • Finally, some partners from CREPA, WaterAid and the resource centre network in Ghana will also participate in the workshop. They should help challenge our ideas and ways of working, and hopefully they will also spread the word about this process documentation work and perhaps take it up in their own line of work.

Another interesting aspect of this work is that it should very nicely complement the upcoming publication on impact assessment planned for later this year.

What I personally hope is to find a place to park process documentation in the hall of concepts that we have produced in the last few years – and perhaps to sound out colleagues and partners on their take on process documentation. I still think that PD is essentially what intelligent monitoring should cover as well, but since donors follow different frameworks of reference for monitoring, it is no wonder that process documentation is still an undefined and ill-accepted practice among them. Perhaps the capitalisation work around process documentation will help change this perspective. And perhaps a sexier name would too…