A knowledge management primer (3): JKLMNO


The KM alphabet primer continues (Credits: Le web pedagogique)

This is a new series of posts, an alphabet primer of agile knowledge management (KM), touching upon some of the key concepts, approaches, methods, tools and insights in the world of KM. And because there were several alternatives for each letter, I’m also introducing here the words I had to let go of.

Today, after the ABC of KM and the next six letters (DEFGHI), I’m continuing the journey through the alphabet with JKLMNO.


 

J for Journey

Any and every KM initiative is a journey in itself and, because it is a learning journey with no fully guaranteed results, the journey matters as much as the destination. It brings up lots of ideas, feedback, insights and more.

J could also have been…

Journaling – A great practice for documentation, journaling (as blogging is) has the potential to reveal deeper patterns that explain a lot of things. For KM, journaling about the KM initiative, documenting the process, and even capturing the impressions of the individuals involved can make the difference between success and failure, between quick and slow, between quality work and sloppiness.

Knowledge (Credits: Iqbal Osman)

K for Knowledge

Of course, what else? Knowledge is the capacity to turn information into action; it is the sum of the insights we have, but not a commodity that can be transferred. There are many (also visual) understandings of knowledge; I’m just offering my definition here. But knowledge is certainly what puts KM in a mystical world, as it relates to how our brains work and how we connect with each other to form a collective intelligence.

K could also have been…

Know-how – Next to what we know, there are also the many processes we know that help us do things: practical, hands-on, instructional knowledge to move from theory to practice, including practical smarts.

L for Learning

I couldn’t leave out the last part of my definition of KM, as it is the most important one to justify the existence of knowledge management. And whether it’s about learning how to retain institutional memory or how to innovate, learning is the driving force that makes us ever better equipped to deal with challenges and increases our capacity to adapt and anticipate, to be resilient etc.

L could also have been…

Management versus Leadership (Credits: David Sanabria)

Leadership – Leadership is the vision that drives initiatives, shows the way and rallies support all along. No KM endeavour survives without strong leadership, leading by example – and innovating. And this is true at all levels, not just at the level of top management: the KM project leader, management and personnel alike must demonstrate that sort of leadership – but they can only do so if they have all been properly involved and empowered, of course.

Library – Libraries used to be the crude epitome of knowledge management in times of old. The vast quantity of information that codified the knowledge of the ancients was so great that it’s no wonder the first era of KM wanted to mimic this in the digital world. But that was not enough, as the advocates of online brochures learned at a high cost.

M for Management

Leadership is key in KM. But management is also very important. Managing change, managing assets, managing processes, managing tools and managing people to make sure all these elements work in synergy and support each other.

M could also have been…

Monitoring – Part of managing KM is monitoring how it is going: collecting metrics that give indications of visibility, use, appreciation and productivity gains of any kind. Monitoring is at the heart of learning, and thus of KM too – even though it is often where people give up on KM, because it is so difficult to go beyond measuring the use of information platforms and learning processes to point at what people are actually doing with them.

Meta tags – An essential element of curation is the meta tag, which describes a resource and makes it easier to retrieve later through search.
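
To make this more concrete, here is a minimal sketch (in Python, not tied to any particular platform) of how meta tags can describe resources and support retrieval later on. The tag vocabulary, the sample records and the find_by_tag helper are all invented for illustration.

```python
# Describing resources with meta tags and retrieving them through a naive
# tag search. Records and tag vocabulary are made up for the example.
resources = [
    {"title": "KM strategy workshop report",
     "tags": {"topic": "knowledge management", "type": "report", "year": "2016"}},
    {"title": "Community of practice facilitation tips",
     "tags": {"topic": "communities of practice", "type": "how-to", "year": "2015"}},
]

def find_by_tag(items, key, value):
    """Return every resource whose meta tag `key` matches `value`."""
    return [r for r in items if r["tags"].get(key) == value]

for hit in find_by_tag(resources, "type", "report"):
    print(hit["title"])  # -> KM strategy workshop report
```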

N for Network

From networkshops to communities of practice and assessing networked value, from personal learning networks to engaging on social networks, networks are ubiquitous. The world of KM in 2016 cannot avoid this fact, and it explains why so much emphasis nowadays goes to distributed learning, massive open online courses, cultivating personal learning networks etc. Knowledge management always was a network thing in itself; it has now just become utterly obvious.

Networks, interconnection (Credits: Rob/FlickR)

N could also have been…

Your suggestions?

O for Open 

If the ultimate goal of knowledge management is to connect and convert everyone to cultivating our collective intelligence, then a general state of openness is central to it. Open knowledge, open source, open access, working out loud and all the rest of it.

The reality is still a bit more subtle than this: in certain areas where the mindset is not all that open, agile KM has to create safe closed spaces where people can progressively taste the power of Open, in smaller groups first. But open KM is almost a tautology.

Open Knowledge


The wealth of communities of practice – pointers to assess networked value?


Building upon, among others, the CoP 4L model by Seely Brown, Wenger and Lave (credits: dcleesfo / FlickR)

The KM4Dev community of practice is going through an intensive action phase, beyond the conversations, as the grant given by the International Fund for Agricultural Development (IFAD) for the period 2012-2013 is leading to a number of interesting activities.

Among them is a learning and monitoring (L&M) plan, which really focuses on learning from all the other IFAD-funded activities rather than on monitoring (in the sense of checking delivery of outputs against the plans). The focus of our L&M plan is the networked development and value creation of a community of practice (CoP): how does it fare in terms of governance principles, identity and level of activity, engagement channels, and learning / outcomes (which really represent the most important value creation)?

I am involved in the learning and monitoring team and as part of it have started (with support from other L&M team members) developing the table below.

This table offers a suggested selection of ‘learning areas’ that in our eyes matter when working in communities of practice such as KM4Dev.

For each learning area below, the specific issues in that area are listed with a short description.

Governance

  • Transparency – Systematic sharing and accessibility of the results of conversations, decisions, initiatives, reification activities (see below) etc., including the selection process for core group members
  • Vision, values and principles – Development, existence, clarity, understanding and acceptance of a general vision, principles and values for the community of practice, by and for its members. Normally this is not really a ‘learning’ area, but if it isn’t in place it becomes one
  • Leadership – Demonstrated (and particularly accepted) leadership of the core group, and occasionally of others, in the eyes of other members of the KM4Dev community. Is there any dissonance between the two groups?
  • Mobilisation and commitment – See below. This is also mentioned under governance because people involved in the CoP governance have to mobilise resources and commit themselves to activities in a specific way

Identity and activity

  • Diversity and expansion – Profile of the members of the community and of the core group (language, region, type of organisation etc.); growth and expansion (frequency of new members, how they join etc.) and ties with external networks
  • Conversation – Frequency and quality of conversations around the domain (knowledge management for development) or the community (KM4Dev)
  • Reification – Tendency (quality and frequency) of the community to ‘reify’ conversations into tangible outputs, e.g. a blog post, wiki entry, journal article etc. This also has a bearing on learning and outcomes
  • Mobilisation and commitment – Capacity of core group members and other KM4Dev members to mobilise themselves and commit to activities (which activities? to what extent / degree of involvement?) and indeed to deliver according to plan and with strong learning. This also has a bearing on governance
  • Participation – Degree of participation of different members in conversations and other activities
  • Reflection – Evidence of social learning, and of the development and sharing of new insights as part of activities (and results – this has a bearing on learning / outcomes)
  • Cohesion – Evidence that the relationships between members of the community are good and that everyone finds their place in the community while feeling part of a whole

(Learning and) Outcomes

  • Reification / outputs – See above. Production of outputs (quality / frequency?), planned or spontaneous
  • Reflection / changed thinking and discourse – See above. Evidence that reflections from the KM4Dev community have changed thinking and/or discourse among others, e.g. citations, semantic analysis
  • Inspiration / changed behaviour – Evidence of change as a new way to proceed, inspired by KM4Dev activities
  • Innovation / changed artefact or approach – Evidence of KM4Dev influencing the development of a new artefact or method, codified concretely
  • Impact – Evidence of larger changes (in autonomy of decision and well-being related to livelihoods) where KM4Dev activities have inspired / influenced others within the community and particularly beyond. Caveat: attribution

KM4Dev engagement channels

  • Suitability for participation – The different KM4Dev channels (mailing list, wiki, Ning community, annual meetings) foster dialogue, engagement and learning
  • Ease of use / availability of KM4Dev outputs – The different channels are easy to use and complement each other; they make KM4Dev activity outputs visible and available
  • Identity – Governance of KM4Dev is clear in all engagement channels

This table and the plan it belongs to triggered a very rich discussion in the KM4Dev core group over the past couple of weeks. This conversation was meant to gather some initial reactions before opening it up more widely to the entire community. As we are about to embark on a much wider and open consultation process with the rest of the community, I thought it might be useful to post this here and see if any of you have suggestions or feedback on these learning areas…

Communication, KM, monitoring, learning – The happy families of engagement


Many people seem to be struggling to understand the differences between communication, knowledge management, monitoring, learning etc.

Finding the happy families (Photo: 1st art gallery)

Let’s consider that all of them are part of a vast family – the ‘engagement’ family. Oh, let’s be clear, engagement can happen in many other ways, but for the sake of simplicity let’s focus on these four and say that all of these family members have in common the desire – or necessity – to engage people with one another, to socialise, for one reason or another. And let’s try to unpack this complex family tree, to discover the happy families of engagement.

The engagement family is big; it contains different branches and various members in each of them. The main branches are, roughly, Communication (Comms), Knowledge management (KM) and Monitoring & evaluation (M&E).

Communicating

The comms branch is large and old. Among the many siblings, the most prominent ones are perhaps Public Relations and Marketing. They used to be the only ones around in that branch, for a time that seems endless. All members of this branch like to talk about messages, though their horizon has been expanding to other concepts and approaches, of late.

  • Public relations has always made the point that it’s all about how you come across to other folks and enjoys very much the sheen and the idea of looking smart. But some accuse him of being quite superficial and a little too self-centred.
  • His older sibling marketing has adopted a more subtle approach. Marketing loves to drag people into a friendly conversation, make them feel at ease and get them to do things that perhaps they didn’t want to do in the first place. Marketing impresses everyone in the family with his results, but he has also upset quite a few people in the past. He doesn’t always care about that, as he thinks he can always find new friends, or victims.
  • Another of their siblings has been around for a while too: Advocacy is very vocal and always comes up with a serious message. Some of his family members would like him to adopt a less aggressive approach. Advocacy’s not silly though, so he’s been observing how his brother marketing operates and he’s getting increasingly subtle, but his image is very much attached to that of an ‘angry and hungry revolutionary loudmouth’.
  • Their sister communication is just as chatty, but she stays a bit behind the scenes. Communication doesn’t care about promoting her family, selling its treasures or claiming a message; she just wants people to engage with one another, in and out of the family. She is everywhere. In a way she might be the mother of this branch.
  • Their youngest sister, internal communication, has been increasingly present over the past few years and she really cares for what happens among all members of her family. She wants people to know about each other and to work together better. She has been getting closer and closer to the second main branch of the engagement family tree: knowledge management, but she differs from that branch in focusing on the internal side of things only.
Knowledge management

The Knowledge management branch also comprises many different members and in some way is very heterogeneous. This branch doesn’t care so much for messages as for (strategic) information and conversations. For them it’s all about how you can use information and communication to improve your approach.

  • The old uncle is information management. He has been around for a while and he still is a pillar of the family. He collects and organises all kinds of documents, publications and reports and puts them neatly on shelves and online in ways that help people find information. His brothers and sisters mock his focus on information: without people engaging with it, information does little.
  • His younger sister knowledge sharing was long overshadowed in the KM branch but she’s been sticking her head out a lot more, taking credit for the more human face of the KM branch. She wants people to share, share and share, engage and engage. She’s very close to her cousin Communication from the Comms branch, but what she really wants is to get people to get their knowledge out and about, to mingle with one another. She has close ties with her colourful cousins facilitation, storytelling and a few more.
  • They have another brother called ‘organisational learning’, who was very active for a while. He wanted everyone to follow him and his principles, but he has lost a lot of visibility and momentum over the years as many people found out that the way he showed was not as straightforward as he claimed.
  • The little brother PKM (personal knowledge management) was not taken seriously for a long time, but he is really a whiz kid and has given a lot of people confidence that perhaps his branch of the family is better off betting on him, at least partly. He says that every one of us can do much to improve the way we keep our expertise sharp and connect with kindred spirits. To persuade his peeps, PKM often calls upon his friends from social media and social networks (though these fellas are in demand by most family members mentioned above).
  • A very smart cousin of the KM branch, innovation, is marching into the limelight. She’s drop-dead gorgeous and keeps changing, never settling on one facet of her identity. Her beauty, class and obvious common sense strike everyone who sees her, but she disappears quickly if she’s not entertained. In fact, many in the KM family would like to get her on their side, but she’s elusive. Perhaps if many family members got together they would manage to keep her at their side.
Monitoring

The M&E branch has always been the odd group out. They are collectors and reporters. Throughout their history they have mostly focused on indicators, reports, promises made, results and lessons learnt. Other family members consider this branch to be little fun and very procedural, even though of late they have bent their approach – but not everyone around seems to have realised that.

  • Planning is not the oldest but perhaps the most responsible one of this branch. He tries to coordinate his family in a concerted manner. But he is also quite idealistic, and sometimes he tends to ignore his siblings and stick to his own ideas, for better (or usually for worse). Still, he should be praised for his efforts to give some direction, and he does so very well when he brings people to work with him.
  • Reporting, the formal eldest brother, is perhaps the least likely to change soon. He takes his job very seriously and indeed he talks to all kinds of important people. He really expects everyone to work with him, as requested by those important contacts of his. He doesn’t always realise that pretty much everyone considers him rather stuffy and old-fashioned, but he knows – and they sometimes forget – that he does matter a lot as a connector between this whole funky family and the wider world.
  • Data collection is the next sister, one who tends to wander everywhere; she lacks a sense of prioritisation, which is why planning really has to keep an eye on her. She is indeed very good at collecting a lot of stuff, but she doesn’t always help her siblings make sense of it. Everyone in the family agrees she has an important role to play, but they don’t quite know how.
  • That is why her other sister reflection always trails behind, absorbing what data collection brings forward and making sense of it. She is supposedly very astute, but occasionally she does her job too quickly and misses crucial lessons or patterns. Or perhaps she’s overwhelmed by what data collection brings her and she settles for comfort. But she usually has great ideas.
  • They have a young sister called process documentation. She’s a bit obscure to her own kin, but she seems to have built a nice rapport with the other branches of the wider family and seems more agile than her own brothers and sisters. She goes around and observes what’s going on, picking up the bizarre and unexpected, the details of how people do things and how it helps their wider work.
Learning is patient

The wise godmother (1) of them all is learning. Learning generously brings her good advice to all her family, for them to improve over time. She wants her Comms branch offspring to engage in ways that benefit everyone; she encourages their KM siblings to nurture more genuine and deeper conversations that lead to more profound insights and more effective activities; she invites the sidetracked M&E branch to find their place, not be obtuse, and use their sharp wits to bring common benefits and help understand what is going well or not, and why. More than anything, she encourages all her godchildren to get along with one another, because she sees a lot of potential for them to join hands and play together.

Learning could do it all on her own, but she prefers to socialise – she loves socialising, in fact – and that’s how she keeps on top of the game and keeps bringing the light over to other parts of the family. It’s not an easy game for her to bring all her flock to play together: there are a lot of strong egos in there. But she is patient and versatile, and she knows that eventually people will come to seek her wisdom…

Do you recognise your work in those happy families? Who am I missing and where in the tree should they fit?

Share your questions: The personal effectiveness and knowledge survey


What luck!

What makes some of us fly high? (Photo credits: KenSchneiderUsa, FlickR)

I always thought that knowledge sharing and information management inside my organisation were left to the basics of organic gardening, that is: chaos, spontaneity and emergence. We always gave more attention to our external projects and clients; rightly so, of course, since our purpose is to work for others… But then you find that you have at times slightly dysfunctional internal communication and ‘pockets of expertise’ not connected as much as they could be. Nothing extraordinary here – we are talking about universal KM challenges, the kind of issues that all organisations deal with to some extent.

What is really interesting in such situations, though, is that most people find workarounds. As human beings we are resilient, so we adapt. And our workarounds sometimes fill gaps even better than the policy in place – or its absence. The challenge here is to tap into that creative potential: to seek, explain and amplify the smart workarounds already in use in some pockets. The absence of guidance, or the ailments of the frameworks and procedures in place, can be a very powerful source of wider innovation – if indeed channelled.

And so it seems I might be able to work on this set of issues for my own organisation, so I am happily compiling a series of questions to interview my colleagues and find out more about the way they carry out their knowledge work and reach personal effectiveness.

After a discussion with my colleague and partner in KM crime (1), I’ve decided to design this questionnaire around a) explicitly seeking their good practices and tips to reach personal effectiveness and b) implicitly finding out how they use information and knowledge to leverage that.

I would love to tap into your collective brainpower to find more (or fewer, but sharper) questions:

Reaching personal effectiveness (explicit questions):

  • Keeping on top of your field: how do you keep track of relevant information for your field of expertise and how do you keep the knowledge and skills you need sharply up-to-date?
  • Planning: How frequently do you plan, on what time horizon and what tools do you use for this?
  • Time spending: how do you fill your timesheet and what are your observations?
  • Prioritising and making decisions: how do you juggle with multiple activities? How do you prioritise, on what basis, with what outlook?
  • Monitoring: how do you monitor your expertise, your work, outputs, outcomes?

Knowledge and information (implicit questions):

  • Identifying information and answers: how do you find good questions and identify the information gaps?
  • Finding information and answers: Where do you find it? Via who? How?
  • Creating knowledge: How do you create it? Where do you record it? Using what systems? How do you find focus and develop a creative environment? Do you create knowledge preferably alone or with others?
  • Using information: What do you use information for? Whose information (what sources) do you use? What for (for research, to write articles, to develop proposals etc.)?
  • Sharing knowledge: How do you share knowledge, with whom, on what channels?
  • Documenting and storing information: Do you document discussions and events? What do you document?  How and on what systems or devices? Where (on what systems) do you store your information generally? How often do you do that, when exactly (at what moment)? Do you archive your information? How and what for?

For each of these areas, I intend to ask them about their personal advice, tips and tricks, and sources of inspiration. In the process, I also intend to raise their awareness of a number of social media tools such as del.icio.us / Diigo, Slideshare, Twitter, Yammer, blogs on WordPress and Blogger, Quora etc.

A subsidiary question will be to ask them who, in their opinion, is the most effective colleague, and for what reason. I hope this will really help us boost our information & knowledge processes and understand some homegrown sources of creative and productive inspiration. There should be some very useful lessons to tease out for the rest of you too – I’ll be sure to post an overview of the key lessons here!

For now though, your questions are more than welcome – make it work for you too!

Notes:

(1)    Jaap Pels.

What the *tweet* do we know (about monitoring/assessing KM)?


On Tuesday this week I moderated my first ever Twitter chat, thanks to the opportunity provided by KMers (as mentioned in a recent blog post). It was a very rich and at times overwhelming experience in terms of moderation – more on this in the process notes at the bottom of this post.

KMers provides a great opportunity to host Twitter chats! Tweet on! (photo credits: ~ilse)

The broad topic was about ‘monitoring / assessing KM’ and I had prepared four questions to prompt Tweeters to engage with the topic:

  1. What do you see as the biggest challenge in monitoring KM at the moment?
  2. Who to involve and who to convince when monitoring KM?
  3. What have been useful tools and approaches to monitor KM initiatives?
  4. Where is M&E of KM headed? What are the most promising trends (hot issues) on the horizon?

After a couple of minutes at the start, waiting for all participants to arrive, we began listing a number of key challenges in monitoring / assessing KM:

  • Understanding what we are trying to assess and how we qualify success – and jointly agreeing on this from originally different perspectives and interests;
  • The disconnect between monitoring and the overall strategy and perhaps its corollary of (wrongly) obsessing on KM rather than on the contribution of KM to overall objectives;
  • The crucial problem of contribution / attribution of KM: how can we show that KM has played a role when we are dealing with behaviour changes and improved personal/organisational/inter-institutional effectiveness?;
  • The dichotomy between what was described as ‘positive’ monitoring (learning how we are doing) and ‘negative’ monitoring (about censoring and controlling peoples’ activities);
  • The occasional hobby horses of management and donors to benchmark KM, social media, M&E of KM etc.
  • The problem of focusing on either quantitative data (as a short-sighted way of assessing KM – “Most quantitative measures are arbitrary and abstract. …adoption rate doesn’t really equate to value generation” – Jeff Hester) or rather qualitative data (leaving a vague feeling and a risk of subjective biases);
  • The challenge of demonstrating added value of KM.
  • The much-needed leadership buy-in, which can make or break assessment activities.

The challenges were also felt as opportunities to ‘reverse engineer successful projects and see where KM played a role and start a model’.

An interesting perspective from Mark Neff – that I share – was about monitoring from the community perspective, not from that of the business/organisation.

This last issue hinted at the second part of the chat, which was dedicated to what turned out to be a crux of the discussion: who do you need to involve and who to convince (about the value of KM) when monitoring KM.

Who to involve? Customers / beneficiaries, communities (for their capacity to help connect), even non-aligned communities, users / providers and sponsors of KM, employees (and their capacity to vote with their feet). Working in teams was suggested (by Boris Pluskowski) as a useful way to get knowledge flowing, which eventually helps the business.

Who to convince? Sponsors / donors (holding the purse strings) and leaders (who, unlike managers, are not convinced by measurement but instead like outputs and systems thinking).

What is the purpose of your monitoring activities? Management? Business? Productivity? Reuse? Learning? Application? Membership? Mark Neff rated them as all interesting (another challenge there: choose!). Rob Swanwick made the interesting point of measuring within each unit and having KM (and social media at that) mainstreamed in each unit, rather than dedicated to a small group.

Raj Datta shared his interesting perspective that it is key to explore and expand from the work of communities that are not aligned with business objectives.

The third part continued with some tools and approaches used to assess KM.

The key question came back: what are we looking at? Increasing profits, sales and the engagement of customers? Participation in CoPs? Answers provided within 48 hours? Adoption rates (with the related issue of de-adoption of something else, as Rob Swanwick pointed out)? Project profile contributions? Percentage of re-use in new projects? Stan Garfield suggested setting three goals and measuring progress against each (as described in his masterclass paper about identifying objectives). Mark Neff also stressed that it all depends on the level of maturity of your KM journey: better to build a case when you begin with KM, and to look at implementing something or at the adoption rate when you’re a bit more advanced… At his stage, the man himself sees “efforts to measure the value we provide to clients and hope to extend that to measures of value they provide”.

And storytelling wins again! The most universal and memorable way to share knowledge? (photo credits: Kodomut)

In spite of these blue-sky considerations, the KMers group nonetheless offered various perspectives and experiences with tools and approaches: social network analysis (to measure community interaction), Collison and Parcell’s knowledge-sharing self-assessment, outcome mapping (to assess behaviour change), comparative analysis (of call centre agents using the KM system or not), and a mix of IT tools and face-to-face meetings to create conversations.
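
As an aside, the first of these tools is easy to experiment with. Below is a toy sketch of social network analysis in Python using the networkx library; the members and reply links are invented, whereas in practice the edges would come from mailing-list, forum or platform interaction data.

```python
import networkx as nx

# Invented interaction data: an edge means two members replied to each other.
g = nx.Graph()
g.add_edges_from([
    ("ana", "ben"), ("ana", "carla"), ("ben", "carla"),
    ("carla", "dev"), ("dev", "emma"),
])

# Density: the share of possible member-to-member ties that actually exist.
print("density:", round(nx.density(g), 2))

# Degree centrality: who interacts with the largest share of the community.
for member, score in sorted(nx.degree_centrality(g).items(), key=lambda kv: -kv[1]):
    print(member, round(score, 2))
```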

But what really stole the show were success stories. Jeff Hester mentioned that “they put the abstract into concrete terms that everyone can relate to”. Stories could also take the form of testimonials and thank-you messages extracted from threaded discussions. At any rate, they complement other measurements, they sell, and they are memorable.

Rob Swanwick pondered: “Should stories be enough to convince leaders?” Roxana Samii suggested that “leaders will be convinced if they hear the story from their peers or if they really believe in the value of KM – no lip service”, and Boris Pluskowski finished this thread with a dose of scepticism, doubting that leaders would find stories enough to be convinced. In that respect, Mark Neff recommended assessing activities on our own and leading by example, even without the approval of managers or leaders, because they might not be convinced by stories or even numbers.

Of course the discussion bounced off to other dimensions… starting with the gaming issue. A new term to me, anyway, but indeed: how do you reduce the biases induced by the expectations of the people who are either monitoring or being monitored? And should we hide the measurements to avoid gaming (“security by obscurity”, as mentioned by Lee Romero), or should we, on the other hand, explain them and reveal some of the variables to build buy-in and confidence, as suggested by Raj Datta – the kind of transparency that, as Mark Neff professed, is important for authentic behaviours?

Finally, the question about where M&E of KM is headed (fourth part) didn’t really happen in spite of some propositions:

A Twitter chat can also mean a lot of tweets running in parallel (photo credits: petesimon)

  • Focusing more on activities and flows in place of explicit knowledge stock (Raj Datta)
  • Mobile buzzing for permanent monitoring (Peter Bury)
  • Some sort of measurement for all projects to determine success (Boris Pluskowski)
  • Providing more ways for users to provide direct feedback (e.g., through recommendations, interactions, tagging, etc.) (Stan Garfield)

After these initial efforts, the group instead happily continued discussing the gaming issue and came to the conclusion that a) most KMers present seemed to favour a transparent system over a hidden one that aims at preventing gaming, and b) gaming can also encourage (positive) behaviours that reveal the flaws of the system and can be useful in that respect (e.g. Mark’s example: “people were rushing through calls to get their numbers up. People weren’t happy. Changed to number of satisfied customers.”).

With the arrival of V Mary Abraham, the thorny question of KM metrics was revived: how to prove the positive value of KM? Raj Datta nailed an earlier point by mentioning that anyway “some quantitative (right measures at right time in KM rollout) and qualitative, some subjective is good mix”. On the question raised by V Mary Abraham, he also offered his perspective of simplicity: “take traditional known measures – and show how they improve through correlation with KM activity measures”. This seemed to echo an earlier comment by Rob Swanwick: “Guy at Bellevue Univ has been doing work to try to isolate ROI benefits from learning. Could be applied to general KM”.

In the meantime Mark Neff mentioned that to him customer delight was an essential measure and other tweeters suggested that this could be assessed by seeing the shared enthusiasm, returning and multiplying customers (through word of mouth with friends).

Boris Pluskowski also pushed the debate towards innovation, as an easier way to show the value of intangibles than KM. V Mary Abraham agreed, saying “Collab Innov draws on KM principles, but ends up with more solid value delivery to the org”. To which Raj Datta replied: “to me KM is about collaboration and innovation – through highly social means, supported by technology”. And the initial tweeter on this thread went on about the advantage of innovation being, at heart, a problem-solving exercise, with a before and an after / result – and it is possible to measure results. V Mary Abraham: “So #KM should focus on problem-solving. Have a baseline (for before) and measure results after”, because solving problems buys trust. But next to short-term problem-solving, Mark Neff also pointed at the other side of the coin, long-term capacity building: “Focus people on real solutioning and it will help focus their efforts. Expose them to different techniques so they can build longterm”.

And in parallel, with the eternal problem of proving the value of KM, Raj Datta (correctly) stated: “exact attribution is like alchemy anyway – consumers of data need to be mature”.

It was already well past the chat’s closing time and, after a handful of final tweets, this first KMers’ page on monitoring / assessing KM was turned.

At any rate it was a useful and fresh experience to moderate this chat and I hope to get at it a second time, probably in April and probably on a sub-set of issues related to this vast topic. So watch the KMers’ space: http://www.kmers.org/chatevents!

Process Notes:

As mentioned earlier in this post, moderating the Twitter chat was a rather uncanny experience. With the machine-gun speed of our group of 25 or so Tweeters, facilitating, synthesising / reformulating and answering others as a participant, all at once, was hectic – and I’m a fast touch typist!

This is how I felt sometimes during the moderation: banging too many instruments at the same time (photo credits: rod_bv)

But beyond the mundane, I think what struck me was this: the KMers group is a very diverse gang of folks from various walks of life, from the US or the rest of the world, from the business perspective or the development cooperation side. This has major implications for the wording that each of us uses – which may not be a given (such as this gaming issue that tripped me up at first) – but also for the kind of approaches we seem to favour, the people we see as the main stumbling block or, on the other hand, as the champions and aspirational forces, and the type of challenges that we are facing… More in a later post about this.

Finally, there is the back-office side of organising such a Twitter event: preparing and framing the discussion, inviting people to check out your framing post, preparing a list of relevant links to share, sharing the correct chat link when the event starts (and sending related instructions to new Tweeters), generating the full chat transcript (using http://wthashtag.com/KMers – thank you @Swanwick 😉), all the way down to this blog post and the infographic summary that I’m still planning to prepare… It’s a whole lot of work, but exciting work, and since web 2.0 follows a ‘share the love / pay it forward’ mentality, why not give back to the community out there? This was my first attempt, and I hope many more will follow…

The full transcript for this KMers twitter chat is available here.

(Im)Proving the value of knowledge work: A KMers chat on monitoring / assessing knowledge management


KMers chat on 16/02/2010 on monitoring/assessing KM

On 16 February 2010, I will be hosting a KMers chat about the topic of ‘monitoring / assessing knowledge management’ (1).

When Johan Lammers (one of the founders of KMers and of WeKnowMore.org) invited KMers (the people, not the platform) to host a discussion, I jumped at the chance. It’s new, it’s fresh, it’s fun, it’s useful: what else can you dream of? And talking about useful discussions, it fitted my work on this topic of monitoring knowledge management very well.

So here you go, if you are interested, this is the pitch for this KMers chat:

Knowledge management is ill-defined but, even more crucially, ill-assessed. The inaccuracy and inadequacy of monitoring (2) approaches for KM has left behind a trail of tensions, heated debates, frustrations and disillusionment. Differing perspectives on the value of KM and on ways to conduct monitoring have further entrenched these reactions.

How to reconcile the expectations of managers / donors on the one hand, and of teams in charge of monitoring knowledge management and clients / beneficiaries on the other? How to combine passion for and belief in knowledge-focused work with business realism and sound management practice?

What are the approaches, methods, tools and metrics that seem to provide a useful perspective on monitoring the intangible assets that KM claims to cherish (and/or manage)? What are promising trends and upcoming hot issues that could turn monitoring of KM into a powerful practice, both to prove the value of knowledge management and to improve KM initiatives?

Join this Twitter chat to hear the buzz and share your perspective…

In this particular KMers chat we will grapple with four key questions, i.e.:

  1. What do you see as the biggest challenge in monitoring KM at the moment?
  2. Who to involve and who to convince when monitoring KM?
  3. What have been useful tools and approaches to monitor KM initiatives?
  4. Where is M&E of KM headed? What are the most promising trends (hot issues) on the horizon?

This discussion ties in closely with a couple of posts on this topic on this blog (see for instance this and that post) and three on IKM-Emergent programme’s The Giraffe blog (see 1, 2 and 3). Simon Hearn, Valerie Brown, Harry Jones and I are on the case.

Back to this KMers’ chat, here is an overview of some of the issues at stake – as I see them:

Fig. 1 The starting model we are using for monitoring KM (credits: S. Hearn)

  • KM is not well defined and the very idea of ‘monitoring’ knowledge (related to the M in KM) is fallacious – this is partly covered in this post. What does this mean in terms of priorities defined behind a KM approach? What is the epistemology (knowledge system) guiding KM work in a given context?
  • KM is often monitored or assessed from the perspective of using intangible assets to create value. Is this the real deal? Perhaps monitoring may look at various dimensions: knowledge processes and initiatives (inputs & activities), intangible assets (outputs), behaviour changes and ultimately valuable results (outcomes and impact). See fig. 1 for a representation of this model.
  • In this, where should we monitor/assess knowledge, knowledge management, knowledge sharing and possibly all knowledge-focused processes – from the knowledge value chain or another reference system?
  • Monitoring is itself a contested practice that is sometimes reduced to the simple focus of ‘progress monitoring’, i.e. establishing the difference between the original plan and reality, to prove whether the plan was accomplished or not. Where is the learning in this? What is more valuable: to prove or to improve? And could we not consider that monitoring of KM should arguably look at other valuable monitoring purposes (like capacity strengthening, self-auditing for transparency, sensitisation, advocacy etc.) (3)?
  • With respect to the different epistemologies and ontologies (world views), isn’t it sensible to explore the different knowledge communities (see slide 8 of Valerie Brown’s presentation on collective social learning) and the expectations of the parties involved in monitoring / assessing KM? After all, the monitoring commissioner, implementer and ultimate beneficiary (client) may have totally different viewpoints on the why, what and how of monitoring KM.
  • If we take it that monitoring moves beyond simple progress monitoring and does not simply rest upon SMART indicators and a shopping basket of meaningless numbers, what are useful approaches – both quantitative and qualitative – that can help us understand the four dimensions of KM monitoring mentioned above, with due consideration for the context of our knowledge activities?
  • And finally, what can we expect the future pointers of this discussion to be? I am thinking here both in terms of broadening the conceptual debate, looking at promising new approaches (such as the semantic web and its possibilities for mapping contextualised information, Dave Snowden’s SenseMaker, or Rick Davies’s most recent work building on his Most Significant Change method), and in terms of developing a more practical approach to make sense of knowledge and to support the work of KMers (us), our patrons, our partners and our beneficiaries / clients.
  • Do you have case studies or stories about the issues sketched above?

Hopefully, further down the line, we may have a clearer idea as to turning what is too often a costly and tiresome exercise into an exciting opportunity to prove the value of knowledge-focused work and to improve our practices around it…

If you are interested in this topic or want to find out more about KMers’ chats, please check in on 16 February and join the chat; oh, and spread the word!

Notes:

(1)    KMers is an initiative that was started in late 2009 and has already generated a few excellent discussions (the last one was about knowledge for innovation), usually hosted on Tuesdays around 18:00 CET (Central European Time). The chats are Twitter-based and always involve a group of dedicated KM heads who are really passionate and savvy about the broad topic of knowledge management.

(2)    By monitoring we mean here the ‘follow-up of the implementation of programme activities AND periodic assessment of the relevance, performance, efficiency and impact of a piece of work with respect to its stated objectives’, as regularly carried out in the development sector. In this respect we include the purposes of evaluation in monitoring as well. In the corporate world I guess you would translate this as regular assessment. Monitoring / assessment may happen by means of measurement and other methods.

(3)    A forthcoming IKM-E paper by Joitske Hulsebosch, Sibrenne Wagenaar and Mark Turpin refers to the nine different purposes for monitoring that Irene Guijt proposed in her PhD thesis ‘Seeking Surprise’ (2008). These purposes are: financial accountability, operational improvement, strategic readjustment, capacity strengthening, contextual understanding, deepening understanding, self-auditing, advocacy and sensitisation.

M&E of KM: the phoenix of KM is showing its head again – how to tackle it?


I’ve started working on a summary of two papers commissioned by the IKM-Emergent programme to unpack the delicate topic of monitoring (and evaluation) of knowledge management (1). This could be just about the driest, un-sexiest topic related to KM. Yet, it seems precisely one of the most popular topics and one that keeps resurfacing on a regular basis.

On the KM4DEV community alone, since the beginning of 2009, nine discussions (2) have focused on various aspects of monitoring of knowledge management, some of them generating traffic of over 30 emails! Are we masochistic? Or just thirsty for more questions?

M&E the phoenix of KM? (photo credits: Onion)

Anyway, this summary piece of work is a good opportunity to delve again into the buzz, basics, bells and whistles of monitoring knowledge management (as in the practice of observing / assessing / learning inherent to both M and E, rather than the different conditions in which M or E generally occur).

In attempting to monitor knowledge and/or knowledge management, one can look at an incredible amount of issues. This is probably the reason why there is so much confusion and questioning around this topic (see this good blog post by Kim Sbarcea of ‘ThinkingShift’, highlighting some of these challenges and confusion).

In this starting work – luckily supported by colleagues from the IKM working group 3 – I am trying to tidy things up a bit and to come up with a kind of framework that helps us understand the various approaches to M&E of KM (in development) and the gaps in this. I would like to introduce here a very preliminary half-baked framework that consists of:

  • Components,
  • Levels,
  • Perspectives.

And I would love to hear your views on these, to improve this if it makes sense, or to stop me at once if this is utter gibberish.

First, there could be various components to look at as items to monitor. These items could be influenced by a certain strategic direction or could happen in a completely ad hoc manner – a sort of pre-put. The items themselves could be roughly sorted as inputs, throughputs or outputs (understood here as results of the former two):

Pre-put

  • None (purely ad hoc)
  • Intent or objective
  • Structured assessment of needs (e.g. baseline / benchmarking)
  • Strategy (overall and KM-focused)

Input (resources and starting point)

  • People (capacities and values)
  • Culture (shared values)
  • Leadership
  • Environment
  • Systems to be used
  • Money / budget

Throughput (work processes & activities)

  • Methods / approaches followed to work on KM objectives
  • (Co-)creation of knowledge artefacts
  • Use of information systems
  • Relationships involved
  • Development of a learning / innovation space
  • Attitudes displayed by actors involved or concerned
  • Rules, regulations, governance of KM

Output (results)

  • Creation of products & services
  • Appreciation of products & services
  • Use / application of products & services
  • Behaviour changes: doing different things, doing things differently or with a different attitude
  • Application of learning (learning is fed back into the system)
  • Reinforcement of capacities

All these components are then affected by the various levels at which a KM intervention (or strategy) is monitored, which could be:

  • Individual level;
  • Team level;
  • Organisational level;
  • Inter-organisational level, i.e. communities of practice, multi-stakeholder processes, potentially verging on the sectoral level – though with the problem of defining ‘a sector’;
  • Societal level, affecting a society entirely.

Different levels at which M&E of KM could take place

And then of course comes perhaps the most crucial – yet implicit – element: the worldview that motivates the approach that will be followed with monitoring of knowledge management.

Because this is often an implicit aspect of knowledge-focused activities, it is largely a grey area in the way knowledge management is monitored. Yet on a spectrum of grey shades I would distinguish three world views that lead to three types of approaches to monitoring of knowledge (management). These approaches can potentially be combined in innumerable ways. The three strands would be:

  1. Linear approaches to monitoring of KM with a genuine belief in cause and effect and planned intervention;
  2. Pragmatic approaches to monitoring of KM, promoting trial and error and a mixed attention to planning and observing. I would argue this is perhaps the dominant model in the development sector, judging from the literature available anyhow (more on this soon).
  3. Emergent approaches to M&E of KM, stressing natural combinations of factors, relational and contextual elements, conversations and transformations.

In the comparative overview below I have tried to sketch the differences between the three groups as I see them now, even though I am not convinced that the third category in particular gives a convincing and consistent picture.

Linear approaches to M&E of KM

  • Attitude towards monitoring: measuring to prove
  • Logic: what you planned → what you did → what is the difference?
  • Chain of key elements: inputs – activities – outputs – outcomes – impact
  • Key question: how well?
  • Outcome expected: efficiency
  • Key approach: logical framework and planning
  • Attitude towards knowledge: capture and store knowledge (stock)
  • Component focus: information systems and their delivery
  • I, K or…? What matters: information
  • Starting point of the monitoring cycle: expect as planned
  • End point of the monitoring cycle: readjust the same elements to the sharpest measure (single-loop learning)

Pragmatic approaches to M&E of KM

  • Attitude towards monitoring: learning to improve
  • Logic: what you need → what you do → what comes out?
  • Chain of key elements: activities – outcomes – reflections
  • Key question: what then?
  • Outcome expected: effectiveness
  • Key approach: trial and error
  • Attitude towards knowledge: share knowledge (flow)
  • Component focus: knowledge sharing approaches / processes
  • I, K or…? What matters: knowledge and learning
  • Starting point of the monitoring cycle: plan and see what happens
  • End point of the monitoring cycle: readjust different elements depending on what is most relevant (double-loop learning)

Emergent approaches to M&E of KM

  • Attitude towards monitoring: letting go of control to explore natural relations and context
  • Logic: what you do → how and with whom you do it → what comes out?
  • Chain of key elements: conversations – co-creations – innovations – transformations – capacities and attitudes
  • Key question: why, what and how?
  • Outcome expected: emergence
  • Key approach: experimentation and discourse
  • Attitude towards knowledge: co-create knowledge and apply it to a specific context
  • Component focus: discussions and their transformative potential
  • I, K or…? What matters: innovation, relevance and wisdom
  • Starting point of the monitoring cycle: let it be and learn from it
  • End point of the monitoring cycle: keep exploring to make more sense, explore your own learning logic (triple-loop learning)

The very practical issue of budgeting does not come in the picture here but it definitely influences the M&E approach chosen and the intensity of M&E activities.

Aside from all these factors, there are of course many challenges plaguing the effective practice of monitoring knowledge management, but perhaps this framework offers a more comprehensive approach to M&E of KM?

Again, I am inviting you to improve this half-baked cake or to reject it as plainly indigestible. So feel free to shoot!

Notes:

(1)    Knowledge management is understood here as “encompassing any processes and practices concerned with the creation, acquisition, capture, sharing and use of knowledge, skills and expertise (Quintas et al., 1996), whether these are explicitly labelled as ‘KM’ or not (Swan et al., 1999)”. This definition is extracted from the first IKM-Emergent working paper. Even though I don’t entirely agree with this definition, let’s consider that it creates enough clarity for the sake of understanding this blog post.

(2)    Previous discussions related to M&E of KM on KM4DEV:

  • Managing community of practice: creative entrepreneurs (22/11/2009) with a specific message on the impact of communities of practice
  • Value and impact of KS & collaboration (11/10/2009)
  • Evaluation of KM and IL at SDC (08/07/2009)
  • KM self-assessment (18/03/2009)
  • Organisational learning indicators (13/12/2009)
  • Monitoring and evaluating online information (05/02/2009)
  • Monitoring and evaluating online information portals (03/02/2009)
  • Evaluation of KM processes (30/01/2009)
  • Evidence of sector learning leading to enhanced capacities and performances (05/01/2009)

Network monitoring & evaluation: Taking stock


Another stock-taking post on the collection of network M&E resources (Photo credits: Hooverdust)

It was about time to prepare another of those stock-taking blog posts, don’t you think?

This time the topic is monitoring and evaluation (M&E) of networks, among other reasons because a number of networks I am involved in will need to develop a solid M&E framework for themselves and for their respective donors, so this post could help come up with a better approach. And, who knows, perhaps you will also find something useful in there. If this is all rubbish, please put me out of my misery and help me read some quality references on the topic, ok?

When it comes to M&E of networks, documents are a lot more scattered than for the capacity development stock-taking post I wrote earlier. And to spice things up, on Google there are a hell of a lot of misleading resources pointing to LAN/WAN network monitoring – clearly the web is still the stronghold of a self-serving (IT) community.

Fair enough! But luckily there are also relevant resources among my documents, of which I would like to mention:

Guides, tools and methods for evaluating networks (direct link to a Word document)

(Amy Etherington – 2005)

As the title indicates, this paper focuses on evaluation rather than monitoring of networks – as a means for networks to remain relevant and adapt if need be. Three major considerations are taken into account here:

  • measuring intangible assets (related to characteristics of networks such as social arrangements, adding value, creating forums for social exchange and joint opportunities);
  • issues of attribution (linked to issues of geographic and asynchronous complexity of networks, joint execution of activities, broad and long term goals of networks);
  • looking at internal processes: the very nature of networks renders internal processes – of mobilisation, interlinking, value-adding – very interesting. The further effects of the network on each individual member are also useful to look into.

Then follows a selection of nine evaluation methods (all dating from 1999 to 2005, though), very well documented, including checklists of questions, tables with dimensions of networks, interesting (or sometimes scary) models, and innumerable steps referring to various maturity stages of communities. This seems to be one of the most relevant references for finding practical methods to tackle network M&E.

Evaluating International Social Change Networks: A Conceptual Framework for a Participatory Approach (PDF)

(Ricardo Wilson-Grau and Martha Nuñez – 2006)

Among the most influential authors on the topic of M&E and networks, Wilson-Grau and Nuñez have written many of the documents referred to in the other papers mentioned here. This paper – which also focuses on the evaluation of networks – introduces the eight or so functions that networks perform and considers four qualities and three operational dimensions. The result is a table of 56 criteria – shaped as questions – which ought to be answered by members of the network, with a careful eye for the justification behind each criterion, because each network is different. The authors continue with the four types of achievements one can hope for from social change networks: operational outputs, organic outcomes, political outcomes (judged most useful by the authors themselves) and impact. Again the table is of great help, and this document is a useful introduction to the authors’ body of work.

A Strategic Evaluation of IDRC-Support to Networks (Word)

(Sarah Earl – 2004)

Epitomising the long term experience of the Canadian International Development Research Centre (IDRC) with monitoring and evaluation of networks, Sarah Earl presents, in this seven-page briefing note, a questioning process to evaluate the function of IDRC in supporting networks. In doing so, she stresses a series of questions pertaining to the coordination, sustainability and intended results / development outcomes of networks. She further explains the methodology used (literature review, key informant interviews and electronic survey of network coordinators, lesson learning sessions leading to writing stories from IDRC staff). This paper can be useful for actually setting up a methodology to collect evidence about the functioning of a network.

Network evaluation paper (Word)

(June Holley – 2007)

June Holley has been working on economic networks for over 20 years. This five-page paper introduces a method that focuses on network maps and metrics, network indicators and outcomes. The paper suggests using scores and looking at awareness (of the network as a whole), influence, connectors, integration, resilience, diversity and core/periphery.

Network mapping and core-periphery (Image credits: Ross Dawson)

In terms of indicators, Ms. Holley recommends a series of questions that point to the self-organising and outcome-producing characteristics of the network, but also to questions of culture (as in shared norms and values) and to evidence of skills that allow the network to change.
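For readers who want to experiment with the map-and-metrics side of this approach, here is a minimal sketch using the Python networkx library. The metric choices – density, betweenness centrality and k-core decomposition – are my own stand-ins for the kind of scores Holley describes (awareness, connectors, core/periphery), not her actual method:

```python
# A rough sketch of network map metrics, using standard social network analysis
# measures as stand-ins for Holley's scores. Requires: pip install networkx
import networkx as nx

# Hypothetical network: edges are working relationships between members.
G = nx.Graph([
    ("Amina", "Ben"), ("Ben", "Carla"), ("Carla", "Amina"),
    ("Carla", "Dev"), ("Dev", "Esther"), ("Esther", "Farid"),
])

# Density as a crude proxy for overall integration/awareness of the network.
print("density:", round(nx.density(G), 2))

# Betweenness centrality highlights connectors who bridge parts of the network.
betweenness = nx.betweenness_centrality(G)
connectors = sorted(betweenness, key=betweenness.get, reverse=True)[:2]
print("likely connectors:", connectors)

# k-core decomposition separates a densely linked core from the periphery.
core_numbers = nx.core_number(G)
k_max = max(core_numbers.values())
print("core members:", [n for n, k in core_numbers.items() if k == k_max])
```

On a real network, the edge list would of course come from survey or interaction data rather than being typed in by hand.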

There are more (*) papers specifically focused on networks and their evaluation, but I found them less relevant, mostly because they are a bit dated.

Of course there are many other references on monitoring and evaluation in publications and resource sites about networks. Here is another, shorter selection:

While on the topic of network M&E and its link with the specific monitoring of knowledge management, I would like to point to the summary of a discussion that took place in 2008 on the KM4Dev mailing list on the topic of M&E of KM: http://wiki.km4dev.org/wiki/index.php/Impact_and_M%26E_of_KM. This topic will probably remain interesting: it has been explored various times on the KM4Dev mailing list, it was recently touched upon in the francophone KM4Dev CoP SA-GE, and it is likely to reappear as a topic of choice in 2010 on various platforms, not least because IKM-Emergent is planning to work more on this issue after having released the first of two commissioned papers on M&E of KM (this working paper on monitoring and evaluation of knowledge was written by Serafin Talisayon). I will certainly report on this in the coming weeks and months.

As ever with this series of stock-taking posts, I will try to keep this overview updated with any other interesting resources I get my hands on. So feel free to enlighten me with additional resources that go deeper, offer synthetic clarity or provide a refreshing perspective on the topic of network monitoring. What has worked for you in your work with networks? What have you found to be useful ways to measure their effectiveness and other dimensions? What would be your words of caution when assessing networks?

Networks are here to stay for a while, so this discussion goes on…

(*)

I came across a number of other papers that all have something to say but are somewhat dated, so I decided not to reference them here.

Related posts:

Capacity development: Taking stock


(This is potentially the first of a series of stock-taking posts about inspiring literature on topics I blog about – the series will start if you find this interesting, so please let me know).

Recently I met all the staff of the Water Integrity Network (WIN), which works for more integrity and transparency and against corruption in the WASH sector by organising coalitions of institutions and individuals that cooperate, share useful ideas, resources and tools, and join hands in this fight.

The starting point: a capacity development workshop by WIN (26-27 May)

On 26 and 27 May, WIN will be organising a workshop on capacity building in order to define its priorities for the years to come and to develop a strategy in line with those priorities. Meeting the person in charge of organising this workshop and exchanging ideas by email and face-to-face gave me a nice opportunity to take stock of some good articles and papers I have read about this concept.

The following list represents an attempt at mentioning and briefly describing the contents of some of the reads I found most inspiring on the topic of capacity development. This is by no means an exhaustive list, so feel free to suggest your own inspired reads.

Many of these articles have been written by, or inspired by, Peter Morgan (a private consultant as far as I can tell; in a brief search I wasn't able to find the right Peter Morgan out of the 40 or so Peter Morgans on LinkedIn alone).

Capacity and capacity development – some strategies

(Peter Morgan – 1998)

The oldest reference among these papers, this article is interesting because a) it provides some pointers to define capacity development (the processes and strategies), capacity (organisational and technical abilities, relationships and values) and impact (developmental benefits and results), and b) it considers various ‘capacity development’ strategies that have been employed, namely:

  • supplying additional and physical resources;
  • helping to improve the organisational and technical capabilities of the organisation;
  • helping to set a clear strategic direction;
  • protecting innovation and providing opportunities for experimentation and learning;
  • helping to strengthen the bigger organisational system;
  • helping to shape an enabling environment;
  • creating more performance incentives and pressures.

The article ends with a series of questions addressing the strategic value of capacity development and with operational recommendations to make it work.

What is capacity?

(Peter Morgan – 2006) – this is a direct link to the paper in PDF format.

This paper is firstly valuable for pointing to the lack of a clear and agreed definition of capacity development – that ‘missing link’ in development according to the World Bank – and particularly to its common confusion with (individual) training. As a result, capacity development becomes an umbrella concept devoid of any useful meaning. The second contribution of this paper is to single out five central characteristics of capacity development: 1) it’s about empowerment and identity, 2) it has to do with collective ability, 3) it is a systems phenomenon, 4) it is a potential state and 5) it’s about creating public value. A third pointer is the distinction between individual competencies, organisational capabilities and institutional / systemic capacity. Peter Morgan then focuses on the meso level (organisations and their capabilities) to extract five core capabilities:

  1. The capability to act: having a collective ability to define a vision and an agenda and implement it (related to leadership, human resources etc.);

    The 5 capabilities’ framework (Credits: ECDPM)

  2. The capability to generate development results: the thematic and technical capabilities that lead to results (outputs, outcomes), which is usually the central focus of capacity development – though the author argues it is in the combination of all five that capacity development becomes meaningful and effective.
  3. The capability to relate: connecting to other actors relevant in the field where an organisation is evolving; this relates to working on the exhausted (or rather over-used) concept of ‘enabling environment’ but also on power struggles and political intrigue in a sometimes seemingly uncompetitive sector (how wrong!).
  4. The capability to adapt and self-renew: learning, innovating, adapting to changing environments or pre-empting changes;
  5. The capability to achieve coherence: maintaining a focus while using all separate resources to the fullest of their abilities. This is a major challenge with the growing recognition of complex and intricate relations among development actors.

Finally, the author opens the debate as to whether capacity is a means to an end or an end in itself.

A balanced approach to monitoring and evaluating capacity and performance

(Paul Engel, Tony Land, Niels Keijzer – 2006) – this is a direct link to the paper in PDF format.

This paper is very much in line with the previous one, but it lists a number of useful questions to assess capacity and performance and provides a five-step approach to develop the assessment framework. These five steps are: 1) situational reconnaissance and stakeholder analysis; 2) calibration of the assessment framework; 3) implementation; 4) review of the draft results with key stakeholders; and 5) sharing the assessment report with the full range of stakeholders.

Capacity for a change

(Peter Taylor, Peter Clarke – 2008)

The report from a workshop that IDS organised in 2007, this excellent resource is probably the reason why I have been thinking a lot more about capacity development (CD) recently. The 26 participants provided outstanding matter for reflection, which led the authors to analyse the current situation of capacity development interventions, re-imagine CD processes and suggest ways forward.

The paper is a useful resource for its facts (e.g. figures on public expenditure on CD); for its evidence about the importance of knowledge and learning, power relations, good theories of social change, and the relations between intervention agents rather than just results – and perhaps above all else the importance of the local context (here we go again!); and finally for its recommendations to address capacity development systemically.

In the forward-looking part, the authors recommend considering five useful pointers for CD interventions:

  • Empowering relationships – having that empowerment perspective at the core;
  • Rallying ideas – favouring a clear language that comes from joint reflection;
  • Dynamic agents – recognising the importance of local champions to take things forward;
  • Framing and shaping context – favouring a flexible design through interaction with the local context;
  • Grounding enabling knowledge / skills – working on abilities to understand and interact with one another.

The report ends with some suggestions for donors, research institutes, service providers and practitioners at large to do their share and improve CD interventions. Last but not least, the bibliography actually provides enough references for me to write another blog post…

Capacity development: between planned interventions and emergent processes. Implications for development cooperation

(Tony Land, Volker Hauck and Heather Baser – 2009) – this is a direct link to the paper in PDF format.

The most recent resource on the list, this policy management brief by ECDPM posits that complexity theories – particularly the notions of emergence and ‘complex adaptive systems’ – provide a welcome contribution to unpacking capacity development. The authors consider capacity as an emergent property that cannot be ‘engineered’ by organisations (even less so by external agencies, often Northern-based, I would argue). Their assessment is that the forces around organisations and capacities are sometimes far greater than the organisations themselves, and it is therefore important to map those forces to understand better what may play a role in the success of an intervention (hence, I would add, the importance of carrying out a kind of ‘forcefield analysis‘). The brief continues with a comparison between ‘conventional’ (engineering-like, pre-determined, risk-averse) approaches to capacity development and approaches inspired by emergence and complex adaptive systems.

Emergence and iteration: two key factors of capacity?

One interesting aspect of this brief is the identification of 12 pointers that may help in organising capacity development interventions. The authors are cautious enough to warn against the chase for a silver bullet (in this case ‘complex adaptive systems’) but advise considering the pointers to develop incremental approaches that reconcile intervention engineering (the current practice) with emergence.

As mentioned above, this is no exhaustive list, so what references have you found useful on the topic?

If you think it’s useful to publish such ‘stock-taking’ blog posts in the future, on capacity development or other topics, let me know (and on what topics).

To find all these resources in one place check my online bookmarks on capacity development: http://delicious.com/ewenirc/capacity_development.

Related posts:

G(r)o(w)ing organically and the future of monitoring


In the past three weeks I have been working quite a lot on monitoring again, one of my focus areas (together with knowledge management/learning and communications): processing and analysing the results of RiPPLE monitoring for the first time, developing the WASHCost monitoring and learning framework, and generally thinking about how to improve monitoring, in line with the recent interest in impact assessment (IRC is about to launch a thematic overview paper about this), complexity theory and even the general networks / learning alliance angle.

Monitoring growing organically

I think monitoring is going and growing the right way – following an organic development curve – and for me it is one of the areas that can really improve in the future, which perhaps explains the current enthusiasm for impact assessments. As mentioned in a previous blog post, I think the work we carry out on process documentation will later be integrated into monitoring – an intelligent way to monitor, which makes sense for donors, implementers (of a given initiative) and beneficiaries.

So what would – or could – be the characteristics of good monitoring in the future? I can come up with the following:

Integrated: in many cases, monitoring is a separate activity from the rest of the intervention, giving an impression of additional work with no added value. But if monitoring were linked with intervention activities – particularly planning and reporting – it would help a lot and seem much more useful. In the work on the WASHCost monitoring and learning framework, the key was to anchor M&L in the ongoing reporting exercise, and it worked wonders. In addition, monitoring should also be linked with (mid-term and final) evaluations so that the evaluation team – usually external to the project – can come up with a more consistent methodology while keeping its distance and a certain degree of objectivity. Evaluations are a different matter and I’m not explicitly dealing with them here, even though they share a number of points with monitoring.

Informed: if monitoring is integrated with planning, there should be an analysis, before the project intervention, of the issue at hand and the potential best area of intervention. In line with this, a baseline should be established for the processes and outputs that will be monitored (the sketch after this list illustrates what such a baseline record could look like). This helps prepare monitoring activities that make sense and interventions that really focus on improving what doesn’t work (but could help tremendously if it did);

Conscious: about what is at stake and therefore what should be monitored. The intervention should be guided by a certain vision of development, a certain ‘hypothesis of change’ that probably includes a focus on behaviour changes by certain actors, on some systems, processes and products/services, and more generally on the system as a whole in which the development intervention is taking place. Such a conscious approach would avoid focusing exclusively on hardware aspects (how many systems were built) or exclusively on software issues (how much the municipality and private contractors now love each other);

Transparent and as objective as possible: now that’s a tricky one. But a rule of thumb is that good monitoring should be carried out with the intention to report to donors (upward accountability) and to intended beneficiaries (downward accountability) – this guarantees some degree of transparency – and should be partly carried out by external parties to ensure a more objective take on monitoring (with no bias towards only positive changes). Current attempts to involve journalists in monitoring development projects are a sound way forward, and many more options exist.

Versatile: because monitoring should focus on a number of specific areas, it shouldn’t just use quantitative or qualitative approaches and tools but a mixture of the two. This would make monitoring more acceptable (think of the accountability vs. learning discussion, for instance) and would provide a good way to triangulate monitoring results, ensuring more objectivity in turn.

Inclusive: if monitoring includes external parties, it should focus on establishing a common understanding, a common vision, of what is required to monitor the intervention, and it should also involve training activities for those who will be monitoring it. Monitoring should thus include activities for communities as well as for donors; it should bring them together and persuade them that they all have a role to play in proving the value of the intervention and, especially, in improving it.

Flexible: a project intervention rarely follows the course it originally intended to follow… equally, monitoring should remain flexible enough to adapt to the evolution of the intervention – in its design, in the areas that are monitored and in the methods that help monitor those specific areas. That is the value of process documentation and of e.g. the Most Significant Change approach: revealing deeper patterns that have a bearing on the intervention but were not identified or recognised as important.

Long-term: assuming that development is, among other things, about behavioural and social changes, these changes are long-term; they don’t happen overnight. Consequently, monitoring should also take a long-term perspective and indeed build in ex-post evaluations to revisit intervention sites and see what the later outcomes of a given intervention are.

Finally, and with all of the above said, monitoring would gain from being simpler: planned according to what is necessary to monitor and what is merely good to monitor, in line with existing resources, and perhaps following a certain donor’s perspective: to monitor only what is necessary.
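As promised in the ‘informed’ point above, here is a minimal sketch – in Python, with every name and value a hypothetical illustration of mine rather than any existing framework – of a monitoring record that combines a baseline with mixed quantitative and qualitative evidence:

```python
# A hypothetical monitoring record: baseline plus mixed evidence for triangulation.
from dataclasses import dataclass, field

@dataclass
class Indicator:
    name: str
    baseline: float                                           # measured before the intervention
    target: float                                             # what the intervention aims for
    quantitative: list[float] = field(default_factory=list)   # survey scores, counts, ...
    qualitative: list[str] = field(default_factory=list)      # stories, observations, ...

    def on_track(self) -> bool:
        # Crude triangulation: progress counts only if the latest number is closer
        # to the target than the baseline AND some qualitative evidence backs it up.
        if not self.quantitative or not self.qualitative:
            return False
        return abs(self.quantitative[-1] - self.target) < abs(self.baseline - self.target)

water_points = Indicator("functioning water points (%)", baseline=40.0, target=75.0)
water_points.quantitative += [48.0, 57.0]
water_points.qualitative.append("The committee now collects maintenance fees monthly.")
print(water_points.on_track())  # True: the numbers moved and a story backs them up
```

The point is not the code itself but the design choice it embodies: quantitative and qualitative evidence live side by side against one baseline, so neither can claim progress on its own.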

Hopefully that kind of monitoring will not feel like an intrusion by external parties into the way people carry out their job, nor like just an additional burden to carry without expecting anything in return. Hopefully that kind of monitoring will put the emphasis on learning, on getting value for the action, and on connecting people to improve the way development work is done.
