Learning-blind development (aid) and the missed opportunities for a real difference


Failure's freeway, the road followed by most development initiatives, occasionally dimmed by learning though (Credits: StormKatt / FlickR)


The global development sector is a learning universe, a space of experimentation and failures. One can read this positively – as in “a lot of learning is happening in it” – or negatively – as in “a lot has to be learned still and so much effort goes to waste”. Are we blind to learning in development (aid)?

Thing is: this situation is totally systemic. And since we are only now realising how all aspects of development are connected (e.g. roads and other infrastructure allow better access to markets; access to water allows improvements in agriculture and, further down the line, in education etc.), a lot of resources have indeed been wasted reinventing the wheel in development (aid) – resources which could have been used differently.

At a moment when public scrutiny of spending on development aid is ever more alert and leads to budget cuts (which is a good thing, since it forces everyone to cooperate over scarcer resources), the imperative for learning has never been more important – something which is fortunately happening, somehow, even between regions (e.g. between Africa and the Pacific).

But a lot remains to be done still. It is not always obvious how learning can really improve development (aid), and the costs of learning (i.e. the investment in it – and I’ll come back to this in a later post) can seem much steeper than the benefits. But the cost of not learning is quite obvious.

So now, let’s look at a typically bad – and alas too frequent – development (research) project and its missed opportunities for learning; and let’s compare this with an ideal project which pays much attention to learning throughout:

Below, phase by phase, is what happens too often with typical / bad development projects, contrasted with what ideal (learning-focused) development initiatives would look like.

Preparation

Typical / bad: The project usually has overloaded ambitions for an unreasonably short project period – it is not realistic and not designed in the light of previous experiences.

Ideal: Upfront, there have been extensive consultations with key parties – current and future partners – to determine the agenda and the ideal duration of the initiative. Literature research and scoping of past experiences are also instrumental in building this new initiative upon past legacy.

Initial phases

Typical / bad: The new project or program is meant to start delivering almost as soon as it starts – no consideration has been paid to the time it takes to build meaning, trust and abilities.

Ideal: The new initiative spends considerable time (a flexible inception period) collecting insights to refine and work further on ownership, capacities, relationships and better plans.
Typical / bad: The project only addresses ‘what has to be done’ and assumes that the people involved can just get on with it. The initial briefing is only about what the project aims to achieve, its organogram, reporting lines etc. – all bureaucratic information.

Ideal: Much consideration goes into ‘why are we doing this?’, as well as ‘how are we going to handle it?’ and ‘do we have the right capacities, or should we invest in our capacities or in extra capacity?’. The initial briefing has therefore taken stock of the capacity gaps among staff and partners, and addresses simple things like working with Word/Excel, social media etc., as well as the concepts that matter for the initiative – the why and the how.

Who’s driving the project

Typical / bad: External parties are driving the whole agenda and exploring new (thematic and geographic) areas.

Ideal: Endogenous parties (and perspectives) are in the driving seat. They have been selected for their mandate and capacity, network and other assets to sustain the initiative in its context(s) – all of which is known because there has been proper reflection and consultation with them at the onset.
Typical / bad: A small team organises all activities for everybody. Occasionally some ad hoc team meetings are held, which help the central team pass on information to everyone else.

Ideal: A small team facilitates implementation by other teams. It puts emphasis on holding regular team meetings where real two-way conversations take place, with proper documentation of key discussion points and jointly agreed decisions.

Running activities and events

Typical / bad: Activities are implemented by a small team working in isolation from other teams – they are too busy ‘firefighting’ to share anything with anyone else.

Ideal: Every opportunity is seized to see whether it makes sense to follow a social learning approach, putting the emphasis on ‘genuine’ participation. And the teams take time to find alternative solutions if they see that they end up ‘firefighting’ all the time.
Typical / bad: Activities follow the plans, because the plans dictate what has to be done according to the donors, from when they granted the money.

Ideal: Activities follow the outcome logic and theory of change, but they are regularly revised along the way, in line with changes – and this has been agreed with the donors, who would rather see a more effective, deviating initiative than a useless original plan.
Typical / bad: Events organised during the project are scarce, and when they happen they consist of death-by-PowerPoint sessions, are ill-documented and quickly forgotten about – everyone goes back to ‘business as usual’. There is typically a ‘big bang’ kick-off event with lots of money and the presence of national media, sometimes followed only by a major closing event, with not much in between.

Ideal: The initiative is all about learning and engagement, so it offers many opportunities and contextual events for the people involved to come together, reflect, ask questions, take decisions, follow up with actions, and revise activities and plans. Engagement means constant interaction with key influencers and movers, not just at the onset and sunset. All events that take place are properly facilitated to ensure learning is maximised – and are well documented in accessible and compelling formats, for future reference and action monitoring.

Involvement and engagement

Typical / bad: Working with partners means that partners carry out some activities, either on their own but under very close ‘big brother’-like supervision, or totally separated from the rest of the project. The interaction is about executing a plan and reporting on it – all that matters are the results. For project staff, partnership means ‘more reports, more work, fewer results’.

Ideal: Working with partners builds upon the trust developed in the pre-project and early stages. Everyone shares insights and regularly engages in joint analysis, which means many opportunities to do things differently, to do different things, to learn differently (three learning loops) and to develop everyone’s capacity – a good set of assets for future initiatives too! Here, partnership means more ideas, more capacity, more energy, better quality learning, better results, better relationships: SYNERGY!
Typical / bad: When conducting ‘field activities’, local community members are invited to respond to a (sometimes excruciatingly long) predefined questionnaire. They may never see the results. But that’s OK: since they’re project beneficiaries, they will benefit one way or another, won’t they? (It’s just not very clear how ;) )

Ideal: Field activities are guided by a certain ethics of engagement and are participatory in design and in practice. They are developed jointly with the local people involved, so results are automatically shared, visions for the future are elaborated collectively, and plans are adjusted together, starting from different world views.
Typical / bad: High-level engagement consists of developing a few outputs at the end of the project and sending them to a mixed group of important decision-makers, hoping they will read and apply them.

Ideal: High-level engagement – which also contributes to development outcomes – means that cherry-picked decision-makers have been involved in the process from the start, own the process and its results (perhaps they have been involved in action research themselves) and become the best advocates of the initiative’s work.
Typical / bad: People involved in the project feel isolated and detached from the project and from each other. They don’t look critically at options to improve the situation for themselves and the whole group.

Ideal: People in the project feel energised, involved, concerned and motivated. They all apply ‘personal knowledge management’ (PKM) to some extent, so they improve personally and connect their personal sphere and network with the initiative, to question and improve it and to sharpen critical thinking. They are encouraged to reflect both on their own and collectively.

Capacity development

Typical / bad: After the initial briefing, whatever capacity development happens is training, conducted by external trainers, addressing general skills rather than the specific contextual issues that project people are actually facing – and that is assuming anything is planned to address capacity gaps at all.

Ideal: Everyone’s capacity is positively monitored (followed up), and this leads to several activities over the course of the initiative, moving training from the theoretical terrain to workplace experience, and moving from just training to a whole set of capacity development activities (coaching, exchange visits, involving people in communities of practice etc.) that generally focus on improving the institutional capacity for change.

Communication and outputs

Typical / bad: The website and other communication channels are mostly unidirectional (‘here’s what we have to say to you’) and not well connected. Staff and partners deplore that so little communication is taking place – but they’re not doing much to change this.

Ideal: The different communication channels are interrelated and engaging (they feature dialogues, consultations etc.), and although they perhaps look slightly messier, they echo and amplify what the initiative is trying to accomplish, through multiple engagement routes. Everyone contributes to communication efforts.

Typical / bad: The outputs developed by the project are released at the end, without much passion – more ‘according to plan’ – and do not really inform other activities. Occasionally they are promoted, e.g. on the website, as standalone ‘results’.

Ideal: A variety of outputs are released throughout the project (away from one-off ‘dotted’ communication), mirroring the different reflections, conversations and actions that have taken place, by different people at different times and locations, about the thematic focus of the initiative as well as about the process leading to its development. They are connected, refer to each other and, crucially, are used (both their content and the process of developing them) for further engagement, reflection and action by the parties that are supposed to use them as opportunities and levers of change.

Monitoring and evaluation

Typical / bad: Monitoring and evaluation boils down to the bare minimum of reporting. It is centralised but requires partners and outputs to provide a lot of (random?) facts, without background information as to why they matter. No one knows what happens to those M&E reports anyway; probably they’re never read.

Ideal: M&E organically addresses accountability, in an ongoing dialogue with the donor, but it also addresses learning needs, to identify and address systemic gaps around the initiative’s objectives (and inform future initiatives). Everyone contributes to M&E, although in practice M&E and project requirements are often the same, because they were integrated at the onset. Reports are just the (process) documentation of the conversations that happen in the initiative.

What happens at the end of the project

Typical / bad: The final project report is a sort of ‘annual report’ with some results but little passion or curiosity. It is only shared with the donor. (Annual reports are an excellent measure of learning in organisations, by the way.) Too bad the website will never be updated – some people might think the project is still going on (luckily that horrible project is finally over, though!).

Ideal: The final project output is an interactive set of multimedia resources addressing different audiences, providing practical tools and guidance on approaches, in a variety of formats, distributed to all parties involved in the initiative, and backed by an interactive event that looks forward and builds on previous conversations. All communication channels are also geared for the ‘post-initiative’ stage.
Typical / bad: At the end of the project, it remains unclear what will happen with the people involved (staff, partners, beneficiaries), with the outputs (where will they be made accessible?) and with the lessons gathered from the project – but at that stage, is there anything worth saving from that horrible project, other than lessons about doing things differently next time?

Ideal: At the end of the initiative, many options are on the table, because there has been a thorough conversation throughout about sustainability, exit strategies etc., so everyone knows what they can do and has activated their networks to make it happen. The initiative’s outputs are all openly accessible in a sustainable database, and the many lessons from the initiative have informed the future work of the many parties involved – institutional memory across projects is taken care of.

Learning and sharing - the essence of smart development work


It’s always dangerous to use such caricatures, as it suggests they might refer to reality. They do not, of course, and most development initiatives sit somewhere on a continuum between situation A (the horrible project) and situation B (the ideal learning initiative). But clearly there are many opportunities for learning in development, so let’s focus on what is being learned and use it to learn even more, rather than despair at all that remains to be learned while ignoring the legacy of the past…

Echoing, here, the man of the month (year, decade, century?):

Do not judge me by my successes, judge me by how many times I fell down and got back up again (Nelson Mandela).

Does this assessment ring a bell? Does it resonate with your experience of what is happening with development aid or not? What other options do you see?


Social web metrics: between the cracks of evidence and confidence


Assessing knowledge work is back on my menu for this year and I need to start somewhere simple(r): social web metrics.

Rather than focus on the high end of monitoring/evaluation (M&E) of knowledge work, I’d like to look into current web metrics in use and to understand what they are really capturing, what they fail to capture or what problems they pose and what links them together or what we could do with them.

The social media analytics framework below proposes a good entry point to this exploration. I will come back at a later stage to such analytical frameworks.

A framework for Social Media Analytics (Gaurav Mishra)


The social web metrics we have at our disposal to assess knowledge work are related along a chain from attention to action (the famous [social] AIDA again) – or from content to collective intelligence as suggested above. First comes discovering a particular resource; last comes using it, appreciating its use and being transformed (at scale) by it. Each resource, each piece of content, hopes to tick as many of these boxes as possible. With each comes a potentially useful insight, but also limitations… and mitigations.

Here is an overview of some objectives we might legitimately have with our content (which metrics try, mostly insufficiently, to capture).

Find me!

Sometimes we just bump into a site or resource on the web while looking for something else… A page view is a reflection of that. Page views can thus be intentional views (effectively linked to your content/focus) or accidental views (someone ends up on a web page after looking for a term that is only vaguely connected with that content). An instance of this: I suspect quite a few visitors to my blog are actually looking for information about the gay podcast ‘the feast of fools’ but they end up on this blog post – no connection other than the name. Then again, people do come across your content for good reasons too.

Limitations and mitigation: Page views are thus not entirely helpful. Oh, and in case you didn’t know by now, a hit really is not a useful metric, even though it’s all about finding stuff online.

The remedy is to properly (meta-)tag your content, use descriptions on your photos, and add links to relevant related content. Links are the currency of our era of search engine optimisation: the more you are linked to by others, the more likely people will find your content genuinely related to their focus, for specific search terms.

Grab my attention!

Landing on a page or resource is the first step. Attracting our curiosity as online visitors is the second, and that is not straightforward given our 8-second attention span. (Intentional) page views are still the main metric here. But so are retweets on Twitter.

Limitations and mitigation:

This is where a good title comes in handy (one of the many useful tips from Ian Thorpe in sharing his blogging experience). But a retweet in particular doesn’t mean that the person re-tweeting the page/resource actually liked it… or even read it. These visitors just seemed to like your shop window’s look and feel! Make sure they like your content for the right reasons, beyond that sweet first impression. All the ‘Find me!’ advice applies here too!

Like my content!

Ok, now people have checked your content. And they enjoy it! They ‘like’ it. Or they +1 it, or  they rate it… There are various ways to show appreciation for content. Perhaps the most valuable one is to comment on content and show appreciation this way. It’s useful feedback, provided it’s genuine.

Limitations and mitigation: The danger is that some people ‘like’ just because the like button is easy to push, with or without checking the content in the first place (see the shop window problem above). The other problem is that there is still no indication as to why they like your content (perhaps the tone, the image you chose, or the serendipity effect that led them to your content at a moment when they were looking for something similar). Most liking metrics are only partly useful, unless a certain volume of these signals is aggregated across various collections and starts indicating trends.

Focused comments, however, should be encouraged, as they help find out why people liked your content and help you engage with your audience one step further…

Pass it on to others!

If people liked your content, perhaps they didn’t rate it (most people find giving feedback a daunting step) but they might have shared it with others. Metrics here include: links to your content, social shares (re-tweets are a case in point, but Facebook shares, Google+ shares and email shares are other examples), citations of your work etc. People might be sharing a link to your content or the full content itself (re-blogging is an indirect metric of sharing here).

Limitations and mitigation: The same danger of people sharing without having checked your content is still looming. But sharing content is generally a better indication of appreciation for your content, especially when it is shared in quantity and quality. Pay attention to who shares your content. Trusted and valued sources are great indicators of the quality of your content. I am not aware of tools that track the sharing of content with a specific breakdown of the popularity of sharing sources but that would be useful.
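Such a breakdown of sharing sources could be sketched along these lines – a minimal, hypothetical example (the sharer handles, audience sizes and share events below are invented for illustration; real figures would have to come from a social platform’s own data):

```python
from collections import defaultdict

# Hypothetical share records: (sharer, estimated audience size) per share event.
shares = [
    ("@km4dev", 5200),
    ("@random_bot", 12),
    ("@km4dev", 5200),
    ("@water_expert", 870),
]

def share_breakdown(shares):
    """Group share events by source and estimate the reach each source adds."""
    counts = defaultdict(int)
    reach = defaultdict(int)
    for source, audience in shares:
        counts[source] += 1
        reach[source] += audience
    # Rank sources by estimated reach: the 'popularity of sharing sources'.
    ranking = sorted(reach.items(), key=lambda kv: kv[1], reverse=True)
    return ranking, dict(counts)

ranking, counts = share_breakdown(shares)
print(ranking)  # [('@km4dev', 10400), ('@water_expert', 870), ('@random_bot', 12)]
print(counts)   # {'@km4dev': 2, '@random_bot': 1, '@water_expert': 1}
```

Even a crude ranking like this would show at a glance whether content is being passed on by trusted, well-followed sources or mostly by marginal accounts.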

Keep me for later!

People may keep track of your content for different reasons:

  • They haven’t read it yet but want to do so later when they find time for it;
  • They want to share it with others but haven’t gotten around to it;
  • They like it so much – or find it useful enough – that they want to collect and curate your content.

At any rate, they seem attracted to your content enough to keep it for later.

Metrics here include: Bookmarks, favourites, downloads etc. These are possibly good measures of some following for your content.

Limitations and mitigation: Two of the three reasons above do not point to any particular appreciation. Resources could be put aside and never used again. Even when downloaded, their effective use depends on the discipline and willingness of the bookmarker to actually use his/her saved resources for another activity. Here again, large numbers of these metrics can plot useful trends, but individual measurements or isolated bookmarks remain marginally useful.

(re-)Use me!

The objective of your content is to be used – and re-used. Directly or indirectly, now or later, as intended or otherwise, as direct inspiration or diffuse source of innovation. But this is very difficult to track. Only direct references in someone else’s work are (usually) straightforward indications that content is being used.

Readily available metrics thus include: reblogs, citations, and links in other important writings and works. Testimonies (e.g. stories of change and the like) are not a given in social media but are probably the best approach to hear about the use of content. Indirectly, comments may play a similar role, if they mention how the content is being applied somewhere else (as opposed to just reacting to the content itself).

Limitations and mitigation: It is very difficult to get such references and accounts of use – but from this point on it becomes really interesting and relevant. Aiming to collect such testimonies and developing a culture of feedback and critical reflection (e.g. by means of comments, ratings etc.) both contribute to getting better at, and closer to, collecting interesting results about the use of content.

Let me make a better you!

One of the best results we can hope for, for any resource we develop, is for it to contribute to changing behaviour. Using content doesn’t equate to change. Change is very elusive and difficult to assess, as it is an intimate matter, which perhaps requires the person changing to realise that they are changing.

Among the metrics here, the most important ones are testimonies and, to a lesser extent, comments (provided these comments relate to the usefulness and effect of the resource itself and how it was used, not just to its content). These are not available as web metrics (yet?) and would more typically be part of process/outcome/impact monitoring efforts. But these results are worth tracking down.

Limitations and mitigation: As with the use of content, accounts of change brought about by resources (or otherwise) are very diffuse and hard to collect, and even harder to attribute, unless mentioned in the testimonies. The same approach as for use applies here; it just goes one level deeper in the exploration.

Become a movement thanks to me!

The ultimate goal of any resource is to be so seminal that it is referred to over and over again and tends to provoke a domino effect on the behaviour of many – what the Bible, the Quran or the Little Red Book achieved. Tough job…

Limitations and mitigation: Frankly, if you are at that stage, you should be blogging about this instead of me ;) I can only say that radical innovation, use of locally nested word-of-mouth conversion effects and tapping into the viral potential of some technologies and their disruptive nature might offer shorter paths to this holy grail.

In conclusion…

What is difficult is that these metrics do not follow one another in a neat linear fashion. Furthermore, some of these metrics only become useful at a certain scale – or in combination with other metrics: e.g. only when various people have downloaded and favourited a resource can one tell that it probably has a transformative effect on people. The only sure way to get a relatively solid account of evidence is through testimonies – if they are truthful and only marginally biased.

The table below summarises some of the metrics available to suggest evidence of any impact of your content/resources.

Finding – Direct metrics: page views, hits.
Liking – Direct metrics: likes, +1’s, ratings, comments. Indirect metrics: retweets and other social shares.
Sharing – Direct metrics: links, social shares, citations. Indirect metrics: downloads, comments, reblogs.
Keeping – Direct metrics: downloads, bookmarks, favourites. Indirect metrics: re-tweets, social shares, (some) social ratings.
Using – Direct metrics: citations, links, testimonies, reblogs. Indirect metrics: comments.
Being transformed by it – Direct metrics: N/A. Indirect metrics: comments, testimonies.
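As a rough illustration of how such signals could be combined, here is a minimal sketch – the metric names, the metric-to-stage mapping and the counts are all hypothetical, not taken from any real analytics tool – that rolls raw metric counts up into the stages of the attention-to-action chain discussed above:

```python
# The funnel stages discussed above, from attention to action.
FUNNEL = ["finding", "liking", "sharing", "keeping", "using"]

# Hypothetical mapping from raw web metrics to funnel stages.
METRIC_STAGE = {
    "page_view": "finding",
    "like": "liking",
    "rating": "liking",
    "retweet": "sharing",
    "bookmark": "keeping",
    "download": "keeping",
    "citation": "using",
    "reblog": "using",
}

def funnel_totals(events):
    """Roll raw metric counts up into totals per funnel stage."""
    totals = {stage: 0 for stage in FUNNEL}
    for metric, count in events.items():
        stage = METRIC_STAGE.get(metric)
        if stage:
            totals[stage] += count
    return totals

events = {"page_view": 500, "like": 24, "retweet": 12, "bookmark": 7, "citation": 2}
print(funnel_totals(events))
# {'finding': 500, 'liking': 24, 'sharing': 12, 'keeping': 7, 'using': 2}
```

The steep drop from one stage to the next in a roll-up like this is exactly the point made above: large volumes at the top of the chain say little, while the few signals at the bottom are the ones that hint at actual use.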

All in all, what matters in these web metrics is a combination of: effective consumption of the content, appreciation of that content (its quality and relevance), intent to use it, effective use of it, transformation brought about by that use, and the scale of that transformation.

There are many tools to collect these metrics. But the tools only address the collection part (your demand, as a content provider wishing for feedback). What is more difficult is the supply of such evidence, and that comes only progressively, with a culture of feedback and critical inquiry… Until that culture is there, we will always navigate between the cracks of evidence and personal confidence.


A journey through five years of blogging


On this day, exactly five years ago, I started blogging. On this very blog. My first time ever. Not a particularly great post, actually. Nor were many of the posts that followed that primal scream on the web.


Five years of blogging on KM & co. and much more coming (Credits: Stephen Mitchell)

But as for many others before me (Leo Babauta, Harold Jarche, Irving Wladawsky-Berger and most recently Jeff Bullas in the corporate world), blogging has become a central part of my practice. A hobby. A habit. A drug. A source of comfort and peace. A source of intuition and emotions. A passion – shared… And many more useful things.

So for this five-year anniversary I’d like to offer a journey through these five years of blogging, selecting some posts that may have gone unnoticed (or not) but really matter to me and characterise the various phases I went through in this blogging journey…

The genesis: confusion of a confusiast

That first post was by a confusiast, but it was also quite confused. I knew I wanted to blog about knowledge management (my main field [of interest]), about communication (my main activity), about monitoring and evaluation (my extended hobby, to focus on learning), about complexity (my main source of confusion and fascination) and other things that popped up in my brain along the way. And I did a bit of all that.

Perhaps the most important posts of that period were:

Back in that period, there was not much quality in my blogging generally (not that I don’t have my bad blogging days now too): I hadn’t clarified my thoughts or my sources of information (sites) and knowledge (people and networks), I had not yet found my writing style, I didn’t link, I didn’t have anyone to converse with… But most importantly, I had started blogging, and that hugely helped me make sense of information over time…

Another asset was my connection with KM4Dev. It is perhaps the main thing that pushed me to blog, but also to tweet, to use Slideshare, Del.icio.us and FlickR, and to facilitate workshops in a different way etc. So in a way that genesis period of blogging owes much to this great community, which has always been an extraordinary source of inspiration.

The IRC period

My previous employer – the International Water and Sanitation Centre (IRC) – is a marvellous organisation, full of learning, innovation, critical thinking, autonomy and fun… so much so that I worked there for almost ten years. IRC’s cutting-edge work really gave me lots of inspiration for blogging before I moved on to focusing on my own ‘pet topics’. So back in those days I blogged a lot about multi-stakeholder processes (such as learning alliances), process documentation, resource centre networks, and sector learning generally.

This is a period during which I focused a lot on monitoring and evaluation (M&E), as I got more and more involved in that type of work. At any rate, most of my posts from that period related to the work I was doing at IRC.

Some blog posts I enjoyed writing, from that period:

My learning take at IRC

Progressively I defined my own route on the blogging seas and took more and more liberty to use my IRC work to reflect on broader topics of discussion. In that period I started to be involved in various initiatives that went beyond IRC: SA-GE, the francophone KM4Dev network; the IKM-Emergent research programme; my work in the core group of KM4Dev and as KM4Dev journal editor; my involvement in the KMers group of tweeters (backed by a much more thorough and consistent use of Twitter) etc.

This is where I also put more and more emphasis on learning in all my KM, comms and M&E work – realising that knowledge management was meant to serve that learning objective to improve, more than anything else – and that comms with learning (and sharing) was in my eyes a lot more valuable than comms with messages.

The blog posts from that period reflect that shift:

An escapist route?

As working at IRC became more of a burden – or a fatigue – towards the end, I also shifted my focus even more towards other topics and external networks that mattered to me: IKM-Emergent once again, but also the AgShare Fair group (which eventually led me to work for ILRI). During that period I also took a long blogging holiday as I went through a difficult period… only to come back with a renewed and firm commitment to blogging regularly, as I realised I really enjoyed and needed it.

During the last 15 months of my time at IRC I therefore moved on from focusing on the IRC work to look more broadly at e.g. development work more generally, education, conditions for effectiveness etc.

Some of the blog posts from that period:

Working for ILRI

And then in November last year I started working for the International Livestock Research Institute (ILRI), in a fantastic team of really dedicated and good knowledge and information professionals. The bulk of my work when I started at ILRI revolved around facilitation (as you can see from this overview of events we supported, there have been many workshops since November 2011). So it is only normal that quite a few of my posts in this new phase have been about event and meeting facilitation.

But there have also been a few posts about the connection between communication and knowledge work / learning. Although my workload increased, paradoxically I have never been as active on this blog as since I joined ILRI, posting up to 3-4 posts some weeks. The work environment in our team and around its projects is stimulating enough that I find lots of matter to think and blog about.

Some blog posts from this period:

The work at ILRI is changing little by little and this means I might end up blogging about different matters…

(Agile) KM for me... and you? as a word cloud


The next fork on the road?

Now I’m still working for ILRI (for almost a year to the day, as I started on 1 November 2011) but I am also broadening my scope to other areas that reflect relevant topics for ILRI and for me: information management, monitoring of knowledge work (re-delving into the IKM work I did on that, but with an emphasis on practical routine indicators and ways to assess the use of our ‘knowledge work’), training people on information and knowledge management, complexity theories in the field of agricultural innovation systems, change management, agile KM and the importance of mobilising everyone towards ongoing change…

I can’t see further than that, but perhaps you have ideas as to where I should focus my blogging and our conversations next?

The wealth of communities of practice – pointers to assess networked value?


Building upon a.o. the CoP 4L model by Seely Brown, Wenger and Lave (credits: dcleesfo / FlickR)

The KM4Dev community of practice is going through an intensive action phase, beyond the conversations, as the grant given by the International Fund for Agricultural Development (IFAD) for the period 2012-2013 is leading to a number of interesting activities.

Among them is a learning and monitoring (L&M) plan which really focuses on learning from all the other IFAD-funded activities, rather than on monitoring (in the sense of checking delivery of outputs against the plans). The focus of our L&M plan is the networked development and value creation of a community of practice (CoP): how does it fare in terms of governance principles, identity and level of activity, engagement channels, and learning / outcomes (the area where the most important value creation really happens)?

I am involved in the learning and monitoring team and as part of it have started (with support from other L&M team members) developing the table below.

This table offers a suggested selection of ‘learning areas’ that in our eyes matter when working in communities of practice such as KM4Dev.

Governance
  • Transparency: Systematic sharing and accessibility of the results of conversations, decisions, initiatives, reification activities (see below) etc., including the selection process for core group members.
  • Vision, values and principles: Development, existence, clarity, understanding and acceptance of a general vision, principles and values for the community of practice, by and for its members. Normally this is not really a ‘learning’ area, but if it isn’t in place it becomes one.
  • Leadership: Demonstrated (and particularly accepted) leadership of the core group, and occasionally of others, by other members of the KM4Dev community. Is there any dissonance between the two groups?
  • Mobilisation and commitment: See below. This also appears under governance because people involved in the CoP governance have to mobilise resources and commit themselves to activities in a specific way.

Identity and activity
  • Diversity and expansion: Profile of members of the community and the core group (language, region, type of organisation etc.); growth and expansion (frequency of new members, how etc.) and ties with external networks.
  • Conversation: Frequency and quality of conversations around the domain (knowledge management for development) or the community (KM4Dev).
  • Reification: Tendency (quality and frequency) of the community to ‘reify’ conversations into tangible outputs, e.g. a blog post, wiki entry or journal article. This also has a bearing on learning and outcomes.
  • Mobilisation and commitment: Capacity of core group members and other KM4Dev members to mobilise themselves, commit to activities (which activities? to what extent/degree of involvement?) and indeed deliver according to the plan and with strong learning. This also has a bearing on governance.
  • Participation: Degree of participation of different members in conversations and other activities.
  • Reflection: Evidence of social learning, and of the development and sharing of new insights as part of activities (and results; this has a bearing on learning/outcomes).
  • Cohesion: Evidence that relationships between members of the community are good and that everyone finds their place in the community while feeling part of a whole.

(Learning and) outcomes
  • Reification / outputs: See above. Production of outputs (quality/frequency?), planned or spontaneous.
  • Reflection / changed thinking and discourse: See above. Evidence that reflections from the KM4Dev community have changed thinking and/or discourse among others, e.g. citations, semantic analysis.
  • Inspiration / changed behaviour: Evidence of change as a new way to proceed, inspired by KM4Dev activities.
  • Innovation / changed artefact or approach: Evidence of KM4Dev influencing the development of a new artefact or method, codified concretely.
  • Impact: Evidence of larger changes (in autonomy of decision and well-being related to livelihoods) where KM4Dev activities have inspired or influenced others within the community and particularly beyond. Caveat: attribution.

KM4Dev engagement channels
  • Suitability for participation: The different KM4Dev channels (mailing list, wiki, Ning community, annual meetings) foster dialogue, engagement and learning.
  • Ease of use / availability of KM4Dev outputs: The different channels are easy to use and complement each other. They make KM4Dev activity outputs visible and available.
  • Identity: The governance of KM4Dev is clear in all engagement channels.

This table and the plan it belongs to triggered a very rich discussion in the KM4Dev core group over the past couple of weeks. This conversation was meant to provide some initial reactions before opening the discussion up to the entire community. As we are about to embark on a much wider and open consultation process with the rest of the community, I thought it might be useful to post this here and see whether any of you have suggestions or feedback on these learning areas…

At the IKM Table (2): individual agency vs. organisational remit, accountability and impact pathways for the future of IKM-Emergent


Day 2 of the final IKM workshop dedicated to ‘practice-based change’. As much as on day 1, there is a lot on the menu of this second day:

  • Individual agency vs. organisational remit;
  • Accountability;
  • Impact and change pathways;
  • A possible extension of the programme: IKM-2
Day 2 - the conversation and cross-thumping of ideas continues

On individual agency and organisational remit:

We are made of a complex set of imbricated identities and cultures that manifest themselves around us in relation to the other actors we are engaging with. These complex layers of our personality may clash with the organisational remit that is sometimes our imposed ballpark. Recognising complexity at this junction, and the degree of influence of individual agents, is an important step towards more meaningful and effective development.

Pressed for time, we did not talk a lot about this. Yet we identified a few drivers that have much resonance in development work:

  • Just as organisations don’t tweet (people do), organisations do not trigger change: individual people do. Pete Cranston mentioned a study of three cases of critical change within Oxfam, all triggered by individuals: a manager with the power to change, an aspirational individual quickly building an alliance etc. Our impact pathways need to recognise the unmistakable contribution of individual ‘change agents’ (or positive deviants) in any specific process or generic model of social change. Individuals closely related to resource generation obviously have crucial leverage and play a special role in the constellation of agents that matter in the impact pathway;
  • We are obscured by our scale: in politics it took us a long time to realise there were crucial dynamics below nation-states and above them. In a similar vein, in development let’s go beyond merely the organisational scale to focus on individual agency as well as the network scale – all organisations and individuals are part of various networks which affect both the individuals and the organisations engaged in them. Teams also play an important role in exploring and implementing new ways – it is at that level that trust is most actively built and activities planned and implemented. The impact achieved by teams ripples up, in sometimes mysterious ways, to the organisational level;
  • These differences of scale tend to place subtle tensions on individuals, between their personal perspectives and the organisational priorities. The multiple identities and knowledges (including local knowledge) are inherent in ourselves too, adding layers of complexity as the predominance of one identity layer over another plays out in relation to the other people around us – see the presentation by Valerie Brown.

On accountability:

Accountability is a central piece of the development puzzle, yet so far we have embedded it in too linear a fashion, usually upwards, to our funders. Accountability should also embrace the wider set of stakeholders concerned in development initiatives, including beneficiaries and peers, and find alternative ways to be recognised, acted upon and expressed.

The crux of our accountability discussion was around the tension to reconcile accountability with the full set of actors that we are interacting with in our development initiatives. The work carried out by CARE in Nepal (recently finished and soon to be uploaded on the page listing all IKM documents) is a testimony that accountability can and should be multi-faceted.

  • At the core of this conversation lies the question: whose value, whose change, whose accountability? We perhaps jump too quickly to the idea that we know which (set of) actor(s) has the most value to bring and demonstrate, that their theory of change matters more than that of other actors, and that our accountability system should be geared towards their needs.
  • As for the theory of change, we already mentioned on day 1 that it is just a tool, and any simple tool can be used smartly (despite its inherent technical limitations), just as any complex tool can be used daftly (regardless of the inherent flexibility it may have). That said, a theory of change (of which one guide can be found here) can be quite powerful for pondering the key questions above. A collective theory of change is even more powerful.
  • Perhaps a practical way forward with accountability is to identify, early on in a development initiative, whom we want to invite to map out the big picture of the initiative and the vision we wish to give it. The set of actors participating in that reflection would represent the set of actors towards whom the initiative should be accountable. In the process, this consultation could reveal what we can safely promise to ‘deliver’ to whom, and what we can only try and unpack further. This might even lead to shaping a tree map of outcomes that might be simple, complicated, complex or chaotic (thereby indicating the type of approach that might be most adequate).
  • More often, in practice, we end up with a theory of change (or a similar visioning exercise) that has been prepared by a small team without much consultation. This implies a much simpler accountability mechanism with no downward accountability, only upward accountability to the funding agency or the management of the initiative. This may also imply that the chances of developing local ownership – arguably a crucial prerequisite for sustainable results – are thereby much dimmer too.
  • Robin Vincent also noted that the peer accountability pervading social media (Twitter, blogs), used to recognise the validity and interest of a particular person, could be a crucial mechanism to incorporate: a way of letting good content and insights come to the surface while enriching accountability mechanisms.

On impact and change pathways:

The next discussion focused on the impact and change pathways of IKM-Emergent. Each member drew a picture of their reflections on the issue, whether specific or general, practical or theoretical, current or future. We produced eight rich drawings (see gallery below) and discussed them briefly, distilling concluding thoughts about impact and the influence that IKM-Emergent has or might have.

  • Impact happens at various scales: at individual level (for oneself and beyond), at team level, at organisational level and at network level (at the intersections of our identities, relations and commitments), and it follows various drivers, strategies, instruments and channels. Keeping that complex picture in mind guides our impact-seeking work.
  • Our impact is anyway dependent on larger political dynamics that affect a climate for change. The latter could become negative, implying that development initiatives should stop, or positive and leading to new definitions and norms;
  • In this picture, IKM seems to play a key role at a number of junctions: experimentation with development practices, network development, counter-evidence to broadly accepted development narratives, recognition of individual agency and its contribution to social movements, ‘navigating’ (or coping with) complexity and developing resilience, documenting case studies of how change happens, innovative approaches to planning and evaluation, and developing knowledge commons through collaboration;
  • And there certainly are lots of sympathetic agents currently working in funding agencies, international NGOs, social movements, the media as well as individual consultants. Collectively they can help;
  • The combination of public value, capacities and the authorising environment are some of the signposts around IKM’s ballpark;
  • IKM’s added value is around understanding the miracle that happens at the intersection between, on the one hand, interactions across many different actors and, on the other hand, systemic change at personal / organisational / discourse level. We can play a role by adding our approach, based on flexibility, integrity, activism and sense-making;
  • If we are to play that role of documenting the miracle and other pathways to change, we should remain realistic: We are led to believe or let ourselves believe that evidence-based decision-making is THE way to inform (development) policies and practices, when – in practice – we might follow more promising pathways through developing new knowledge metaphors, frames of development, preserving documentary records and interlinking knowledges;
  • There is also an element of balancing energy for the fights we pick: Impact and engagement with people that are not necessarily attuned to the principles, values and approaches of IKM-Emergent takes energy. But it matters a lot. So we might also interact with like-minded people and organisations to regain some of that energy.
  • Finally, there are lots of exchanges and interactions and great development initiatives already happening on the ground. The layer above that, where INGOs and donor agencies too often locate themselves, is too limited as such but our impact pathway is perhaps situated at the intersection between these two – how can we amplify good change happening on the ground?

On IKM-Emergent 2:

In the final part of the workshop, after an introduction by Sarah Cummings about where we are at, we surfaced key issues that will be important themes for the sequel programme suggested for IKM-Emergent (the so-called ‘IKM 2’). We briefly discussed a) practice-based change, b) local content and knowledge and c) communication and engagement.

On practice-based change: In this important strand, we debated the importance of the collective as against the individual pieces of work – a challenging issue in IKM-1. Building a social movement and synthesising work are on the menu, although at the same time it is clear that each team or group of individuals working on independent pieces of work needs to find their breathing space and, to some degree, possibly detach themselves from the collective. IKM-Emergent has been successful at unearthing rich research and insights thanks to the liberty left to each group to carve out its own space. But the message is clear: connecting the dots helps bring everyone on board and picture the wider collage that an IKM-2 might collectively represent.

On local content and knowledge: In this equally important strand, language is key. So is the distortion of knowledge. We want to understand how localisation of information and technology may differ from one place to the next, we want to move on to ‘particular knowledges’, zooming in on specifics to draw on them. We want to further explore diverse ways of connecting with multiple knowledges through e.g. dancing, objects, non-ICT media. We want to better understand the dynamics of local social movements and knowledge processes and do that with the large African networks that we have been working with.

How is this all to unfold? By creating a network space that allows content aggregation, meetings online and offline, experimental research and production of artefacts, organising exhibitions and happenings and integrating social media.

On communication, monitoring and engagement: Paradoxically, and despite the efforts of the IKM management, this is an area that could have been stronger. A communication strategy came very late in the process, was somewhat disconnected from the work, and was message-based rather than focused on engagement and collective sense-making.

What could we do to improve this in IKM-2?

Further integrating communication and M&E, focusing on collective… conversations, engagement, reflection, learning and sense-making. And recognising that both communication and M&E are everyone’s business – even though we need someone (a team?) in the programme to ‘garden communication’, prune our networks (to keep interacting with relevant actors at the edges) and support staff members and IKM partners in connecting to the communication set-up of IKM-2.

This implies that internally:

  • The success of communication depends also on the production of excellent content to engage people on and around. The constant exploration and openness to new opportunities that characterised much of IKM-1 should be maintained to ensure a wide diversity of mutually reinforcing sources of great reflection and conversation;
  • More conscious efforts are taken to distil key insights from ongoing work – even though we recognise the necessity of (a degree of) freedom and disconnect to develop good work;
  • Distilling those insights might benefit from strong process documentation (1), undertaken by a social reporter (2), supported by regular collective sense-making sessions where those key insights and ‘connecting points’ between work strands could be identified and analysed.
  • We aim at ‘quick and dirty’ (link to post) communication cycles to quickly churn out insights and discuss them, rather than waiting for long peer-review processes that slow communication down and reduce the timeliness (and relevance) of the work undertaken;
  • There is a strong need for consistent communication (supported by proper information and training for staff members to feel comfortable with the communication tools and processes) and robust information management (tagging and meta-tagging, long-term wiki management etc. – to be defined).

And externally it implies:

  • That we care for the growing community of conversation that we are having – as an overarching goal for our comms work;
  • That we use the insights to regularly engage a wider group by e.g. organising thematic discussions around emerging (sets of) pieces of work from IKM-2 and invite external actors to connect to and expand that body of work, possibly fund parts of it etc.
  • That we find innovative ways of relating content and ‘re-using’ it smartly, e.g. by writing ‘un-books’ with regular updates on the wiki, blogging, syndicating content via RSS feeds etc.;
  • That we use different communication tools and channels to engage with a multi-faceted audience, so that they find comfortable ways to interact with us while at the same time we titillate their curiosity to try out alternative modes of communication too. There are many relations between external communication and the ‘local content/knowledge’ strand with respect to alternative modes of communication that may not reinforce Western modes of, and preferences for, communication.

 

What now?

After two days of workshops and five years of collective work, we come out with an incredibly rich set of insights – of which this workshop is only the emerged tip of the iceberg – a wide collection of outputs (with more to come), a number of messages for various groups and a dedication to engage with them on the basis of all the above in an expanded programme. There is no funding yet for IKM-2, but with resources, ideas and ambitions there may well be all the elements to take us down that road and find like-minded spirits to transform development practices. Impact pathways don’t need funding to work; we are on it. Wanna join?

 

Notes:

(1) Process documentation is a soft monitoring approach including a mixture of tools and techniques to ensure that a given initiative’s theory of change is kept in check and questioned throughout its lifetime and ultimately leads to a set of lessons to inform similar initiatives in the future. It has been better described in this IRC publication: Documenting change, an introduction to process documentation.

(2) Social reporting is very close to process documentation although it is usually applied for specific events rather than long term processes. It is better explained in this ICT-KM blog post.

Related blog posts:

At the IKM table: linearity, participation, accountability and individual agency on the practice-based change menu (1)


On 20 and 21 February 2012, the London-based Wellcome Collection is the stage for the final workshop organised by the Information Knowledge Management Emergent (IKM-Emergent or ‘IKM-E’) programme. Ten IKM-E members are looking at the body of work completed over the past five years in this DGIS-funded research programme and trying to unpack four key themes that interweave insights from the three working groups which have been active in the programme:

  1. Linearity and predictability;
  2. Participation and engagement;
  3. Individual agency and organisational remit;
  4. Accountability

This very rich programme is also an intermediary step towards a suggested extension of the programme (“IKM 2”).

In this post I’m summarising quite a few of the issues tackled during the first day of the workshop, covering the first two points on the list above.

On linearity and predictability:

Linear approaches to development – suggesting that planning is a useful exercise to map out and follow a predictable causal series of events – are delusional and ineffective. We would be better advised to use emergent perspectives, as they are more realistic, if not more certain.

Linearity and predictability put a strong spotlight on the current (and desired alternative) planning tools that we have at our disposal or are sometimes forced to use, and on the relationship we entertain with the actors promoting these specific planning tools.

Planning tools

After trying out so many ineffective approaches for so long, it seems clear that aspirational intent can act as a crucial element to mitigate some of the negative effects of linearity and predictability. Planning tools can be positivist, urging a fixed and causal course of events and focusing on one highlighted path – as is too often the case with practice around the logical framework. Or they can be aspirational, focusing on the end destination or the hoped-for objective and striving to test the assumptions underlying a certain pathway to impact (at a certain time).

Different situations require different planning approaches. Following the Cynefin framework approach, we might be facing simple, complicated, complex or chaotic situations and we will not respond the same way to each of those. A complex social change process may require planning that entails regular or thorough consultation from various stakeholder groups, a (more) simple approach such as an inoculation campaign may just require ‘getting on with the job’ without a heavy consultation process.

At any rate, planning mechanisms are one thing, but the reality on the ground is often different, and keeping a careful eye on co-creating reality on the ground is perhaps the best approach to ensure stronger and more realistic development, reflecting opportunities and embracing natural feedback mechanisms (the reality call).

There are strong power lobbies that might go against this intention. Against such remote-control mechanisms – sometimes following a tokenistic approach to participation while really hoarding discretionary decision-making power – we need distanced checks and balances, hinting at accountability.

Managing the relationship leading to planning mechanisms

Planning tools are one side of the coin. The other side of the coin is the relationship that you maintain with the funding or managing agency that requires you to use these planning tools.

Although donor agencies might seem like ‘laggards’ in some way, managing the relationship with them implies that we should not stigmatise their lack of flexibility and insufficient will to change. In a more optimistic way, managing our relationship with them may also mean that we need to move away from the contractual nature of the relations that characterise much of development work.

Ways to influence that relationship include, among others, seeking and using the evidence we have (e.g. stories of change, counter-examples from one’s own past practice or from others’) and advocating with it. Process documentation is crucial here to demonstrate the evidence around the value of process work and the general conditions under which development interventions have been designed and implemented. It is our duty to negotiate smart monitoring and evaluation in the intervention, including e.g. process documentation, the use of a theory of change, and the non-instrumentalisation of tools (in the way that logical frameworks have been instrumentalised in the past). In this sense, tools do not matter much as such; the practice behind the tools matters a lot more.

Finally, there is much importance in changing relationships with the donor to make the plan more effective: trust is central to effective relationships. And we can build trust with donors by reaching out to them: if they need some degree of predictability, although we cannot necessarily offer it, we can try, and talk about our intent to reduce uncertainty. Most importantly, in the process we are exposing them to uncertainty and forcing them to deal with it, which helps them feel more comfortable with uncertainty and paradox. Convincing donors and managers of this may seem like a major challenge at first, but then again, every CEO or manager knows that their management practice does not come from a strict application of ‘the golden book of management’. We all know that reality is more complex than we would like it to be. It is safe and sound management practice to recognise that complexity and the impossibility of predicting a certain course of events.

Perhaps, also, the best way to manage our relationship with our donors in a not-so-linear, not-so-predictable way is to lead by example: by being a shining, living example of our experience and comfort with a certain level of uncertainty, and by showing that recognising the complexity and the impossibility of predicting a certain course of events is a sound and realistic management approach to development. Getting that window of opportunity to influence based on our own example depends much on the trust developed with our donors.

Trust is not only a result of time spent working and discussing together but also the result of surfacing the deeper values and principles that bind and unite us (or not). The conception of development as being results-based or relationship-based influences this, and so does the ‘funding time span’ in which we implement our initiatives.

Time and space, moderating and maintaining the process

The default development cooperation and funding mechanism is the project, with its typically limited lifetime and unrealistic level of endowment (in terms of the resources, capacities etc. available). In the past, a better approach aimed at funding institutions, thereby allowing those organisations to afford the luxury of learning, critical thinking and other original activities. An even more ideal funding mechanism would favour endemic (e.g. civic-driven) social movements where local capacities to self-organise are encouraged and supported over a period that may extend beyond a project lifetime. If this were the default approach, trust would become a common currency and we would indeed have to engage in longer-term partnerships, a better guarantee of stronger development results.

A final way to develop tolerance for multiple knowledges and uncertainty is to bring various actors together and to use facilitation in these workshops so as to allow all participants to reveal their personal (knowledge culture) perspectives while cohabiting with each other. Facilitation becomes de facto a powerful approach to plant new ideas, verging on the idea of ‘facipulation’ (facilitation-manipulation).

Beyond a given development intervention, a way to make its legacy live on is to plug those ideas onto networks that will keep exploring the learning capital of that intervention.

What is the value proposition of all this to donors? Cynically, perhaps, the innovativeness of working in those ways; much more importantly, the promise of sustainable results – better guaranteed through embedded, local work. The use of metaphors can be enlightening here, in the sense that it suggests different ideas about what you can invest in projects and short-term relationships: gardening, for instance – planting new initiatives in an existing soil/bed or putting fertiliser on existing plants…

Interesting links related to the discussion:


On participation and engagement:

Sustainable, effective development interventions are informed by careful and consistent participation and engagement, recognising the value of multiple knowledges and cherishing respect for different perspectives, as part of a general scientific curiosity and humility as to what we know about what works and what doesn’t, in development and generally.

The second strand we explored on day 1 was participation and engagement with multiple knowledges. This boils down to the question: how do we value different knowledges and particularly ‘local knowledge’, bearing in mind that local knowledge is not a synonym for Southern knowledge, because we all possess some local knowledge, regardless of where we live?

A sound approach to valuing participation and engagement is to recognise the importance of creating the bigger picture in our complex social initiatives. The concept of cognitive dissonance is particularly helpful here. As communities of people we (should) value some of our practices and document them so that we create and recognise a bigger collective whole. But then we have to realise that something might be missing from that collective narrative, and that we might have to play the devil’s advocate to challenge our thinking – this is cognitive dissonance at play. It is more likely to happen by bringing in external or alternative points of view, but also e.g. by using facilitation methods that put the onus on participants to adopt a different perspective (e.g. De Bono’s six thinking hats). Development work has to include cognitive dissonance to create better conditions for combining different knowledges.

Participation and engagement is also conditioned by power play of course, but also by our comfort zones; e.g. as raised in a recent KM4Dev discussion, we are usually not keen on hiring people with different perspectives, who might challenge the current situation. We also don’t like the frictions that come about with bringing different people to the table: we don’t like to rediscuss the obvious, we don’t like to renegotiate meaning but that is exactly what is necessary for multiple knowledges to create a trustworthy space. The tension between deepening the field and expanding it laterally with new people is an important tension, in workshops as in development initiatives.

We may also have to adopt different approaches and responses in the face of multi-faceted resistance to change: some people need to become aware of the gaps; others are aware but not willing, because they don’t see the value of inviting multiple perspectives or feel threatened by it; others still are aware and don’t feel threatened, but need to be challenged beyond their comfort zone. Some will need ideas, others principles, others yet actions.

At any rate, inviting participation calls for inviting related accountability mechanisms. Accountability (which will come back on the menu on day 2) is not just towards donors but also towards the people whose participation we invite, or we run the risk of ‘tokenising’ participation (pretending that we are participatory without changing the decision-making process). When one interviews a person, one has to make sure that the transcription faithfully reflects what the interviewee said. So with participation: participants have to be made aware that their inputs are valued and reflected in the wider engagement process, not just interpreted as ‘a tick in the participatory box’.

Participation and engagement opens up the reflective and conversational space to collective engagement, which is a very complex process, as highlighted in Charles Dhewa’s model of collective sense-making in his work on traducture. A prerequisite for that collective engagement and sense-making is the self-confidence that you develop in your own knowledge. For ‘local knowledge’, this is a very difficult requirement, not least because, even in their own context, proponents of local knowledge might be discriminated against and rejected by others for the lack of rigour they supposedly display.

So how to invite participation and engagement?

Values and principles are guiding pointers. Respect (for oneself and others) and humility or curiosity are great lights on the complex path to collective sense-making (as illustrated by Charles Dhewa’s graph below). They guide our initiatives by preserving a learning attitude among each and every one of us. Perhaps development should grow up to be more about ‘ignorance management’, an insatiable thirst for new knowledge. Humility about our own ignorance, and curiosity, might lead us to unravel ever sharper questions, on the dialectical and critical thinking path, rather than the off-the-shelf (and upscaling-friendly) answers we tend to favour in the development sector. What matters here is the development of shared meaning.

A collective sensemaking framework (by Charles Dhewa)

As highlighted in the previous conversation, not every step of a development initiative requires multi-stakeholder participation, but a useful principle for inviting participation and engagement is iteration. By revisiting our assumptions at regular intervals, together with various actors, we can perhaps more easily ensure that some key elements of the bigger picture are not thrown away in the process. This comes back to the idea of assessing the level of complexity we are facing, which is certainly affected by a) the number of people that are affected by (or have a crucial stake in) the initiative at hand and b) the degree of inter-relatedness of the changes that affect and connect them.

Iteration and multi-stakeholder engagement and participation are at the heart of the ‘inception phase’ approach. This is only one model along a spectrum of participation and non-linear planning:

  • On one end of the spectrum, a fully planned process with no room for (meaningful) engagement because the pathway traced is not up for renegotiation;
  • Somewhere in the middle, a project approach using an inception period to renegotiate the objectives, reassess the context and understand the motivations of the stakeholders;
  • At the other end of the spectrum, a totally emergent approach where one keeps organising new processes as they show up along the way, renegotiating with a variety of actors.

Seed money helps here for ‘safe-fail’ approaches: trying things out, drawing early lessons and perhaps then properly budgeting for activities that expand the seed initiative. Examples from the corporate sector also give away some interesting pointers and approaches (see Mintzberg’s books and the strategy safari under ‘related resources’). The blog post by Robert Chambers on ‘whose paradigm counts’, and his stark comparison between positivist and adaptive pluralism perspectives, are also very helpful resources to map out the issues we are facing here.

Adaptive pluralism – a useful map to navigate complexity? (Credits: Robert Chambers)

At any rate, and this can never be emphasised enough, in complex environments – as is the case in development work more often than not – a solid context analysis is in order if one is to hope for any valuable result, in the short or long run.

Related resources:

These have been our musings on day 1, perhaps not ground-breaking observations but pieces of an IKM-E collage that brings together important pointers to the legacy of IKM-Emergent. Day 2 is promising…

Related blog posts:

Communication, KM, monitoring, learning – The happy families of engagement


Many people seem to be struggling to understand the differences between communication, knowledge management, monitoring, learning etc.

Finding the happy families (Photo: 1st art gallery)

Let’s consider that all of them are part of a vast family – the ‘engagement’ family. Oh, let’s be clear: engagement can happen in many other ways, but for the sake of simplicity let’s focus on these four and say that all of these family members have in common the desire – or necessity – to engage people with one another, to socialise, for one reason or another. And let’s try to unpack this complex family tree, to discover the happy families of engagement.

The engagement family is big; it contains different branches and various members in each of them. The main branches are roughly Communication (Comms), Knowledge management (KM) and Monitoring & Evaluation (M&E).

Communicating

The comms branch is large and old. Among the many siblings, the most prominent ones are perhaps Public Relations and Marketing. They used to be the only ones around in that branch, for a time that seems endless. All members of this branch like to talk about messages, though their horizon has been expanding to other concepts and approaches, of late.

  • Public relations has always made the point that it’s all about how you come across to other folks and enjoys very much the sheen and the idea of looking smart. But some accuse him of being quite superficial and a little too self-centred.
  • His older sibling marketing has adopted a more subtle approach. Marketing loves to drag people into a friendly conversation, make them feel at ease and get them to do things that perhaps they didn’t want in the first place. Marketing impresses everyone in the family with his results, but he has also upset quite a few people in the past. He doesn’t always care about that, as he thinks he can always find new friends – or victims.
  • Another of their siblings has been around for a while too: advocacy is very vocal and always comes up with a serious message. Some of his family members would like him to adopt a less aggressive approach. Advocacy’s not silly though, so he’s been observing how his brother marketing operates and he’s getting increasingly subtle, but his image remains very much attached to that of an ‘angry and hungry revolutionary loudmouth’.
  • Their sister communication is just as chatty but she stays a bit behind the scenes. Communication doesn’t care about promoting her family, selling its treasures or claiming a message; she just wants people to engage with one another, in and out of the family. She is everywhere. In a way she might be the mother of this branch.
  • Their youngest sister, internal communication, has been increasingly present over the past few years and she really cares for what happens among all members of her family. She wants people to know about each other and to work together better. She has been getting closer and closer to the second main branch of the engagement family tree: knowledge management, but she differs from that branch in focusing on the internal side of things only.
Knowledge management

The Knowledge management branch also comprises many different members and is in some ways very heterogeneous. This branch doesn’t care so much for messages as for (strategic) information and conversations. For them it’s all about how you can use information and communication to improve your approach.

  • The old uncle is information management. He has been around for a while and he still is a pillar of the family. He collects and organises all kinds of documents, publications and reports, and puts them neatly on shelves and online in ways that help people find information. His brothers and sisters mock his focus on information: without people engaging with it, information does little.
  • His younger sister knowledge sharing was long overshadowed in the KM branch but she’s been sticking her head out a lot more, taking credit for the more human face of the KM branch. She wants people to share, share and share, engage and engage. She’s very close to her cousin Communication from the Comms branch, but what she really wants is to get people to get their knowledge out and about, to mingle with one another. She has close ties with her colourful cousins facilitation, storytelling and a few more.
  • They have another brother called ‘organisational learning’, who was very active for a while. He wanted everyone to follow him and his principles, but he has lost a lot of visibility and momentum over the years as many people found out that the path he showed was not as straightforward as he claimed;
  • The little brother PKM (personal knowledge management) was not taken seriously for a long time, but he is really a whiz kid and has given a lot of people confidence that perhaps his branch of the family is better off betting on him, at least partly. He says that every one of us can do much to improve the way we keep our expertise sharp and connect with kindred spirits. To persuade his peeps, PKM often calls upon his friends from social media and social networks (though these fellas are in demand by most family members mentioned above).
  • A very smart cousin of the KM branch, innovation, is marching up to the limelight. She’s drop-dead gorgeous and keeps changing, never settling on one facet of her identity. Her beauty, class and obvious common sense strike everyone who sees her, but she disappears quickly if she’s not entertained. In fact, many in the KM family would like to get her on their side but she remains elusive. Perhaps if many family members got together they would manage to keep her at their side.
Monitoring

The M&E branch has always been the odd group out. They are collectors and reporters. Throughout their history they have mostly focused on indicators, reports, promises made, results and lessons learnt. Other family members consider this branch to be little fun and very procedural, even though of late they have bent their approach – but not everyone around seems to have realised that.

  • Planning is not the oldest but perhaps the most responsible one of this branch. He tries to coordinate his family in a concerted manner. But he is also quite idealistic and sometimes tends to ignore his siblings and stick to his own ideas, for better or (usually) for worse. Still, he should be praised for his efforts to give some direction, and he does so very well when he brings people to work with him;
  • Reporting, the formal eldest brother, is perhaps the least likely to change soon. He takes his job very seriously and indeed talks to all kinds of important people. He really expects everyone to work with him, as requested by those important contacts of his. He doesn’t always realise that pretty much everyone considers him rather stuffy and old-fashioned, but he knows – and they sometimes forget – that he matters a lot as a connector between this whole funky family and the wider world.
  • Data collection is the next sister, one who tends to wander everywhere; she lacks a sense of prioritisation, which is why planning really has to keep an eye on her. She is indeed very good at collecting a lot of stuff, but she doesn’t always help her siblings make sense of it. Everyone in the family agrees she has an important role to play, but they don’t quite know how.
  • Her other sister reflection is therefore always lagging behind, absorbing what data collection brought forward and making sense of it. She is supposedly very astute, but occasionally she does her job too quickly and misses crucial lessons or patterns. Or perhaps she’s overwhelmed by what data collection brought to her and she settles for comfort. But she usually has great ideas.
  • They have a young sister called process documentation. She’s a bit obscure to her own kin, but she seems to have built a nice rapport with the other branches of the wider family and seems more agile than her own brothers and sisters. She goes around and observes what’s going on, picking up the bizarre and unexpected, the details of how people do things and how it helps their wider work.
Learning is patient

The wise godmother (1) of them all is learning. Learning generously brings her good advice to all her family, for them to improve over time. She wants her Comms branch offspring to engage in ways that benefit everyone; she encourages their KM siblings to nurture more genuine and deeper conversations that lead to more profound insights and more effective activities; she invites the sidetracked M&E branch to find their place, not to be obtuse, and to use their sharp wits to bring common benefits and help understand what is going well or not, and why. More than anything, she encourages all her godchildren to get along with one another, because she sees a lot of potential for them to join hands and play together.

Learning could do it all on her own but she prefers to socialise – she loves socialising in fact – and that’s how she keeps on top of the game and keeps bringing the light over to other parts of the family. It’s not an easy game for her to bring all her flock to play together. There are a lot of strong egos in there, but she is patient and versatile, and she knows that eventually people will come to seek her wisdom…

Do you recognise your work in those happy families? Who am I missing and where in the tree should they fit?

Related posts:

A simple KM and communication strategy… with double focus on the context


A lot of KM strategies end up in the dustbin. Or in the cemetery of good ideas that never took off. There are many reasons for that, explored and explained ad infinitum in the KM world.

I’d like to zoom in on two of them though:

West Africa Water Initiative

  1. From the inside, the KM strategy may be disconnected from the organisational context, either because it does not follow the overall objectives of the organisation/initiative or because it is formulated in complex technical jargon, making it sound like an (unjustified) import. There’s nothing worse for employees than to feel that someone who doesn’t understand them is trying to shove a strategy, a procedure or a system down their throat.
  2. From the outside, it may be disconnected from the local context in which the initiative or organisation is operating. In this case, the initiative may be well thought through, but it will slide on the surface and fall as quickly as someone wearing normal trainers on an indoor soccer field.

This is why, for an assignment on behalf of the West Africa Water Initiative (WAWI), supported by USAID, a colleague and I proposed a KM and communication strategy that is rather practical and really takes into account the context of the initiative itself and, crucially, the local context and practices at play in that environment.

The strategy we propose basically looks into two main sets of activities and support activities: the main activities are information management (generating, managing and versioning information) and knowledge sharing (face-to-face and virtually, process documenting dialogues, aggregating content). Support activities include: raising visibility for the initiative; working on improving internal KM and communication; developing capacities for all of these activities; linking meaningfully with monitoring and evaluation.

Have a look at this strategy there and let me/us know what you think: http://www.community-of-knowledge.de/beitrag/knowledge-management-and-communication-strategy/ (this is the link to the strategy as it will be published in a journal soon. You can also find the strategy on IRC’s website: http://www.irc.nl/page/62673).

Related posts:

What the *tweet* do we know (about monitoring/assessing KM)?


This Tuesday I moderated my first ever Twitter chat, thanks to the opportunity provided by KMers (as mentioned in a recent blog post). A very rich and at times overwhelming experience in terms of moderating – more on this in the process notes at the bottom of this post.

KMers provides a great opportunity to host Twitter chats! Tweet on! (photo credits: ~ilse)

The broad topic was about ‘monitoring / assessing KM’ and I had prepared four questions to prompt Tweeters to engage with the topic:

  1. What do you see as the biggest challenge in monitoring KM at the moment?
  2. Who to involve and who to convince when monitoring KM?
  3. What have been useful tools and approaches to monitor KM initiatives?
  4. Where is M&E of KM headed? What are the most promising trends (hot issues) on the horizon?

After a couple of minutes at the beginning, waiting for all participants to join, we started listing a number of key challenges in terms of monitoring/assessing KM:

  • Understanding what we are trying to assess and how we qualify success – and jointly agreeing on this from originally different perspectives and interests;
  • The disconnect between monitoring and the overall strategy and perhaps its corollary of (wrongly) obsessing on KM rather than on the contribution of KM to overall objectives;
  • The crucial problem of contribution / attribution of KM: how can we show that KM has played a role when we are dealing with behaviour changes and improved personal/organisational/inter-institutional effectiveness?;
  • The dichotomy between what was described as ‘positive’ monitoring (learning how we are doing) and ‘negative’ monitoring (about censoring and controlling peoples’ activities);
  • The occasional hobby horses of management and donors to benchmark KM, social media, M&E of KM etc.;
  • The problem of focusing on either quantitative data (a short-sighted way of assessing KM – “Most quantitative measures are arbitrary and abstract. …adoption rate doesn’t really equate to value generation” – Jeff Hester) or qualitative data (leaving a vague feeling and a risk of subjective biases);
  • The challenge of demonstrating the added value of KM;
  • The much needed leadership buy-in, which can make or break assessment activities.

The challenges were also felt as opportunities to ‘reverse engineer successful projects and see where KM played a role and start a model’.

An interesting perspective from Mark Neff – that I share – was about monitoring from the community perspective, not from that of the business/organisation.

This last issue hinted at the second part of the chat, which was dedicated to what turned out to be a crux of the discussion: who do you need to involve and who to convince (about the value of KM) when monitoring KM.

Who to involve? Customers / beneficiaries, communities (for their capacity to help connect), even non-aligned communities, users / providers and sponsors of KM, employees (and their capacity to vote with their feet). Working in teams was suggested (by Boris Pluskowski) as a useful way to get knowledge to flow which eventually helps the business then.

Who to convince? Sponsors/donors (holding the purse strings) and leaders (who, unlike managers, are not convinced by measurement but rather by outputs and systems thinking).

What is the purpose of your monitoring activities? Management? Business? Productivity? Reuse? Learning? Application? Membership? Mark Neff rated them as all interesting (another challenge there: choose!). Rob Swanwick made the interesting point of measuring within each unit and having KM (and social media at that) mainstreamed in each unit, rather than dedicated to a small group.

Raj Datta shared his interesting perspective that it is key to explore and expand from the work of communities that are not aligned with business objectives.

The third part continued with some tools and approaches used to assess KM.

The key question came back: what are we looking at? Increasing profits, sales and the engagement of customers? Participation in CoPs? Answers provided within 48 hours? Adoption rates (with the related issue of de-adoption of something else, as Rob Swanwick pointed out)? Project profile contributions? Percentage of re-use in new projects? Stan Garfield suggested setting three goals and measuring progress on each (as described in his masterclass paper about identifying objectives). Mark Neff also stressed that it all depends on the level of maturity of your KM journey: better to build a case when you begin with KM, and to look at implementing something or at the adoption rate when you’re a bit more advanced… At his stage, the man himself sees “efforts to measure the value we provide to clients and hope to extend that to measures of value they provide”.

And storytelling wins again! The most universal and memorable way to share knowledge? (photo credits: Kodomut)

In spite of these blue-sky considerations, the KMers’ group nonetheless offered various perspectives and experiences with tools and approaches: social network analysis (to measure community interaction), Collison and Parcell’s knowledge-sharing self-assessment, outcome mapping (to assess behaviour change), comparative analysis (of call centre agents using the KM system or not), and a mix of IT tools and face-to-face interaction to create conversations.

But what really stole the show were success stories. Jeff Hester mentioned that “they put the abstract into concrete terms that everyone can relate to”. Stories could also take the form of testimonials and thank-you messages extracted from threaded discussions. At any rate, they complement other measurements, they sell, and they are memorable.

Rob Swanwick pondered: “Should stories be enough to convince leaders?” Roxana Samii suggested that “leaders wil be convinced if they hear the story from their peers or if they really believe in the value of KM – no lip service”, and Boris Pluskowski finished this thread with a dose of scepticism, doubting that leaders would find stories enough to be convinced. In that respect, Mark Neff recommended assessing activities on our own and leading by example, even without the approval of managers or leaders, because they might not be convinced by stories or even numbers.

Of course the discussion bounced off to other dimensions, starting with the gaming issue – a new term to me, but indeed: how to reduce biases induced by the expectations of the people who are either monitoring or being monitored? And should we hide the measurements to avoid gaming (“security by obscurity”, as mentioned by Lee Romero), or should we instead explain them and reveal some of the variables to get buy-in and confidence, as suggested by Raj Datta, with the transparency that is important for authentic behaviours, as professed by Mark Neff?

Finally, the question about where M&E of KM is headed (the fourth part) didn’t really take off, in spite of some propositions:

A Twitter chat can also mean a lot of tweets running in parallel (photo credits: petesimon)

  • Focusing more on activities and flows in place of explicit knowledge stock (Raj Datta)
  • Mobile buzzing for permanent monitoring (Peter Bury)
  • Some sort of measurement for all projects to determine success (Boris Pluskowski)
  • Providing more ways for users to provide direct feedback (e.g., through recommendations, interactions, tagging, etc.) (Stan Garfield)

After these initial efforts, the group instead happily continued discussing the gaming issue, coming to the conclusion that a) most KMers present seemed to favour a transparent system rather than a hidden one that aims at preventing gaming, and b) gaming can also encourage (positive) behaviours that reveal the flaws of the system and can be useful in that respect (e.g. Mark’s example: “people were rushing through calls to get their numbers up. People weren’t happy. Changed to number of satisfied customers.”).

With the arrival of V Mary Abraham, the thorny question of KM metrics was revived: how to prove the positive value of KM? Raj Datta nailed an earlier point by mentioning that anyway “some quantitative (right measures at right time in KM rollout) and qualitative, some subjective is good mix”. On the question raised by V Mary Abraham he also offered his perspective of simplicity: “take traditional known measures – and show how they improve through correlation with KM activity measures”. This seemed to echo an earlier comment by Rob Swanwick: “Guy at Bellevue Univ has been doing work to try to isolate ROI benefits from learning. Could be applied to general KM”.

In the meantime, Mark Neff mentioned that to him customer delight was an essential measure, and other tweeters suggested that this could be assessed by looking at shared enthusiasm and at returning and multiplying customers (through word of mouth with friends).

Boris Pluskowski also pushed the debate towards innovation, as an easier way to show the value of intangibles than KM. V Mary Abraham approved, saying “Collab Innov draws on KM principles, but ends up with more solid value delivery to the org”. To which Raj Datta replied: “to me KM is about collaboration and innovation – through highly social means, supported by technology”. And the initial tweeter on this thread went on about the advantages of innovation as being, at heart, a problem-solving exercise, with a before and an after/result – and it is possible to measure results. V Mary Abraham: “So #KM should focus on problem-solving. Have a baseline (for before) and measure results after”, because solving problems buys trust. But next to short-term problem-solving, Mark Neff also pointed at the other face of the coin, long-term capacity building: “Focus people on real solutioning and it will help focus their efforts. Expose them to different techniques so they can build longterm”.

And in parallel, on the eternal problem of proving the value of KM, Raj Datta (correctly) stated: “exact attribution is like alchemy anyway – consumers of data need to be mature”.

It was already well past the chat closing time and after a handful of final tweets, this first KMers’ page of monitoring/assessing KM was turned.

At any rate, it was a useful and fresh experience to moderate this chat and I hope to have another go, probably in April and probably on a subset of issues related to this vast topic. So watch the KMers’ space: http://www.kmers.org/chatevents!

Process Notes:

As mentioned earlier in this post, moderating the Twitter chat was a rather uncanny experience. With the machine-gun speed of our group of 25 or so tweeters, facilitating, synthesising/reformulating and answering others as a participant, all at once, was hectic – and I’m a fast touch typist!

This is how I felt sometimes during the moderation: banging too many instruments at the same time (photo credits: rod_bv)

But beyond the mundane, I think what struck me was this: the KMers’ group is a very diverse gang of folks from various walks of life, from the US or the rest of the world, from the business perspective or the development cooperation side. This has major implications for the wording each of us uses – which may not be a given (such as this gaming issue that tripped me up at first) – but also for the kind of approaches we seem to favour, the people we see as the main stumbling block or, on the other hand, the champions we see as aspirational forces, and the type of challenges we are facing… More on this in a later post.

Finally, there is the back-office side of organising such a Twitter event: preparing/framing the discussion, inviting people to check out your framing post, preparing a list of relevant links to share, sharing the correct chat link when the event starts (and sending related instructions for new tweeters), but also generating the full chat transcript (using http://wthashtag.com/KMers, thank you @Swanwick ;)), all the way down to this blog post and the infographic summary that I’m still planning to prepare… It’s a whole lot of work, but an exciting one – and as web 2.0 follows a ‘share the love / pay it forward’ mentality, why not give it back to the community out there? This was my first attempt, and I hope many more will follow…

Related blog posts (thank you Christian Kreutz for giving me this idea):

The full transcript for this KMers twitter chat is available here.

(Im)Proving the value of knowledge work: A KMers chat on monitoring / assessing knowledge management


KMers chat on 16/02/2010 on monitoring/assessing KM

On 16 February 2010, I will be hosting a KMers chat about the topic of ‘monitoring / assessing knowledge management’ (1).

When Johan Lammers (one of the founders of KMers and of WeKnowMore.org) invited KMers (the people, not the platform) to host a discussion, I jumped at the occasion. It’s new, it’s fresh, it’s fun, it’s useful: what else can you dream of? And talking about useful discussions, it fitted my work on monitoring knowledge management very well.

So here you go, if you are interested, this is the pitch for this KMers chat:

Knowledge management is ill-defined but, even more crucially, ill-assessed. The inaccuracy and inadequacy of monitoring (2) approaches to KM have left behind a trail of tensions, heated debates, frustrations and disillusions. Differing perspectives on the value of KM and on ways to conduct monitoring have further entrenched these reactions.

How to reconcile the expectations of managers/donors on the one hand, and of teams in charge of monitoring knowledge management and clients/beneficiaries on the other? How to combine passion for, and belief in, knowledge-focused work with business realism and sound management practice?

What approaches, methods, tools and metrics seem to provide a useful perspective on monitoring the intangible assets that KM claims to cherish (and/or manage)? What are promising trends and upcoming hot issues that could turn the monitoring of KM into a powerful practice, to prove the value of knowledge management and to improve KM initiatives?

Join this Twitter chat to hear the buzz and share your perspective…

In this particular KMers chat we will grapple with four key questions, i.e.:

  1. What do you see as the biggest challenge in monitoring KM at the moment?
  2. Who to involve and who to convince when monitoring KM?
  3. What have been useful tools and approaches to monitor KM initiatives?
  4. Where is M&E of KM headed? What are the most promising trends (hot issues) on the horizon?

This discussion ties in closely with a couple of posts on this topic on this blog (see for instance this and that post) and three on IKM-Emergent programme’s The Giraffe blog (see 1, 2 and 3). Simon Hearn, Valerie Brown, Harry Jones and I are on the case.

Back to this KMers chat, here is an overview of some of the issues at stake, as I see them:

Fig. 1 The starting model we are using for monitoring KM (credits: S. Hearn)

  • KM is not well defined and the very idea of ‘monitoring’ knowledge (related to the M in KM) is fallacious – this is partly covered in this post. What does this mean in terms of priorities defined behind a KM approach? What is the epistemology (knowledge system) guiding KM work in a given context?
  • KM is often monitored or assessed from the perspective of using intangible assets to create value. Is this the real deal? Perhaps monitoring should look at several dimensions: knowledge processes and initiatives (inputs & activities), intangible assets (outputs), behaviour changes and ultimately valuable results (outcomes and impact). See fig. 1 for a representation of this model.
  • Within this model, where should we monitor/assess knowledge, knowledge management, knowledge sharing and possibly all knowledge-focused processes – along the knowledge value chain or some other reference system?
  • Monitoring is itself a contested practice, sometimes reduced to simple ‘progress monitoring’, i.e. establishing the difference between the original plan and reality in order to prove whether the plan has been accomplished. Where is the learning in this? What is more valuable: to prove or to improve? And should monitoring of KM not arguably look at other valuable monitoring purposes, such as capacity strengthening, self-auditing for transparency, sensitisation and advocacy (3)?
  • Given the different epistemologies and ontologies (world views), isn’t it sensible to explore the different knowledge communities (see slide 8 of Valerie Brown’s presentation on collective social learning) and the expectations of the parties involved in monitoring/assessing KM? After all, the monitoring commissioner, implementer and ultimate beneficiary (client) may hold totally different viewpoints on the why, what and how of monitoring KM.
  • If we accept that monitoring moves beyond simple progress monitoring and does not simply rest on SMART indicators and a shopping basket of meaningless numbers, what are useful approaches – both quantitative and qualitative – that can help us understand the four dimensions of KM monitoring mentioned above, with due consideration for the context of our knowledge activities?
  • And finally, what can we expect the future pointers of this discussion to be? I am thinking here in terms of broadening the conceptual debate, exploring promising new approaches (such as the semantic web and its possibilities for mapping contextualised information, Dave Snowden’s SenseMaker, or Rick Davies’s recent work building on his Most Significant Change method), and developing a more practical approach to making sense of knowledge and supporting the work of KMers (us), our patrons, our partners and our beneficiaries / clients.
  • Do you have case studies or stories about the issues sketched above?

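Purely as an illustration, the chain in fig. 1 can be sketched as a simple data structure. The class and example entries below are my own (hypothetical) naming, not part of the model itself:

```python
from dataclasses import dataclass, field

@dataclass
class KMDimension:
    """One link in the KM monitoring chain of fig. 1."""
    name: str
    examples: list = field(default_factory=list)

# The four dimensions, in causal order: inputs & activities -> outputs
# -> behaviour changes -> outcomes and impact.
KM_MONITORING_CHAIN = [
    KMDimension("knowledge processes and initiatives (inputs & activities)",
                ["communities of practice", "after-action reviews"]),
    KMDimension("intangible assets (outputs)",
                ["documented lessons", "strengthened networks"]),
    KMDimension("behaviour changes",
                ["teams reusing lessons in planning"]),
    KMDimension("valuable results (outcomes and impact)",
                ["improved programme delivery"]),
]

for dimension in KM_MONITORING_CHAIN:
    print(dimension.name)
```

The point of laying it out this way is simply that each dimension calls for its own indicators, so a monitoring question can be located at one specific link rather than asked of “KM” as an undifferentiated whole.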
Hopefully, further down the line, we will have a clearer idea of how to turn what is too often a costly and tiresome exercise into an exciting opportunity to prove the value of knowledge-focused work and to improve our practices around it…

If you are interested in this topic or want to find out more about KMers’ chats, please check in on 16 February and join the chat; oh, and spread the word!

Notes:

(1)    KMers is an initiative that started in late 2009 and has already generated a few excellent discussions (the latest was about knowledge for innovation), usually hosted on Tuesdays around 18:00 CET (Central European Time). The chats are Twitter-based and always involve a group of dedicated KM heads who are really passionate and savvy about the broad topic of knowledge management.

(2)    By monitoring we mean here the ‘follow-up of the implementation of programme activities AND periodic assessment of the relevance, performance, efficiency and impact of a piece of work with respect to its stated objectives’, as regularly carried out in the development sector. In this respect we include the purposes of evaluation under monitoring as well. In the corporate world I guess you would translate this as regular assessment. Monitoring / assessment may happen by means of measurement and other methods.

(3)    A forthcoming IKM-E paper by Joitske Hulsebosch, Sibrenne Wagenaar and Mark Turpin refers to the nine different purposes for monitoring that Irene Guijt proposed in her PhD thesis ‘Seeking Surprise’ (2008). These purposes are: financial accountability, operational improvement, strategic readjustment, capacity strengthening, contextual understanding, deepening understanding, self-auditing, advocacy and sensitisation.

Related blogposts: