Opportunity costs of documentation and how to make it work…


In my book, documentation is an essential part of KM work.

Documentation – do you read it (Credits: Matt Ray / FlickR)

Not everyone agrees. Someone who works a lot with Liberating Structures recently told me he didn’t necessarily see the point of harvesting anything, because the people who were ‘doing the work’ would remember.

But then there’s always the point of documenting for the sake of the people who are not ‘doing the work’ there and then. Keeping traces so others can pick up the trail and use it in ways that help them.

However, the question always remains: what should you document (e.g. what went well in a project), and how much should you invest in documenting it – and how – versus setting up processes that directly connect people with relevant experience?

This is the eternal debate of documentation vs. one-on-one experience sharing, of Alexandrian libraries vs. campfires – something that is currently being debated on KM4Dev around the title “How Elon Musk can tell if job applicants are lying about their experience” (link pending on membership).

Yes, Alexandrian libraries are only a partial solution because they don’t convey much of the complexity involved. And as Johannes Schunter pointed out recently on his blog, lessons learnt that generate bland statements are useless (the ‘Duh’ test).

And there is the issue that documentation takes time and effort. Not everything can be documented, everywhere, all the time, by everyone. It’s the same opportunity cost as for monitoring and evaluation (for which we can also adopt a somewhat agile approach).

Here are some ideas to identify what to document and how:

What to document?

  • What is new?
  • What is significant?
  • What’s been done about this already (in some form or shape)?
  • What is simple (and can be codified into principles or best practices)?
  • What is complicated (but can still follow good/next practice)?
  • What is complex and inter-related about this?
  • What is unknown?
  • What is helping us ask the next best questions?
  • Who knows more about this?
  • What could be useful next steps?

How to document?

  • Develop templates for documentation for e.g. case studies (link pending KM4Dev membership);
  • Keep it simple: as little information as needed to inform people, but linked sufficiently well to other sources;
  • Develop a collective system where people can add up their experiences and insights (e.g. the KS Toolkit) – make sure you have one place that people recognise as the go-to site for this information;

How to prepare that documentation work? And this is the most important part.

  • Stimulate your own documentation through blogging, note taking, keeping a diary etc. It always starts and ends at the individual level – as the constant knowledge gardeners we should be;
  • Make sure your documentation is related to conversations (as Jaap Pels also recommends in his KM framework) so that you develop an active habit of identifying what is worth documenting;
  • Make sure you have formal and informal spaces and times for these conversations to erupt, both at personal level with our personal learning networks, within teams, within organisations, across organisations (e.g. in networks) etc.;
  • Develop abilities for documentation (which is part of the modern knowledge worker’s skillset);
  • Develop a strong questioning approach where you are constantly working on foresight, trend watching, complex tradeoff assessments etc.;
  • Role model documentation of the important aspects emerging from learning conversations, to stimulate a culture of intelligent documentation;
  • Assess how your documentation makes sense and what is required – and this is the art and science of documentation, to strike the balance between time inputs and learning/productivity outcomes…
Documentation is an interesting KM opportunity for many people. See this ‘Documentation Maturity Model’ (Credits: Mark Fidelman / FlickR)

How do you approach documentation in your conversations?

Related blog posts:

How to navigate complexity in M&E and where KM can help


What’s happening in the world of monitoring and evaluation?

How is complexity – the thinking and the reality – affecting it, highlighting gaps and creating opportunities?

What can KM help to do about this?

Here are some of the questions that this Prezi addresses.

I’ll be presenting this in the course of next week at a retreat on M&E.

This is a draft, so please let me know what you think so I can improve this last minute, still before it is uploaded on official channels 😉

Related blog posts:

At the edges of knowledge work, the new beacons of ever-sharper collective intelligence


Modern knowledge workers don’t really exist. Not with all the highly desirable features we may want them to have. But breaking down what such a super human should do into distinct functions could be a good start to training us all at becoming better knowledge workers. I noted a few of these functions in the profile of a modern knowledge worker such as documenting conversations, filtering information etc. Yet these functions are dynamic and reinvent themselves, and new ones appear.

What are the next knowledge work super-hero functions? (credits – Photonquantique / FlickR)

These new functions are partly addressed already by agile knowledge workers, but perhaps not always with enough intent and consistency. While we may not recognise the following functions, they may become increasingly pertinent in the modern knowledge era, with the intention of mobilising collective knowledge as best we can, particularly around events (online or offline) that bring people together to strike up rich conversations:

Ex-post sense-maker 

An event that is documented properly leads to rich notes on e.g. a wiki, a Google document, a written report (or otherwise). This is great: any participant in such conversations – anyone at all, actually – can find and use these traces of conversations. But digital conversation notes are often TOO rich. Too long, too complex. A very useful extra mile for knowledge work would be to go through these notes and tease them out into useful bite-size chunks and compelling formats. An excellent example is the documentation of work done on ‘anticipating climate risks in the Sahel‘.

Memory connector (literature sifter)

This is the normal job of researchers: they dig through past documentation and build upon it. But they do it in a specific way, which is not always the most straightforward. So before any planned/structured conversation happens (or any event gets organised), having someone go through all the literature related to the issues at hand – summarising the key questions and issues raised around that field the last time around (picking up on the trail of ex-post sense-makers), the latest recommendations etc. – would add immense value to the conversations. It’s about mapping out the grid of our collective intelligence and building on it.

Too often we reinvent the wheel out of laziness or lack of awareness of related past conversations. The trick, again, is to package that preexisting information in ways that make it attractive to the audience that will be engaged. Cartoons? A short video? A Pecha Kucha presentation (see example below)? A list of documents annotated with humour? There are many ways to do this. So why do we so often fail at linking the past with the present?

Visualisation engineer

The documentation of conversations is more often than not done in a written format, or in the best of cases in a myriad of videos. This makes it hard for us to absorb and synthesise that information. So how about visualisation engineers: people who are able to prepare visual handouts as the conversations unfold, organise intelligent lists of contacts that make networking and connecting easier, sift through stats and present graphs in a radical and compelling way, and develop complex thoughts into an-image-is-worth-1000-words kinds of graphs and conceptual models?

Graphic recording – a whole palette of options before, during and after… (Credits – Susan Kelly)

There’s already a lot of graphic recording (see above) happening. I believe that in our Instagram-and-Pinterest culture we are only at the dawn of on-the-spot visual engineering. And this is perhaps not so much a function as an activity that should just occur more systematically.

Social network gardener

Perhaps this function is covered under any of the above. The idea is that someone really uses the information recorded and nuggets harvested to plant it/them in the right channels, networks and locations. Combined with the work of a visualisation engineer, this job allows targeted sending of compelling information to the right people.

Social media gardening – takes time but pays off! (Credits – j&tplaman / FlickR)

Social network gardening does take time, but really adds a lot of value to the exchange that happened in the first instance, because it contributes to a universal information base that can reduce the learning curve the next time a group of people are wondering about a similar set of issues. And it does so not just by making information available but also by connecting people, i.e. knowledge – so it’s much more dynamic. Of course a lot of modern knowledge workers are already doing this to some extent. The point is to add structure and intent to this, to maximise opportunities for interaction beyond the group of people already involved.

Interestingly, what all these functions have in common is that they combine conversations (knowledge sharing) with their documentation or processing (information management) before, during and after the conversations happen… Acting upon the conversations as they happen – the nexus of agile KM, don’t you think?

Related blog posts:

Assessing, measuring, monitoring knowledge (and KM): Taking stock


Been a while since I last properly ‘took stock’ of a specific topic in my knowledge garden – the last one was about storytelling. But I’ve recently been working again on one of my pet topics, assessing knowledge work, so a good stock-taking exercise will be really handy for upcoming work – and hopefully for you too!

Some takes on M&E of KM via IKM-Emergent (Credits: Hearn, Hulsebosch & Talisayon)

Knowledge Management Impact Challenge (KMIC) work and related KM4D journal issue

In 2011, the US Agency for International Development (USAID) launched a KM impact challenge, inviting authors to submit entries explaining how KM could be effectively assessed. 45 different case studies were shared and reflected upon in a final report and a series of articles published in the Knowledge Management for Development Journal. These cases spanned a spectrum of KM interventions: capturing lessons, developing capacities, improving organisational performance, looking into learning events, the impact of communities of practice etc. Many common challenges to assessing KM, as well as practical recommendations and questions to move forward, are identified in this (to my knowledge) penultimate attempt at taking stock of assessing KM in development work.

Read the KM Impact Challenge final report or discover the Knowledge Management for Development Journal issue dedicated to the KMIC experiences (limited access, come back to me for specific articles).

Methods for measuring intangible assets

I first came across this resource in a blog post (itself worthwhile reading) from Gerald Meinert about ‘KM asks for value compensation‘. Karl-Erik Sveiby is one of the KM tycoons: he has been writing a lot of really good conceptual and practical pieces on KM, as a professor and as founder of Sveiby Knowledge Associates. Although this list of approaches to measuring intangible assets does not strictly focus on assessing KM, it is very useful to consider, as KM relates very much to intangibles. Sveiby distinguishes four families of methods to measure intangibles: direct intellectual capital methods, market capitalisation methods, return-on-assets methods and scorecard methods. He goes on to look into 42 different methods falling into these categories.

The merit of this work is to consider the valuation of knowledge capital in various ways. Perhaps not enough is said about how knowledge leads to other changes but that is covered by other methods and resources listed here.

See Sveiby’s methods for measuring intangible assets

Nick Milton’s series of quantified KM stories

Nick Milton, of Knoco Stories, is a prolific blogger on KM and totally should have been much higher on the top 100 KM influencers on Twitter. Among the many things Nick has been blogging about is a series of quantified success stories – 60 at the time of writing – which look at ways KM helped make or save money, drive the adoption of new practices, increase implementation speed and effectiveness, benchmark against other companies etc.

Have a look at these Knoco Stories ‘quantified success stories‘.

The use of indicators for the monitoring and evaluation of knowledge management and knowledge brokering in international development

This is the latest I’ve come across. Compiled by Philipp Grunewald and Walter Mansfield on the back of a KM4Dev Innovation Fund grant, this survey report came together with a workshop report, which in fact consists mainly of a list of 100 indicators to assess knowledge (management, sharing etc.). See the survey report or discover the top 100 indicators in the workshop report.

KM4Dev curated discussions on monitoring and assessing KM

Over time, various KM4Dev members have been asking about this perpetually reappearing conversation topic (and the reason why I consider M&E of KM one of the phoenixes of the KM field). Of all these discussions, ‘Monitoring and evaluating KM‘ and the more recent ‘Measuring knowledge sharing‘ are perhaps the most pertinent pointers, although other conversations helpfully addressed specific aspects related to e.g. after action reviews, partnerships, portals, conferences etc.

There is another one of these conversations happening on the KM4Dev mailing list as we speak. Feel free to join, and perhaps to help document the conversation; I may include it in this stock-taking post.

Check the KM4Dev wiki on M&E-focused discussions

The IKM-Emergent papers on monitoring and evaluation of knowledge management

Finally, I couldn’t ignore these two papers that the Information and Knowledge Management (IKM) Emergent project came up with, which I also co-authored:

The first paper takes stock of the major problems with assessing knowledge management in its various forms and how it is currently done. The second paper suggests an alternative approach: inviting a variety of people who have a stake in the evaluation of KM and collectively reflecting with them on what assessing KM could be and how it could add more value.

These papers – while in the making – were presented at one of the KMIC webinars:

I also have some more resources from my Delicious bookmarks which I’d like to share with you.

There must be many other key resources, reports and inputs and I would love to hear from you: What are your personal gems about assessing knowledge work? What resources and ideas have changed your view of this complex and uber-important aspect of our work in the field of KM?

And here I don’t provide a meta-analysis of all these resources, but this might be the next step in my own perpetually restarting journey in the territory of KM.

Related blog posts:

Social web metrics: between the cracks of evidence and confidence


Assessing knowledge work is back on my menu for this year and I need to start somewhere simple(r): social web metrics.

Rather than focus on the high end of monitoring/evaluation (M&E) of knowledge work, I’d like to look into current web metrics in use and to understand what they are really capturing, what they fail to capture or what problems they pose and what links them together or what we could do with them.

The social media analytics framework below proposes a good entry point to this exploration. I will come back at a later stage to such analytical frameworks.

A framework for Social Media Analytics (Gaurav Mishra)

The social web metrics we have at our disposal to assess knowledge work are related along a chain from attention to action (the famous [social] AIDA again) – or from content to collective intelligence, as suggested above. First comes discovering a particular resource; last comes using it, appreciating its use and being transformed (at scale) by it. Each resource, each piece of content, hopes to tick as many of these boxes as possible. With each metric comes a potentially useful insight, but also limitations… and mitigations.

Here is an overview of some objectives we might legitimately have with our content (which metrics try, mostly insufficiently, to capture).

Find me!

Sometimes we just bump into a site or resource on the web while looking for something else… A page view is a reflection of that. Page views can thus be intentional views (effectively linked to your content/focus) or accidental views (someone ends up on a web page after looking for a term that is only vaguely connected with that content). An instance of this: I suspect quite a few visitors to my blog are actually looking for information about the gay podcast ‘the feast of fools’ but end up on this blog post – no connection other than the name. Then again, people do come across your content for good reasons too.

Limitations and mitigation: Page views are thus not entirely helpful. Oh, and in case you didn’t know by now, a hit really is not a useful metric, even though it’s all about finding stuff online.

The remedy is to properly (meta-)tag your content, use descriptions on your photos, and add links to relevant related content. Linking is the currency of our time in search engine optimisation: the more you are linked to by others, the more likely people will find your content genuinely related to their focus, in relation to specific search terms.
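To make the intentional/accidental distinction concrete, here is a minimal sketch in Python. The function name, data structures and overlap threshold are all illustrative assumptions of mine, not a real analytics API: it simply labels a page view by whether the visitor’s search terms overlap the page’s tags.

```python
# Hypothetical sketch: separating intentional from accidental page views
# by checking whether the visitor's search terms overlap the page's tags.

def classify_view(search_terms, page_tags, min_overlap=1):
    """Label a page view 'intentional' if enough search terms match the tags."""
    overlap = set(t.lower() for t in search_terms) & set(t.lower() for t in page_tags)
    return "intentional" if len(overlap) >= min_overlap else "accidental"

# Two toy views: one genuinely related, one the 'feast of fools' case.
views = [
    (["knowledge", "management"], ["knowledge", "management", "documentation"]),
    (["feast", "of", "fools"], ["km", "blogging"]),
]
labels = [classify_view(terms, tags) for terms, tags in views]
# labels == ["intentional", "accidental"]
```

In practice the search terms would come from referrer data, which search engines increasingly withhold – which is precisely why good tagging and linking remain the safer remedy.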

Grab my attention!

Landing on a page or resource is the first step. Attracting our curiosity as online visitors is the second step, and it is not straightforward given our 8-second attention span. (Intentional) page views are still the main metric here. But so are retweets on Twitter.

Limitations and mitigation:

This is where a good title comes in handy (one of the many useful tips Ian Thorpe offers in sharing his blogging experience). But a retweet in particular doesn’t mean that the person re-tweeting the page/resource actually liked it… or even read it. These visitors just seemed to like your shop window’s look and feel! Make sure they like your content for the right reasons, beyond that sweet first impression. All ‘Find me!’ advice applies here too!

Like my content!

OK, now people have checked your content. And they enjoy it! They ‘like’ it. Or they +1 it, or they rate it… There are various ways to show appreciation for content. Perhaps the most valuable one is to comment on content and show appreciation that way. It’s useful feedback, provided it’s genuine.

Limitations and mitigation: The danger is that some people just ‘like’ because the like button is easy to push, with or without checking the content in the first place (see the shop window problem above). The other problem is that there is still no indication as to why they like your content (perhaps the tone, the image you chose, the serendipity effect that led them to your content at a moment when they were looking for something similar). Most liking metrics are only partly useful, unless a certain volume of these signals is aggregated throughout various collections and it starts indicating trends.

Focused comments, however, should be encouraged, as they help you find out why people liked your content and engage with your audience one step further…

Pass it on to others!

If people liked your content, perhaps they didn’t rate it (most people find giving feedback a daunting step) but they might have shared it with others. Metrics here include: links to your content, social shares (re-tweets are a case in point, but Facebook shares, Google+ shares and email shares are other examples), citations of your work etc. People might be sharing a link to your content or the full content (re-blogging is an indirect metric of sharing here).

Limitations and mitigation: The same danger of people sharing without having checked your content is still looming. But sharing content is generally a better indication of appreciation for your content, especially when it is shared in quantity and quality. Pay attention to who shares your content: trusted and valued sources are great indicators of its quality. I am not aware of tools that track the sharing of content with a specific breakdown of the popularity of sharing sources, but that would be useful.
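As a thought experiment, the kind of source breakdown wished for above could be sketched in a few lines of Python. The source handles and trust weights are entirely invented for the example; the point is simply that tallying who shares, and weighting shares by how trusted each source is, says more than a raw share count:

```python
from collections import Counter

# Illustrative share log: who shared the content (handles are invented).
shares = ["@km4dev", "@random_bot", "@km4dev", "@ianthorpe"]

# Invented trust weights per source; unknown sources get a neutral 0.5.
trust = {"@km4dev": 1.0, "@ianthorpe": 0.9, "@random_bot": 0.1}

counts = Counter(shares)  # raw breakdown: shares per source
weighted_quality = sum(trust.get(src, 0.5) * n for src, n in counts.items())
# counts["@km4dev"] == 2; weighted_quality == 1.0*2 + 0.1*1 + 0.9*1 == 3.0
```

Two bot shares and ten shares from valued peers produce the same raw count, but very different weighted scores – which is the distinction raw share metrics miss.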

Keep me for later!

People may keep track of your content for different reasons:

  • They haven’t read it yet but want to do so later when they find time for it;
  • They want to share it with others but haven’t gotten around to it;
  • They like it so much – or find it useful enough – that they want to collect and curate your content.

At any rate, they seem attracted to your content enough to keep it for later.

Metrics here include: Bookmarks, favourites, downloads etc. These are possibly good measures of some following for your content.

Limitations and mitigation: Two out of the three reasons above do not point to any particular appreciation. Resources could be put aside and never used again. Even when downloaded, their effective use depends on the discipline and willingness of the bookmarker to actually use his/her saved resources for another activity. Again, large numbers of these metrics can reveal useful trends, but individual measurements or isolated bookmarks remain only marginally useful.

(re-)Use me!

The objective of your content is to be used – and re-used. Directly or indirectly, now or later, as intended or otherwise, as direct inspiration or diffuse source of innovation. But this is very difficult to track. Only direct references in someone else’s work are (usually) straightforward indications that content is being used.

Readily available metrics thus include: reblogs, citations, and links in other important writings and works. Testimonies (e.g. stories of change and the like) are not a given in social media but are probably the best approach to hearing about the use of content. Indirectly, comments may play a similar role if they mention how the content is being applied elsewhere (as opposed to just reacting to the content itself).

Limitations and mitigation: It is very difficult to get such references and accounts of use – but from this point on it becomes really interesting and relevant. Aiming to collect such testimonies and developing a culture of feedback and critical reflection (e.g. by means of comments, ratings etc.) all contribute to getting better at – and closer to – collecting interesting results about the use of content.

Let me make a better you!

One of the best results we can hope for from any resource we develop is for it to contribute to changing behaviour. Using content doesn’t equate to change. Change is very elusive and difficult to assess, as it is an intimate matter which perhaps requires the person changing to realise that they are changing.

Among the metrics here, the most important ones are testimonies, and to a lesser extent comments (provided these comments relate to the usefulness and effect of the resource itself and how it was used, not just to its content). These are not available web metrics (yet?) and would more typically be part of process/outcome/impact monitoring efforts. But these results are worth tracking down.

Limitations and mitigation: As with the use of content, accounts of change brought about by resources are very diffuse and hard to collect, and even harder to attribute, unless mentioned explicitly in the testimonies. The same approach as for use applies here; it just goes one level deeper in the exploration.

Become a movement thanks to me

The ultimate goal of any resource is to be so seminal that it is referred to over and over again and tends to provoke a domino effect on the behaviour of many – what the Bible, the Qur’an or the Little Red Book achieved. Tough job…

Limitations and mitigation: Frankly, if you are at that stage, you should be blogging about this instead of me 😉 I can only say that radical innovation, locally nested word-of-mouth conversion effects, and tapping into the viral potential of some technologies and their disruptive nature might offer shorter paths to this holy grail.

In conclusion…

What is difficult is that there is no linear progression along these metrics. Furthermore, some of these metrics only become useful at a certain scale, or in combination with other metrics: e.g. only when various people have downloaded and favourited a resource can one tell that it probably has a transformative effect on people. The only way to get a relatively sure account of evidence is through testimonies – if they are truthful and only marginally biased.

The table below summarises some of the metrics available to suggest evidence of any impact of your content/resources.

  • Finding – direct metrics: page views, hits; indirect metrics: –
  • Liking – direct metrics: likes, +1’s, ratings, comments; indirect metrics: retweets and other social shares
  • Sharing – direct metrics: links, social shares, citations; indirect metrics: downloads, comments, reblogs
  • Keeping – direct metrics: downloads, bookmarks, favourites; indirect metrics: re-tweets, social shares, (some) social ratings
  • Using – direct metrics: citations, links, testimonies, reblogs; indirect metrics: comments
  • Being transformed by it – direct metrics: n/a; indirect metrics: comments, testimonies

All in all, what matters in these web metrics is a combination of: effective consumption of the content, appreciation of that content (its quality and relevance), intent to use it, effective use of it, the transformation brought about by that use, and the scale of that transformation.
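One hedged way to operationalise that combination is a toy weighted score over the attention-to-action chain. The stage names and weights below are my own illustrative assumptions, not an established metric; the only idea they encode is that later stages in the chain signal deeper engagement than earlier ones:

```python
# Toy composite score over the attention-to-action chain.
# Weights grow from 'finding' to 'transformed': one testimony of change
# counts for more than hundreds of accidental page views.
WEIGHTS = {
    "finding": 1, "liking": 2, "sharing": 3,
    "keeping": 4, "using": 8, "transformed": 16,
}

def engagement_score(metrics):
    """Sum stage counts weighted by how deep in the chain they sit."""
    return sum(WEIGHTS[stage] * count for stage, count in metrics.items())

# Invented numbers for one blog post:
post = {"finding": 500, "liking": 40, "sharing": 12,
        "keeping": 6, "using": 2, "transformed": 1}
score = engagement_score(post)
# 500*1 + 40*2 + 12*3 + 6*4 + 2*8 + 1*16 = 672
```

The exact weights are debatable – which is rather the point: making them explicit forces a conversation about what we actually value in our metrics.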

There are many tools to collect these. But the tools only address the collection part (your demand for it as content provider wishing feedback). What is more difficult is the supply of such evidence, and that comes only progressively with a culture of feedback and critical inquiry… Until that culture is there, we always navigate between the cracks of evidence and personal confidence.

Related blog posts:

Back on monitoring learning, from social media to impact


Second attempt to review some of the work done recently in the communication and knowledge management workshop (for CGIAR research programmes).

Another building block session was about the monitoring and evaluation of KM and communication. The group of participants was very interesting: a mix of researchers who are interested in monitoring participatory work, monitoring & evaluation (M&E) folks who know that impact assessment is the part that leads M&E and interests donors and organisations most, and finally comms/KM folks who usually monitor web stats and social media measures of influence, if anything at all.

Monitoring learning is about connecting knowledge dots, from social media ‘signals’ to evidence of impact (Credits – Dkuropatwa)

Usually these three categories of people do not mingle (so) much with one another – each evolving in their comfortable silo. As a result, M&E is usually not integrated and serves only the interests of one of these communities. So this session was another interesting attempt at bringing together the learning/monitoring brothers.

Here are a few reflections that came up in our conversations:

  • Monitoring communication and knowledge work is essential in complex initiatives where both documentation and engagement are necessary. How can we best do this? By ensuring that comms and knowledge are at the heart of impact/outcome assessment and M&E, looking at where information management (availability of scientific information, high standards of data and information), knowledge sharing (engagement, dissemination) and learning (personal KM and social learning etc.) can contribute to better impact.
  • There are a series of interesting monitoring areas that comms and knowledge work partly cover, which can be of help for wider impact assessment:
    • Reach (how information reaches intended or unintended beneficiaries)
    • Appreciation of the information sent (or appreciation for the fact of being kept updated)
    • Influence of that information on thinking, discourse, actions
    • Results of these influences: changes in policy, practice etc.
  • All these aspects both work internally and externally: We try to reach, influence etc. both inside our programs and organisations and outside.
  • While impact assessment on the one hand and social media monitoring on the other hand are approaching evaluation questions in a very different way, a simple bridge between comms and impact crowds is a major step forward: after conducting social media monitoring, getting back to the audience with a couple of deeper questions could reap useful deeper reflections. Similarly, when developing impact assessment baselines etc., paying attention to the contribution of simple communication activities, tools and approaches can also help reveal more of these crucial connections.
  • The approach of bringing multiple stakeholders together to negotiate intended outcomes (as we suggested in one of the IKM-Emergent research program papers on this topic) might be one step too far at this stage but I feel it will come back on the menu sooner than we think…

The CGIAR Aquatic Agricultural Systems research program is trying to move towards recognising the importance of a knowledge sharing and learning culture – both as a separate research strand, which is innovative in the CGIAR system, and as a whole approach that federates KM, communication, and monitoring and evaluation.

I will be working with some people from that program, from the recent workshop and from colleagues at my former organisation IRC as they are also looking into monitoring knowledge work. After the conceptual time of IKM-Emergent looking at these issues, I feel this might follow a rather pragmatic approach.

Yeeha! And here I come back on one of my favourite pet KM topics…

Related blog posts:

Paving the way for communication and knowledge management in the CGIAR research programs


The CGIAR organises a communication & KM workshop for its research programme

The CGIAR network of agricultural research centres has set up 15 ambitious research programmes (CGIAR research programs).

This week, a group of about 45 staff from those programs will be working on various aspects of knowledge management and communication for those research programs. This (repeat after me…) kmc4CRPs workshop will have participants focus on five main themes:

  • Internal communication
  • Knowledge sharing and learning
  • Co-creating knowledge locally (and getting research into local use)
  • Communicating for policy impact
  • Scaling up and out of research

Under these main themes, various ‘building block’ sessions will zoom in on specific aspects of the work, and hands-on tool sessions will offer practical guidance on using tools to support the building block activities.

At the end of the week the group hopes to have various useful insights and recommendations which can be applied in a somewhat better coordinated, more consistent way across those research programs.

This week promises to be rich so there might be quite a few blog posts coming out of this workshop (which I’m partly facilitating), including interviews from interesting people…

Keep watching this space, and feel free to channel your questions here too!

The wealth of communities of practice – pointers to assess networked value?


Building upon a.o. the CoP 4L model by Seely Brown, Wenger and Lave (credits: dcleesfo / FlickR)

The KM4Dev community of practice is going through an intensive action phase, beyond the conversations, as the grant given by the International Fund for Agricultural Development (IFAD) for the period 2012-2013 is leading to a number of interesting activities.

Among them is a learning and monitoring (L&M) plan which really focuses on learning from all other IFAD-funded activities, rather than on monitoring (in the sense of checking delivery of outputs against plans). The focus of our L&M plan is the networked development and value creation of a community of practice (CoP): how does it fare against governance principles, identity and level of activity, engagement channels, and learning/outcomes (which really focus on the most important value creation)?

I am involved in the learning and monitoring team and as part of it have started (with support from other L&M team members) developing the table below.

This table offers a suggested selection of ‘learning areas’ that in our eyes matter when working in communities of practice such as KM4Dev.

Governance

  • Transparency – Systematic sharing and accessibility of the results of conversations, decisions, initiatives, reification activities (see below) etc., including the selection process for core group members
  • Vision, values and principles – Development, existence, clarity, understanding and acceptance of a general vision, principles and values for the community of practice, by and for its members. Normally this is not really a ‘learning’ area, but if it isn’t in place it becomes one
  • Leadership – Demonstrated (and particularly accepted) leadership of the core group, and occasionally others, by other members of the KM4Dev community. Is there any dissonance between the two groups?
  • Mobilisation and commitment – See below. This is also mentioned under governance because people involved in the CoP governance have to mobilise resources and commit themselves to activities in a specific way

Identity and activity

  • Diversity and expansion – Profile of members of the community and the core group (language, region, type of organisation etc.); growth and expansion (frequency of new members, how etc.) and ties with external networks
  • Conversation – Frequency and quality of conversations around the domain (knowledge management for development) or the community (KM4Dev)
  • Reification – Tendency (quality and frequency) of the community to ‘reify’ conversations into tangible outputs, e.g. a blog post, wiki entry, journal article etc. Also has a bearing on learning and outcomes
  • Mobilisation and commitment – Capacity of core group members and other KM4Dev members to mobilise themselves, commit to activities (which activities? to what extent/degree of involvement?) and indeed deliver according to the plan and with strong learning. This also has a bearing on governance
  • Participation – Degree of participation of different members in conversations and other activities
  • Reflection – Evidence of social learning, development and sharing of new insights as part of activities (and results; this has a bearing on learning/outcomes)
  • Cohesion – Evidence that the relationship between members of the community is good and that everyone finds their place in the community while feeling part of a whole

(Learning and) Outcomes

  • Reification / outputs – See above. Production of outputs (quality/frequency?), planned or spontaneous
  • Reflection / changed thinking and discourse – See above. Evidence that reflections from the KM4Dev community have changed thinking and/or discourse among others, e.g. citations, semantic analysis
  • Inspiration / changed behaviour – Evidence of change as a new way to proceed, inspired by KM4Dev activities
  • Innovation / changed artefact or approach – Evidence of KM4Dev influencing the development of a new artefact or method, codified concretely
  • Impact – Evidence of larger changes (in autonomy of decision and well-being related to livelihood) where KM4Dev activities have inspired/influenced others within the community and particularly beyond. Caveat: attribution

KM4Dev engagement channels

  • Suitability for participation – The different KM4Dev channels (mailing list, wiki, Ning community, annual meetings) foster dialogue, engagement and learning
  • Ease of use / availability of KM4Dev outputs – The different channels are easy to use and complement each other; they make KM4Dev activity outputs visible and available
  • Identity – Governance of KM4Dev is clear in all engagement channels

This table and the plan it belongs to triggered a very rich discussion in the KM4Dev core group over the past couple of weeks. This conversation was meant to provide some initial reactions before opening it up more widely to the entire community. As we are about to embark on a much wider and open consultation process with the rest of the community, I thought it might be useful to post this here and see if any of you have suggestions or feedback on these learning areas…

At the IKM Table (2): individual agency vs. organisational remit, accountability and impact pathways for the future of IKM-Emergent


Day 2 of the final IKM workshop dedicated to ‘practice-based change’. As much as on day 1, there is a lot on the menu of this second day:

  • Individual agency vs. organisational remit;
  • Accountability;
  • Impact and change pathways;
  • A possible extension of the programme: IKM-2
Day 2 - the conversation and cross-thumping of ideas continues

On individual agency and organisational remit:

We are made of a complex set of imbricated identities and cultures that manifest themselves around us in relation with the other actors that we are engaging with. These complex layers of our personality may clash with the organisational remit that is sometimes our imposed ‘ball park’. Recognising complexity at this junction, and the degree of influence of individual agents is an important step forward to promote more meaningful and effective development.

Pressed for time, we did not talk a lot about this. Yet we identified a few drivers that have much resonance in development work:

  • Organisations don’t tweet, people do – and likewise, organisations do not trigger change, individual people do. Pete Cranston mentioned a study of three cases of critical change within Oxfam, all triggered by individuals: a manager with the power to change, an aspirational individual quickly building an alliance etc. Our impact pathways need to recognise the unmistakable contribution of individual ‘change agents’ (or positive deviants) in any specific process or generic model of social change. Individuals who are closely related to resource generation obviously have crucial leverage and play a special role in the constellation of agents that matter in the impact pathway;
  • We are obscured by our scale: in politics it took us a long time to realise there were crucial dynamics below nation-states and above them. In a similar swing, in development let’s go beyond merely the organisational scale to focus on individual agency as well as the network scale – all organisations and individuals are part of various networks, which affect both the individuals and the organisations engaged in them. Teams also play an important role in exploring and implementing new ways – it is at that level that trust is most actively built and activities planned and implemented. The riddles of impact from teams percolate, in sometimes mysterious ways, up to the organisational level;
  • These differences of scale tend to place subtle tensions on individuals between their personal perspectives and the organisational priorities. The multiple identities and knowledges (including local knowledge) are inherently in ourselves too, adding layers of complexity as the predominance of one identity layer over another plays out in relation to the other people around – see presentation by Valerie Brown.

On accountability:

Accountability is a central piece of the development puzzle, yet so far we have embedded it in too linear a fashion, usually upwards, to our funders. Accountability should also embrace the wider set of stakeholders concerned in development initiatives, including beneficiaries and peers, and find alternative ways to be recognised, acted upon and expressed.

The crux of our accountability discussion was around the tension to reconcile accountability with the full set of actors that we are interacting with in our development initiatives. The work carried out by CARE in Nepal (recently finished and soon to be uploaded on the page listing all IKM documents) is a testimony that accountability can and should be multi-faceted.

  • At the core of this conversation lies the question: whose value, whose change, whose accountability? We perhaps jump too quickly to the idea that we know which actor (or set of actors) has more value to bring and demonstrate, that their theory of change matters more than that of other actors, and that our accountability system should be geared towards their needs.
  • About theory of change, we already mentioned on day 1 that it is just a tool and any simple tool bears the potential of being used smartly (despite inherent technical limitations in the tool) as much as any complex tool can be used daftly (regardless of the inherent flexibility that it may have). However, the theory of change (of which one guide can be found here) can be quite powerful to ponder the key questions above. A collective theory of change is, however, even more powerful.
  • Perhaps a practical way forward with accountability is to identify, early on in a development initiative, whom we want to invite to map out the big picture of the initiative and the vision we wish to give it. The set of actors participating in this reflection would represent the set of actors to whom the initiative should be accountable. In the process, this consultation could reveal what we can safely promise to ‘deliver’ to whom, and what we can only try and unpack further. This might even lead to shaping a tree map of outcomes that are simple, complicated, complex or chaotic (thereby indicating the type of approach that might be more adequate).
  • More often, in practice, we end up with a theory of change (or a similar visioning exercise) that has been prepared by a small team without much consultation. This implies a much simpler accountability mechanism with no downward accountability, only upward accountability to the funding agency or the management of the initiative. This may also imply that the chances of developing local ownership – arguably a crucial prerequisite for sustainable results – are thereby much dimmer too.
  • Robin Vincent also noted that the peer accountability that pervades social media (Twitter, blogs) – recognising the validity and interest of a particular person – could be a crucial mechanism to incorporate, as a way of letting good content and insights come to the surface and enriching accountability mechanisms.

On impact and change pathways

The next discussion focused on the impact and change pathways of IKM-Emergent. Each member drew a picture of their reflections about the issue, whether specifically or generally, whether practically or theoretically, whether currently or in the future. We produced eight rich drawings (see gallery below) and discussed them briefly, simmering conclusive thoughts about impact and the influence that IKM-Emergent has or might have.

  • Impact happens at various scales – individual (for oneself and beyond), team, organisational and network level (at the intersections of our identities, relations and commitments) – and it follows various drivers, strategies, instruments and channels. Keeping that complex picture in mind guides our impact-seeking work.
  • Our impact is anyway dependent on larger political dynamics that affect a climate for change. The latter could become negative, implying that development initiatives should stop, or positive and leading to new definitions and norms;
  • In this picture, IKM seems to play a key role at a number of junctions: experimentation with development practices, network development, counter-evidence of broadly accepted development narratives, recognition of individual agency and its contribution to social movements, ‘navigating’ (or coping with) complexity and developing resilience, documenting case studies of how change happens, innovative approaches to planning and evaluation, and developing knowledge commons through collaboration;
  • And there certainly are lots of sympathetic agents currently working in funding agencies, international NGOs, social movements, the media as well as individual consultants. Collectively they can help;
  • The combination of public value, capacities and authorising environment are some of the signposts around IKM’s ball park;
  • IKM’s added value is around understanding the miracle that happens at the intersection between, on the one hand, interactions across many different actors and, on the other hand, systemic change at personal / organisational / discourse level. We can play a role by adding our approach, based on flexibility, integrity, activism and sense-making;
  • If we are to play that role of documenting the miracle and other pathways to change, we should remain realistic: We are led to believe or let ourselves believe that evidence-based decision-making is THE way to inform (development) policies and practices, when – in practice – we might follow more promising pathways through developing new knowledge metaphors, frames of development, preserving documentary records and interlinking knowledges;
  • There is also an element of balancing energy for the fights we pick: Impact and engagement with people that are not necessarily attuned to the principles, values and approaches of IKM-Emergent takes energy. But it matters a lot. So we might also interact with like-minded people and organisations to regain some of that energy.
  • Finally, there are lots of exchanges and interactions and great development initiatives already happening on the ground. The layer above that, where INGOs and donor agencies too often locate themselves, is too limited as such but our impact pathway is perhaps situated at the intersection between these two – how can we amplify good change happening on the ground?

On IKM-Emergent 2:

In the final part of the workshop, after an introduction by Sarah Cummings about where we are at, we surfaced key issues that will be important themes for the sequel programme suggested for IKM-Emergent (the so-called ‘IKM 2’). We briefly discussed a) practice-based change, b) local content and knowledge and c) communication and engagement.

On practice-based change: In this important strand, we debated the importance of the collective against the individual pieces of work – a challenging issue in IKM-1. Building a social movement and synthesising work are on the menu, although at the same time it is clear that each team or group of individuals working on independent pieces of work needs to find their breathing space and to some degree possibly detach themselves from the collective. IKM Emergent has been successful at unearthing rich research and insights thanks to the liberty left to each group to carve out its space. But the message is clear: connecting the dots helps bring everyone on board and picture the wider collage that an IKM-2 might collectively represent.

On local content and knowledge: In this equally important strand, language is key. So is the distortion of knowledge. We want to understand how localisation of information and technology may differ from one place to the next, we want to move on to ‘particular knowledges’, zooming in on specifics to draw on them. We want to further explore diverse ways of connecting with multiple knowledges through e.g. dancing, objects, non-ICT media. We want to better understand the dynamics of local social movements and knowledge processes and do that with the large African networks that we have been working with.

How is this all to unfold? By creating a network space that allows content aggregation, meetings online and offline, experimental research and production of artefacts, organising exhibitions and happenings and integrating social media.

On communication, monitoring and engagement: Paradoxically, and despite the efforts of the IKM management, this has been an area that could have been reinforced. A communication strategy came very late in the process, was somewhat disconnected from the work, and was message-based rather than focused on engagement and collective sense-making.

What could we do to improve this in IKM-2?

Further integrating communication and M&E, focusing on collective… conversations, engagement, reflection, learning and sense-making. And recognising that both communication and M&E are everyone’s business – even though we need someone (a team?) in the programme to ‘garden communication’, prune our networks (to keep interacting with relevant actors at the edges) and provide support to staff members and IKM partners to connect to the communication attire of IKM-2.

This implies that internally:

  • The success of communication depends also on the production of excellent content to engage people on and around. The constant exploration and openness to new opportunities that characterised much of IKM-1 should be maintained to ensure a wide diversity of mutually reinforcing sources of great reflection and conversation;
  • More conscious efforts are taken to distil key insights from ongoing work – even though we recognise the necessity of (a degree of) freedom and disconnect to develop good work;
  • Distilling those insights might benefit from strong process documentation (1), undertaken by a social reporter (2), supported by regular collective sense-making sessions where those key insights and ‘connecting points’ between work strands could be identified and analysed.
  • We aim at ‘quick and dirty’ (link to post) communication cycles to quickly churn out insights and discuss them, rather than waiting for long peer-review processes that slow communication down and reduce the timeliness (and relevance) of the work undertaken;
  • There is a strong need for consistent communication (supported by proper information and training for staff members to feel comfortable with the communication tools and processes) and robust information management (tagging and meta-tagging, long-term wiki management etc. – to be defined).

And externally it implies:

  • That we care for the growing community of conversation that we are having – as an overarching goal for our comms work;
  • That we use the insights to regularly engage a wider group by e.g. organising thematic discussions around emerging (sets of) pieces of work from IKM-2 and invite external actors to connect to and expand that body of work, possibly fund parts of it etc.
  • That we find innovative ways of relating content and ‘re-using’ it smartly, e.g. by writing ‘un-books’ with regular updates on the wiki, blogging, syndicating content via RSS feeds etc.;
  • That we use different communication tools and channels to engage with a multi-faceted audience, so that they find comfortable ways to interact with us and at the same time we titillate their curiosity to try out alternative modes of communication too. There are many relations between external communication and the ‘local content/knowledge’ strand with respect to alternative modes of communication that may not (re-)enforce Western modes and preferences for communication.

 

What now?

After two days of workshops and five years of collective work, we come out with an incredibly rich set of insights – of which this workshop is only the emerged tip of the iceberg – a wide collection of outputs (and more to come), a number of messages for various groups and a dedication to engage with them, on the basis of all the above, in an expanded programme. There is no funding yet for IKM-2 but with resources, ideas and ambitions, there may well be all the elements to set us on that way and find like-minded spirits to transform development practices. Impact pathways don’t need funding to work, we are on it, wanna join?

 

Notes:

(1) Process documentation is a soft monitoring approach including a mixture of tools and techniques to ensure that a given initiative’s theory of change is kept in check and questioned throughout its lifetime and ultimately leads to a set of lessons to inform similar initiatives in the future. It has been better described in this IRC publication: Documenting change, an introduction to process documentation.

(2) Social reporting is very close to process documentation although it is usually applied for specific events rather than long term processes. It is better explained in this ICT-KM blog post.

Related blog posts:

At the IKM table: linearity, participation, accountability and individual agency on the practice-based change menu (1)


On 20 and 21 February 2012, the London-based Wellcome Collection is the stage for the final workshop organised by the Information Knowledge Management Emergent (IKM-Emergent or ‘IKM-E’) programme. Ten IKM-E members are looking at the body of work completed in the past five years in this DGIS-funded research programme and trying to unpack four key themes that interweave insights from the three working groups which have been active in the programme:

  1. Linearity and predictability;
  2. Participation and engagement;
  3. Individual agency and organisational remit;
  4. Accountability

This very rich programme is also an intermediary step towards a suggested extension for the programme (“IKM 2”).

In this post I’m summarising quite a few of the issues tackled during the first day of the workshop, covering the first two points on the list above.

On linearity and predictability:

Linear approaches to development – suggesting that planning is a useful exercise to map out and follow a predictable causal series of events – are delusional and ineffective. We would be better advised to use emergent perspectives, as they are more realistic, for lack of being more certain.

Discussions of linearity and predictability put a strong emphasis on the current (and desired alternative) planning tools that we have at our disposal or are sometimes forced to use, and on the relationship that we entertain with the actors promoting these specific planning tools.

Planning tools

After trying out so many ineffective approaches for so long, it seems clear that aspirational intent might act as a crucial element to mitigate some of the negative effects of linearity and predictability. Planning tools can be seen as positivist, urging a fixed and causal course of events, indeed focusing on one highlighted path – as is too often the case with the practice around logical framework – or can have an aspirational nature, in which case they focus on the end destination or the objective hoped for and strive to test out the assumptions underlying a certain pathway to impact (at a certain time).

Different situations require different planning approaches. Following the Cynefin framework approach, we might be facing simple, complicated, complex or chaotic situations and we will not respond the same way to each of those. A complex social change process may require planning that entails regular or thorough consultation from various stakeholder groups, a (more) simple approach such as an inoculation campaign may just require ‘getting on with the job’ without a heavy consultation process.

At any rate, planning mechanisms are one thing, but the reality on the ground is often different; keeping a careful eye on co-creating reality on the ground is perhaps the best approach to ensure stronger and more realistic development, reflecting opportunities and embracing natural feedback mechanisms (the reality call).

There are strong power lobbies that might go against this intention. Against such remote-control mechanisms – sometimes following a tokenistic approach to participation while really hoarding discretionary decision-making power – we need distanced control checks and balances, hinting at accountability.

Managing the relationship leading to planning mechanisms

Planning tools are one side of the coin. The other side of the coin is the relationship that you maintain with the funding or managing agency that requires you to use these planning tools.

Although donor agencies might seem like ‘laggards’ in some ways, managing the relationship with them implies that we should not stigmatise their lack of flexibility and insufficient will to change. More optimistically, managing our relationship with them may also mean moving away from the contractual nature of the relations that characterise much of development work.

Ways to influence that relationship include, among others, seeking and using the evidence that we have (e.g. stories of change, counter-examples from one’s own past practice or from others’) and advocating with it. Process documentation is crucial here to demonstrate the evidence around the value of process work and the general conditions under which development interventions have been designed and implemented. It is our duty to negotiate smart monitoring and evaluation in the intervention, including e.g. process documentation, the use of a theory of change and the non-instrumentalisation of tools (in the way that logical frameworks have been instrumentalised in the past). In this sense, tools do not matter much as such; the practice behind the tools matters a lot more.

Finally, there is much value in changing relationships with the donor to make the plan more effective: trust is central to effective relationships. And we can build trust with donors by reaching out to them: if they need some degree of predictability, although we cannot necessarily offer it, we can try, and talk about our intent to reduce uncertainty. Most importantly, in the process we are exposing them to uncertainty and forcing them to deal with it, which helps them feel more comfortable with uncertainty and paradox and find ways to deal with it. Convincing donors and managers of this may seem like a major challenge at first, but then again, every CEO or manager knows that their management practice does not come from a strict application of ‘the golden book of management’. We all know that reality is more complex than we would like it to be. It is safe and sound management practice to recognise that complexity.

Perhaps also, the best way to manage our relationship with our donors in a not-so-linear-not-so-predictable way is to lead by example: by being a shining living example of our experience and comfort with a certain level of uncertainty, and showing that recognising the complexity and the impossibility to predict a certain course of events is a sound and realistic management approach to development. Getting that window of opportunity to influence based on our own example depends much on the trust developed with our donors.

Trust is not only a result of time spent working and discussing together but also the result of surfacing the deeper values and principles that bind and unite us (or not). The conception of development as being results-based or relationship-based influences this, and so does the ‘funding time span’ in which we implement our initiatives.

Time and space, moderating and maintaining the process

The default development cooperation and funding mechanism is the project, with its typically limited lifetime and unrealistic level of endowment (in terms of resources, capacities etc. available). In the past, a better approach aimed at funding institutions, thereby allowing those organisations to afford the luxury of learning, critical thinking and other original activities. An even more ideal funding mechanism would be to favour endemic (e.g. civic-driven) social movements where local capacities to self-organise are encouraged and supported over a period that may extend beyond a project lifetime. If this were the default approach, trust would become a common currency and we would indeed have to engage in longer-term partnerships, a better guarantee of stronger development results.

A final way to develop tolerance of multiple knowledges and uncertainty is to bring together various actors and to use facilitation in these workshops so as to allow all participants to reveal their personal (knowledge culture) perspective, cohabiting with each other. Facilitation becomes de facto a powerful approach to plant new ideas, verging on the idea of ‘facipulation’ (facilitation-manipulation).

Beyond a given development intervention, a way to make its legacy live on is to plug those ideas onto networks that will keep exploring the learning capital of that intervention.

What is the value proposition of all this to donors? Cynically, perhaps the innovativeness of working in those ways; much more importantly, the promise of sustainable results, better guaranteed through embedded, local work. The use of metaphors can be enlightening here, in the sense that it gives different ideas of what you can invest in projects and short-term relationships: e.g. gardening – planting new initiatives in an existing bed, or putting fertiliser on existing plants…

Interesting links related to the discussion were shared in an accompanying slideshow.

On participation and engagement:

Sustainable, effective development interventions are informed by careful and consistent participation and engagement, recognising the value of multiple knowledges and cherishing respect for different perspectives, as part of a general scientific curiosity and humility as to what we know about what works and what doesn’t, in development and generally.

The second strand we explored on day 1 was participation and engagement with multiple knowledges. This boils down to the question: how to value different knowledges, and particularly ‘local knowledge’, bearing in mind that local knowledge is not a synonym for Southern knowledge because we all possess some local knowledge, regardless of where we live.

A sound approach to valuing participation and engagement is to recognise the importance of creating the bigger picture in our complex social initiatives. The concept of cognitive dissonance is particularly helpful here: as communities of people we (should) value some of our practices and document them, so that we create and recognise a bigger collective whole. But then we have to realise that something might be missing from that collective narrative, that we might have to play the devil’s advocate to challenge our thinking – this is cognitive dissonance at play – and it is more likely to happen by bringing in external or alternative points of view, but also e.g. by using facilitation methods that put the onus on participants to adopt a different perspective (e.g. De Bono’s six thinking hats). Development work has to include cognitive dissonance to create better conditions for combining different knowledges.

Participation and engagement are also conditioned by power play, of course, but also by our comfort zones. As raised in a recent KM4Dev discussion, we are usually not keen on hiring people with different perspectives who might challenge the current situation. Nor do we like the frictions that come with bringing different people to the table: we don’t like to rediscuss the obvious or to renegotiate meaning, but that is exactly what is necessary for multiple knowledges to create a trustworthy space. The tension between deepening the field and expanding it laterally with new people is an important one, in workshops as in development initiatives.

We may also have to adopt different approaches and responses in the face of multi-faceted resistance to change: some people need to become aware of the gaps; others are aware but not willing, because they don’t see the value or feel threatened by inviting multiple perspectives; others still are aware and don’t feel threatened, but need to be challenged beyond their comfort zone. Some will need ideas, others principles, yet others actions.

At any rate, inviting participation calls for inviting related accountability mechanisms. Accountability (which will come back on the menu on day 2) is not just towards donors but also towards the people whose participation we invite; otherwise we run the risk of ‘tokenising’ participation (pretending to be participatory without changing the decision-making process). When one interviews a person, one has to make sure that the transcript faithfully reflects what the interviewee said. So with participation: participants have to be made aware that their inputs are valued and reflected in the wider engagement process, not just treated as a tick in the participatory box.

Participation and engagement open up the reflective and conversational space to collective engagement, which is a very complex process, as highlighted in Charles Dhewa’s model of collective sense-making in his work on traducture. A prerequisite for that collective engagement and sense-making is the self-confidence you develop in your own knowledge. For ‘local knowledge’ this is a very difficult requirement, not least because, even in their own context, proponents of local knowledge might be discriminated against and rejected by others for a perceived lack of rigour.

So how to invite participation and engagement?

Values and principles are guiding pointers. Respect (for oneself and others) and humility or curiosity are great lights on the complex path to collective sense-making (as illustrated by Charles Dhewa’s graph below). They guide our initiatives by preserving a learning attitude in each and every one of us. Perhaps development should grow up to be more about ‘ignorance management’: an insatiable thirst for new knowledge. Humility about our own ignorance, combined with curiosity, might lead us to unravel ever sharper questions, on the dialectical and critical thinking path, rather than the off-the-shelf (and upscaling-friendly) answers we tend to favour in the development sector. What matters here is the development of shared meaning.

A collective sensemaking framework (by Charles Dhewa)

As highlighted in the previous conversation, not every step of a development initiative requires multi-stakeholder participation, but a useful principle for inviting participation and engagement is iteration. By revisiting our assumptions at regular intervals, together with various actors, we can perhaps more easily ensure that key elements of the bigger picture are not thrown away in the process. This comes back to the idea of assessing the level of complexity we are facing, which is certainly affected by a) the number of people affected by (or with a crucial stake in) the initiative at hand and b) the degree of inter-relatedness of the changes that affect and connect them.

Iteration and multi-stakeholder engagement and participation are at the heart of the ‘inception phase’ approach. This, however, is only one model on a spectrum of participation and non-linear planning:

  • On one end of the spectrum, a fully planned process with no room for (meaningful) engagement because the pathway traced is not up for renegotiation;
  • Somewhere in the middle, a project approach using an inception period to renegotiate the objectives, reassess the context and understand the motivations of the stakeholders;
  • At the other end of the spectrum, a totally emergent approach where one keeps organising new processes as they show up along the way, renegotiating with a variety of actors.

Seed money helps here for ‘safe-fail’ approaches: trying things out, drawing early lessons, and perhaps then properly budgeting for activities that expand the seed initiative. Examples from the corporate sector also offer some interesting pointers and approaches (see Mintzberg’s books and the strategy safari under ‘related resources’). The blog post by Robert Chambers on ‘whose paradigm counts’, and his stark comparison between positivist and adaptive pluralism perspectives, are also very helpful resources to map out the issues we are facing here.

Adaptive pluralism – a useful map to navigate complexity? (Credits: Robert Chambers)

At any rate, and this can never be emphasised enough, in complex environments – as is the case in development work more often than not – a solid context analysis is in order if one is to hope for any valuable result, in the short or long run.

Related resources:

These have been our musings on day 1: perhaps not ground-breaking observations, but pieces of an IKM-E collage that brings together important pointers to the legacy of IKM-Emergent. Day 2 is promising…

Related blog posts: