Communication, strategy and revolution


Communication strategy, connecting the dots and conversations

While in Ethiopia recently, I facilitated a workshop for an NGO forum on strategic communication in the WASH sector and, in particular, on how to develop a communication strategy – and whether that makes sense in the first place.

The workshop went very well and included a couple of very interesting sessions: a talk show about our experiences with developing communication strategies (funny how riveted people are by what is said in a talk show, given how close it is to a panel discussion format – only a lot more informal), a fishbowl session on the pros and cons of various communication channels, an open space session on any pending (parking lot) or new points, and of course a number of presentations about the basics of strategic communication, to clarify participants’ initial doubts and give them enough to chew on. (A few years back I facilitated a workshop that I had designed far too much as a participatory exercise, to the point that I didn’t provide enough material for participants to share experiences around – it was a specific context, but I promised myself never to end up in that situation again.) It is a fine balance between giving enough information and leaving enough space for participants to discuss and digest it, from their own perspective and experience.

But two of the more interesting aspects of that workshop were, on the one hand, a checklist of questions that we used throughout the workshop to develop five draft communication strategies (based on the cases of five organisations represented by participants) and, on the other, a special session on strategic communication 2.0.

The checklist of questions was originally developed with a number of IRC colleagues in 2008 and 2009, and I updated and enriched it for this workshop (ah, the beauty of external assignments and deadlines to make things happen!). It turned out to be a rather useful checklist, judging from the results that participants came up with and their comments. I’m definitely planning to use it more and to keep refining it.

Here is the presentation:

And please share your suggestions on it!

The other bit was the presentation about web 2.0 and how it could have some interesting applications for strategic communication work. This was meant to be a 20-minute presentation at most, but I got completely carried away and went on for 45 minutes (in an interactive manner, of course, otherwise I would have been performing to a room full of snoring folks).

It also turned out to be a much more political exercise than I had anticipated, and it made me realise how much web 2.0 and the opportunities it offers – creatively combined with everyone’s special attributes and crazy ideas – are quietly working on a silent revolution agenda. Gil Scott-Heron used to say ‘The revolution will not be televised, the revolution… will be live’. No man, the revolution will (also) be online…

The presentation is here and includes a number of excellent references I found through Twitter recently.

I need to dig further into this type of messaging because somehow it responds to my profound desire to work towards more empowerment, and this seems to be the most promising approach in that direction so far; at the same time, I wouldn’t want to end up in a political struggle.

Starting with PowerPoint, to end with an Empower curve? The question remains open for now…

What the *tweet* do we know (about monitoring/assessing KM)?


This Tuesday I moderated my first ever Twitter chat, thanks to the opportunity provided by KMers (as mentioned in a recent blog post). A very rich and at times overwhelming experience in terms of moderating – more on this in the process notes at the bottom of this post.

KMers provides a great opportunity to host Twitter chats! Tweet on! (photo credits: ~ilse)

The broad topic was about ‘monitoring / assessing KM’ and I had prepared four questions to prompt Tweeters to engage with the topic:

  1. What do you see as the biggest challenge in monitoring KM at the moment?
  2. Who to involve and who to convince when monitoring KM?
  3. What have been useful tools and approaches to monitor KM initiatives?
  4. Where is M&E of KM headed? What are the most promising trends (hot issues) on the horizon?

After a couple of minutes at the start, waiting for all participants to join, we began by listing a number of key challenges in monitoring/assessing KM:

  • Understanding what we are trying to assess and how we qualify success – and jointly agreeing on this from originally different perspectives and interests;
  • The disconnect between monitoring and the overall strategy and perhaps its corollary of (wrongly) obsessing on KM rather than on the contribution of KM to overall objectives;
  • The crucial problem of contribution / attribution of KM: how can we show that KM has played a role when we are dealing with behaviour changes and improved personal/organisational/inter-institutional effectiveness?;
  • The dichotomy between what was described as ‘positive’ monitoring (learning how we are doing) and ‘negative’ monitoring (about censoring and controlling people’s activities);
  • The occasional hobby horses of management and donors to benchmark KM, social media, M&E of KM etc.;
  • The problem of focusing on either quantitative data (as a short-sighted way of assessing KM – “Most quantitative measures are arbitrary and abstract. …adoption rate doesn’t really equate to value generation” – Jeff Hester) or rather qualitative data (leaving a vague feeling and a risk of subjective biases);
  • The challenge of demonstrating the added value of KM;
  • The much-needed leadership buy-in, which can make or break assessment activities.

The challenges were also seen as opportunities to ‘reverse engineer successful projects and see where KM played a role and start a model’.

An interesting perspective from Mark Neff – that I share – was about monitoring from the community perspective, not from that of the business/organisation.

This last issue hinted at the second part of the chat, which was dedicated to what turned out to be a crux of the discussion: who do you need to involve and who to convince (about the value of KM) when monitoring KM.

Who to involve? Customers / beneficiaries, communities (for their capacity to help connect), even non-aligned communities, users / providers and sponsors of KM, employees (and their capacity to vote with their feet). Working in teams was suggested (by Boris Pluskowski) as a useful way to get knowledge flowing, which eventually helps the business.

Who to convince? Sponsors/donors (holding the purse strings) and leaders (who, unlike managers, are not convinced by measurement but rather by outputs and systems thinking).

What is the purpose of your monitoring activities? Management? Business? Productivity? Reuse? Learning? Application? Membership? Mark Neff rated them as all interesting (another challenge there: choose!). Rob Swanwick made the interesting point of measuring within each unit and having KM (and social media at that) mainstreamed in each unit, rather than dedicated to a small group.

Raj Datta shared his interesting perspective that it is key to explore and expand from the work of communities that are not aligned with business objectives.

The third part continued with some tools and approaches used to assess KM.

The key question came back: what are we looking at? Increasing profits, sales and the engagement of customers? Participation in CoPs? Answers provided within 48 hours? Adoption rates (with the related issue of de-adoption of something else, which Rob Swanwick pointed out)? Project profile contributions? Percentage of re-use in new projects? Stan Garfield suggested setting three goals and measuring progress on each (as described in his masterclass paper about identifying objectives). Mark Neff also stressed that it all depends on the maturity of your KM journey: better to build a case when you begin with KM, and to look at implementing something or at the adoption rate when you’re a bit more advanced… At his stage, he sees “efforts to measure the value we provide to clients and hope to extend that to measures of value they provide”.

And storytelling wins again! The most universal and memorable way to share knowledge? (photo credits: Kodomut)

In spite of these blue-sky considerations, the KMers’ group nonetheless offered various perspectives on and experiences with tools and approaches: social network analysis (to measure community interaction), Collison and Parcell’s knowledge-sharing self-assessment, outcome mapping (to assess behaviour change), comparative analysis (of call centre agents using the KM system or not), and a mix of IT tools and face-to-face meetings to create conversations.

But what really stole the show were success stories. Jeff Hester mentioned that “they put the abstract into concrete terms that everyone can relate to”. Stories could also take the form of testimonials and thank-you messages extracted from threaded discussions. At any rate, they complement other measurements, they sell, and they are memorable.

Rob Swanwick pondered: “Should stories be enough to convince leaders?” Roxana Samii suggested that “leaders will be convinced if they hear the story from their peers or if they really believe in the value of KM – no lip service”, and Boris Pluskowski finished this thread with a dose of scepticism, doubting that leaders would find stories enough to be convinced. In that respect, Mark Neff recommended assessing activities on our own and leading by example, even without the approval of managers or leaders, because they might not be convinced by stories or even numbers.

Of course the discussion bounced off to other dimensions… starting with the gaming issue. A new term to me, but indeed: how do you reduce the biases induced by the expectations of the people who are either monitoring or being monitored? And should we hide the measurements to avoid gaming (“security by obscurity”, as mentioned by Lee Romero), or should we instead explain them and reveal some of the variables to gain buy-in and confidence, as suggested by Raj Datta – or embrace the transparency that Mark Neff sees as important for authentic behaviour?

Finally, the question about where M&E of KM is headed (the fourth part) never really took off, in spite of a few propositions:

A Twitter chat can also mean a lot of tweets running in parallel (photo credits: petesimon)

  • Focusing more on activities and flows in place of explicit knowledge stock (Raj Datta)
  • Mobile buzzing for permanent monitoring (Peter Bury)
  • Some sort of measurement for all projects to determine success (Boris Pluskowski)
  • Providing more ways for users to provide direct feedback (e.g., through recommendations, interactions, tagging, etc.) (Stan Garfield)

After these initial efforts, the group instead happily continued discussing the gaming issue, coming to the conclusion that a) most KMers present seemed to favour a transparent system over a hidden one that aims at preventing gaming, and b) gaming can also encourage (positive) behaviours that reveal the flaws of the system and can be useful in that respect (e.g. Mark’s example: “people were rushing through calls to get their numbers up. People weren’t happy. Changed to number of satisfied customers.”).

With the arrival of V Mary Abraham the thorny question of KM metrics was revived: how do you prove the positive value of KM? Raj Datta nailed an earlier point by mentioning that anyway “some quantitative (right measures at right time in KM rollout) and qualitative, some subjective is good mix”. On the question raised by V Mary Abraham he also offered his perspective of simplicity: “take traditional known measures – and show how they improve through correlation with KM activity measures”. This seemed to echo an earlier comment by Rob Swanwick: “Guy at Bellevue Univ has been doing work to try to isolate ROI benefits from learning. Could be applied to general KM”.

In the meantime Mark Neff mentioned that, to him, customer delight was an essential measure, and other tweeters suggested that this could be assessed by looking at shared enthusiasm and at returning and multiplying customers (through word of mouth with friends).

Boris Pluskowski also pushed the debate towards innovation, as an easier way than KM to show the value of intangibles. V Mary Abraham approved, saying “Collab Innov draws on KM principles, but ends up with more solid value delivery to the org”. To which Raj Datta replied: “to me KM is about collaboration and innovation – through highly social means, supported by technology”. And the initial tweeter on this thread went on about the advantages of innovation being, at its heart, a problem-solving exercise, with a before and an after / result – and it is possible to measure results. V Mary Abraham: “So #KM should focus on problem-solving. Have a baseline (for before) and measure results after”, because solving problems buys trust. But next to short-term problem-solving, Mark Neff also pointed at the other side of the coin, long-term capacity building: “Focus people on real solutioning and it will help focus their efforts. Expose them to different techniques so they can build longterm”.

And in parallel, with the eternal problem of proving the value of KM, Raj Datta (correctly) stated: “exact attribution is like alchemy anyway – consumers of data need to be mature”.

It was already well past the chat’s closing time, and after a handful of final tweets this first KMers’ page on monitoring/assessing KM was turned.

At any rate it was a useful and fresh experience to moderate this chat and I hope to have another go at it, probably in April and probably on a sub-set of issues related to this vast topic. So watch the KMers’ space: http://www.kmers.org/chatevents!

Process Notes:

As mentioned earlier in this post, moderating the Twitter chat was a rather uncanny experience. With the machine-gun speed of our group of 25 or so Tweeters, facilitating, synthesising / reformulating and responding to others as a participant all at once was hectic – and I’m a fast touch typist!

This is how I felt sometimes during the moderation: banging too many instruments at the same time (photo credits: rod_bv)

But beyond the mundane, I think what struck me was this: the KMers’ group is a very diverse gang of folks from various walks of life, from the US or the rest of the world, from the business perspective or the development cooperation side. This has major implications for the wording that each of us uses – which cannot be taken for granted (such as this gaming issue, which threw me at first) – but also for the kind of approaches we seem to favour, the people we see as the main stumbling block or, on the other hand, the champions we see as aspirational forces, and the type of challenges we are facing… More about this in a later post.

Finally, there is the back-office side of organising such a Twitter event: preparing / framing the discussion, inviting people to check out your framing post, preparing a list of relevant links to share, sharing the correct chat link when the event starts (and sending related instructions to new Tweeters), generating the full chat transcript (using http://wthashtag.com/KMers, thank you @Swanwick ;)), all the way down to this blog post and the infographic summary that I’m still planning to prepare… It’s a whole lot of work, but exciting work, and since web 2.0 follows a ‘share the love / pay it forward’ mentality, why not give it back to the community out there? This was my first attempt, and I hope many more will follow…

Related blog posts (thank you Christian Kreutz for giving me this idea):

The full transcript for this KMers twitter chat is available here.

Peter and Justin: when and how does information make sense?


Last week, when I was over in Ethiopia, I had a wonderful dinner at a great Belgian restaurant in Addis together with Peter Ballantyne of ILRI and some of his friends: a great bunch of people, some of whom, like Peter, work on knowledge sharing (and knowledge management, although Peter hates that contradictory term just as much as I do).

Peter Ballantyne

Peter shared his current thinking about the Justin idea, as a way of defining when information makes sense to someone and becomes useful as you share it. The Justin idea is the following (as far as I understood it – and I invite you, Peter, to chip in on this):

  • Just in case is the mass of information that is saved and stored just in case someone may need it. It’s the typical case of many first-generation KM initiatives: collecting and storing without paying much attention to the actual needs of various audiences for this information. A rather useless approach, even though for the public good it does make sense to have large repositories of information (like vast public libraries). But is it justified for all organisations to favour this approach? I’m not sure…

Justin... time

  • Just in time is information that is shared between two people in a timely manner. Examples would be a Q&A helpdesk request being handled, or information found by searching the internet or through other means. This is obviously a very useful Justin approach and one that I think should be encouraged more (at a personal level through effective ways of searching for information, and at an institutional level through helpdesk and match-making services). However, the interface required to match demand and supply may limit its applications.
  • Just in space is the point that Peter is working on: finding out whether there is a way to make information useful in a particular context. It is probably a variation on just in time, but with the added benefit of local relevance, which may not be considered when a helpdesk request is handled, simply because there hasn’t been enough time for the requester and the broker to share the contextual needs. Another example of just in space could be information shared by two people in the same place without having the chance to express their contextual needs: e.g. a new worker receiving lots of information from a departing colleague without being able to place this information in a way that makes it useful / actionable, or an emergency situation where agents have to move on quickly and may not be able to explain how this information makes sense.

I would add two other Justin approaches:

  • Just in need: perhaps a combination of time and space, just in need would favour sharing knowledge on the spot and applied to a specific context. Typical examples include coaching advice provided on the spot, or working together and sharing knowledge while working. To me this is probably the most important of all the Justin approaches and, in my eyes, it should be the focus of most KM initiatives: connecting appropriate sources of information with receiving ends in a shared contextual environment. This can happen by encouraging coaching, joint work, local matchmaking knowledge centres, and communities of practice around a fairly common practice, ideally with a certain cultural focus (one may not apply KM in the same way across two countries or even two distinct groups within the same community).
  • Just in transition: finally, we receive a lot of information by various means and we cannot always make sense of it on the spot. But sometimes, later, we make other associations which turn that stored information (held in a transitional knowledge state in our heads) into useful knowledge that we can then apply in different ways. Even a piece of advice that helped in one way on one occasion can be reinterpreted in different useful ways later. This is the typical case of books that we read and re-read with a different lens, picking up messages that would not or could not resonate with us before. This transitional information collection happens anyway and it is useful in encouraging serendipity. If we only ever received the information that we really need, we might not be able to see the bigger picture and to get out of our active ‘scoping’ mode (looking for specific information). This is also probably why a community of practice with a diverse group of members is so relevant: it helps you address the issue at stake but also make associations with other bits of information that can help.

While we are probably moving on from just in case to just in time and space with our KM initiatives, let us focus more on ‘just in need’ and encourage or remain open to ‘just in transition’ to keep innovating and making sense in the longer run…

(Im)Proving the value of knowledge work: A KMers chat on monitoring / assessing knowledge management


KMers chat on 16/02/2010 on monitoring/assessing KM

On 16 February 2010, I will be hosting a KMers chat about the topic of ‘monitoring / assessing knowledge management’ (1).

When Johan Lammers (one of the founders of KMers and of WeKnowMore.org) invited KMers (the people, not the platform) to host a discussion, I jumped at the occasion. It’s new, it’s fresh, it’s fun, it’s useful: what else can you dream of? And talking about useful discussions, it fitted my work on this topic of monitoring knowledge management very well.

So here you go, if you are interested, this is the pitch for this KMers chat:

Knowledge management is ill-defined but even more crucially ill-assessed. The inaccuracy and inadequacy of monitoring (2) approaches for KM has left behind a trail of tensions, heated debates, frustrations and disillusions. Differing perspectives on the value of KM and on ways to conduct monitoring have further entrenched these reactions.

How do we reconcile the expectations of managers / donors on the one hand with those of the teams in charge of monitoring knowledge management and of clients / beneficiaries on the other? How do we combine passion for and belief in knowledge-focused work with business realism and sound management practice?

What approaches, methods, tools and metrics seem to provide a useful perspective on monitoring the intangible assets that KM claims to cherish (and/or manage)? What are the promising trends and upcoming hot issues that could turn monitoring of KM into a powerful practice to prove the value of knowledge management and to improve KM initiatives?

Join this Twitter chat to hear the buzz and share your perspective…

In this particular KMers chat we will grapple with four key questions, i.e.:

  1. What do you see as the biggest challenge in monitoring KM at the moment?
  2. Who to involve and who to convince when monitoring KM?
  3. What have been useful tools and approaches to monitor KM initiatives?
  4. Where is M&E of KM headed? What are the most promising trends (hot issues) on the horizon?

This discussion ties in closely with a couple of posts on this topic on this blog (see for instance this and that post) and three on IKM-Emergent programme’s The Giraffe blog (see 1, 2 and 3). Simon Hearn, Valerie Brown, Harry Jones and I are on the case.

Back to this KMers’ chat, here is an outline of some of the issues at stake – as I see them:

Fig. 1 The starting model we are using for monitoring KM (credits: S. Hearn)

  • KM is not well defined and the very idea of ‘monitoring’ knowledge (related to the M in KM) is fallacious – this is partly covered in this post. What does this mean in terms of priorities defined behind a KM approach? What is the epistemology (knowledge system) guiding KM work in a given context?
  • KM is often monitored or assessed from the perspective of using intangible assets to create value. Is this the real deal? Perhaps monitoring may look at various dimensions: knowledge processes and initiatives (inputs & activities), intangible assets (outputs), behaviour changes and ultimately valuable results (outcomes and impact). See fig. 1 for a representation of this model.
  • In this, where should we monitor/assess knowledge, knowledge management, knowledge sharing and possibly all knowledge-focused processes – from the knowledge value chain or another reference system?
  • Monitoring is itself a contested practice that is sometimes reduced to simple ‘progress monitoring’, i.e. establishing the difference between the original plan and the reality, to prove whether the plan has been accomplished or not. Where is the learning in this? What is more valuable: to prove or to improve? And could we not consider that monitoring of KM should arguably look at other valuable monitoring purposes (such as capacity strengthening, self-auditing for transparency, sensitisation, advocacy etc.) (3)?
  • With respect to the different epistemologies and ontologies (world views), isn’t it sensible to explore the different knowledge communities (see slide 8 of Valerie Brown’s presentation on collective social learning) and the expectations of the parties involved in monitoring/assessing KM? After all, the monitoring commissioner, implementer and ultimate beneficiary (client) may have totally different viewpoints on the why, what and how of monitoring KM.
  • If we take it that monitoring moves beyond simple progress monitoring and does not simply rest upon SMART indicators and a shopping basket for meaningless numbers, what are useful approaches – both quantitative and qualitative – that can help us understand the four dimensions of KM monitoring mentioned above and do this with due consideration for the context of our knowledge activities?
  • And finally, what can we expect to be the future pointers of this discussion? I am thinking here both of broadening the conceptual debate, of looking at promising new approaches (such as the semantic web and its possibilities for mapping contextualised information, Dave Snowden’s SenseMaker, or Rick Davies’s most recent work building on his old Most Significant Change method), and of developing a more practical approach to make sense of knowledge and to support the work of KMers (us), our patrons, our partners and our beneficiaries / clients.
  • Do you have case studies or stories about the issues sketched above?

Hopefully, further down the line, we may have a clearer idea as to turning what is too often a costly and tiresome exercise into an exciting opportunity to prove the value of knowledge-focused work and to improve our practices around it…

If you are interested in this topic or want to find out more about KMers’ chats, please check in on 16 February and join the chat; oh, and spread the word!

Notes:

(1)    KMers is an initiative that started in late 2009 and has already generated a few excellent discussions (the last one was about knowledge for innovation), usually hosted on Tuesdays around 18.00 CET (Central European Time). The chats are Twitter-based and always involve a group of dedicated KM heads who are really passionate and savvy about the broad topic of knowledge management.

(2)    By monitoring we mean here the ‘follow-up of the implementation of programme activities AND periodic assessment of the relevance, performance, efficiency and impact of a piece of work with respect to its stated objectives’, as regularly carried out in the development sector. In this respect we include the purposes of evaluation in monitoring as well. In the corporate world I guess you would translate this as regular assessment. Monitoring / assessment may happen by means of measurement and other methods.

(3)    A forthcoming IKM-E paper by Joitske Hulsebosch, Sibrenne Wagenaar and Mark Turpin refers to the nine different purposes for monitoring that Irene Guijt proposed in her PhD ‘Seeking Surprise’ (2008). These purposes are: financial accountability, operational improvement, strategic readjustment, capacity strengthening, contextual understanding, deepening understanding, self-auditing, advocacy and sensitisation.

Related blogposts:

Settling the eternal semantic debate: what is knowledge, what is information…


(As of February 2012, a new post on this blog updates and extends this one: What the heck is knowledge anyways: from commodity to capacity and insights).

While I was away in Ethiopia, characteristically without internet access, a debate was raging on the KM4DEV list, fiercer and hotter than ever. Another phoenix of KM is rising again, and this is perhaps the phoenix of all phoenixes in the KM world: what is knowledge, what is information?

To understand this debate, you should know – if you don’t already – that a common way of explaining this difference has been to use the DIKW pyramid. And here DIKW does not mean ‘Do It, Knowledge Worker’ as my mate Jaap amusingly suggested, but rather: Data – Information – Knowledge – Wisdom.

This pyramid is here:

The DIKW pyramid: The starting point of 1000 fallacious KM approaches?

The debate on KM4DEV has been rather heated because Dave Snowden added his grain of itchy salt by provocatively mentioning that he “would reject the DIKW pyramid, aside from the fact it’s just plain wrong, it’s difficult to explain and leads to bad labels” and that “Anyone talking about wisdom as a higher level of knowledge should be taken out and shot for the good of the field”. Now of course the point is not to shoot anyone down and this is obviously not the man’s point either, but rather to consider carefully whether indeed this DIKW is a pyramid worth fighting for, or whether we should not get busier with constructions of another genre…

I personally find the DIKW also quite limited and rather dangerous.

It is limited because, to me, data and information on the one hand, and knowledge and wisdom on the other, are of a very different nature:

  • Data and information are tangible; they are explicit. They are what some call ‘explicit knowledge’ (to me yet another flawed fad). Basically they are plain bits of text or signals (data) or organised/formatted/packaged bits of text/signals (information) that are concretely available to our senses: in print or images (sight and touch) and in sounds and music (hearing). I’ll actually have to think further about taste and smell on this one.
  • Knowledge is intangible by definition (to me anyway), in the sense that it represents the way we combine data and/or information with a variety of inner characteristics (experience, skills, attitude, emotions, interest, intention and the need to use data and information) to make sense of that data/information and apply it to a given situation. In that respect I believe knowledge as a noun is not a terribly useful concept. The act of knowing, on the other hand, is much more relevant, as it refers to our capacity to invoke all these inner characteristics to make informed decisions. In other words, knowledge/knowing is about lining up what we have in our mind that may be useful for a particular situation (1).
  • Wisdom, well I guess I’m not wise enough to touch this one in depth. The only relation I could imagine is that wisdom – induced by experience, repeated exposure to various incarnations of similar ideas and actions in various contexts – may be what helps us make a better informed decision between two seemingly similar choices. It could relate to the triple loop learning that I blogged about in the past.

But more importantly, this DIKW model is rather dangerous in the sense that, by positing a linear progression from data all the way up to wisdom, it assumes a natural hierarchy among these four variables. It seems to suggest that one is better than another, when we are really talking about different things. If explained to people first exposed to knowledge management (gosh, that term again, that’s where the problem starts: one cannot manage knowledge!!!) it may give them the feeling that they sit at a certain level.

In reality we are all at different levels of understanding – partly because we may be interested in or need some information and not other information. In some areas we may have accumulated a lot of experience; in others we may be on completely new territory. So we should not feel that we are at a certain level.

And anyway, in any given context we use data, information, our act of knowing and the wisdom we have accumulated (whatever that may be, if it is not accumulated and analysed experience) in various forms and shapes.

So, if you lack better models to make sense of it all, feel free to use DIKW, but do so with caution. As for me, I’ve never used it as a learning / coaching principle and I’m not planning to either…

(26/02/2010) By the way, I summarised the whole KM4DEV discussion on the KM4DEV wiki: http://wiki.km4dev.org/wiki/index.php/DIKW_model

Notes:

(1) In workshops, to explain this ‘knowledge as information in use’, I often refer to the equation K = I * ESA (as pointed out to me by Jaap). Thinking about it, this equation also deserves a good brushing up, but it certainly sounds more sensible than the DIKW pyramid.
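Spelled out, a minimal sketch of that equation – assuming the common reading in which ESA stands for the experience, skills and attitude listed among the ‘inner characteristics’ above (that expansion is my reading, not a definitive formulation) – would look something like this:

```latex
% A sketch of the K = I * ESA reading. Expanding ESA as Experience, Skills,
% Attitude is an assumption consistent with the 'inner characteristics'
% listed earlier in this post, not a definitive formulation.
K = I \times (E \cdot S \cdot A)
\quad \text{where } K = \text{knowledge},\ I = \text{information},\
E = \text{experience},\ S = \text{skills},\ A = \text{attitude}.
```

Read multiplicatively, the sketch simply suggests that if any factor is missing – no information, or no experience, skills or attitude to put it to use – little usable knowledge results; which is also why, as said above, the equation itself deserves a brushing up.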

Related blog posts: