This Tuesday I moderated my first-ever Twitter chat, thanks to the opportunity provided by KMers (as mentioned in a recent blog post). It was a very rich and at times overwhelming experience in terms of moderation – more on this in the process notes at the bottom of this post.
The broad topic was ‘monitoring / assessing KM’, and I had prepared four questions to prompt Tweeters to engage with it:
- What do you see as the biggest challenge in monitoring KM at the moment?
- Who to involve and who to convince when monitoring KM?
- What have been useful tools and approaches to monitor KM initiatives?
- Where is M&E of KM headed? What are the most promising trends (hot issues) on the horizon?
After a couple of minutes at the start waiting for participants to arrive, we began listing a number of key challenges in monitoring / assessing KM:
- Understanding what we are trying to assess and how we qualify success – and jointly agreeing on this from originally different perspectives and interests;
- The disconnect between monitoring and the overall strategy, and perhaps its corollary of (wrongly) obsessing over KM itself rather than over the contribution of KM to overall objectives;
- The crucial problem of contribution / attribution of KM: how can we show that KM has played a role when we are dealing with behaviour changes and improved personal/organisational/inter-institutional effectiveness?;
- The dichotomy between what was described as ‘positive’ monitoring (learning how we are doing) and ‘negative’ monitoring (censoring and controlling people’s activities);
- The occasional hobby horses of management and donors: benchmarking KM, social media, M&E of KM, etc.;
- The problem of focusing on either quantitative data (as a short-sighted way of assessing KM – “Most quantitative measures are arbitrary and abstract. …adoption rate doesn’t really equate to value generation” – Jeff Hester) or rather qualitative data (leaving a vague feeling and a risk of subjective biases);
- The challenge of demonstrating added value of KM.
- The much-needed leadership buy-in, which can make or break assessment activities.
The challenges were also felt as opportunities to ‘reverse engineer successful projects and see where KM played a role and start a model’.
An interesting perspective from Mark Neff – that I share – was about monitoring from the community perspective, not from that of the business/organisation.
This last issue hinted at the second part of the chat, which was dedicated to what turned out to be a crux of the discussion: who do you need to involve and who to convince (about the value of KM) when monitoring KM.
Who to involve? Customers / beneficiaries, communities (for their capacity to help connect), even non-aligned communities, users / providers and sponsors of KM, employees (and their capacity to vote with their feet). Working in teams was suggested (by Boris Pluskowski) as a useful way to get knowledge flowing, which ultimately helps the business.
Who to convince? Sponsors/donors (holding the purse strings) and leaders (who, unlike managers, are not convinced by measurement but respond to outputs and systems thinking).
What is the purpose of your monitoring activities? Management? Business? Productivity? Reuse? Learning? Application? Membership? Mark Neff rated them all as interesting (another challenge there: choose!). Rob Swanwick made the interesting point of measuring within each unit and having KM (and social media with it) mainstreamed in each unit, rather than confined to a small group.
Raj Datta shared his interesting perspective that it is key to explore and expand from the work of communities that are not aligned with business objectives.
The third part turned to some of the tools and approaches used to assess KM.
The key question came back: What are we looking at? Increasing profits, sales and the engagement of customers? Participation in CoPs? Answers provided in 48 hours? Adoption rates (with the related issue of de-adoption of something else, which Rob Swanwick pointed out)? Project profile contributions? Percentage of re-use in new projects? Stan Garfield suggested setting three goals and measuring progress for each (as described in his masterclass paper about identifying objectives). Mark Neff also stressed that it all depends on the maturity of your KM journey: better to build a case when you are beginning with KM, and to look at implementing something, or at the adoption rate, when you are a bit more advanced. At his own stage, Mark sees “efforts to measure the value we provide to clients and hope to extend that to measures of value they provide”.
Despite these blue-sky considerations, the KMers’ group nonetheless offered various perspectives on, and experiences with, tools and approaches: social network analysis (to measure community interaction), Collison and Parcell’s knowledge-sharing self-assessment, outcome mapping (to assess behaviour change), comparative analysis (of call-centre agents using the KM system or not), and a mix of IT tools and face-to-face meetings to create conversations.
But what really stole the show were success stories. Jeff Hester noted that “they put the abstract into concrete terms that everyone can relate to”. Stories could also take the form of testimonials and thank-you messages extracted from threaded discussions. At any rate, they complement other measurements; they sell, and they are memorable.
Rob Swanwick pondered: “Should stories be enough to convince leaders?” Roxana Samii suggested that “leaders will be convinced if they hear the story from their peers or if they really believe in the value of KM – no lip service”, and Boris Pluskowski finished this thread with a dose of scepticism, doubting that leaders would find stories convincing enough. In that respect, Mark Neff recommended assessing activities on our own and leading by example, even without the approval of managers or leaders, since they might not be convinced by stories or even numbers.
Of course the discussion bounced off to other dimensions, starting with the gaming issue. The term was new to me, but indeed: how do we reduce the biases induced by the expectations of the people who are either monitoring or being monitored? Should we hide the measurements to avoid gaming (“security by obscurity”, as Lee Romero put it)? Or should we instead explain them, revealing some of the variables to build buy-in and confidence, as suggested by Raj Datta, with the transparency that Mark Neff sees as important for authentic behaviours?
Finally, the fourth part – on where M&E of KM is headed – didn’t really take off, despite some propositions:
- Focusing more on activities and flows in place of explicit knowledge stock (Raj Datta)
- Mobile buzzing for permanent monitoring (Peter Bury)
- Some sort of measurement for all projects to determine success (Boris Pluskowski)
- Providing more ways for users to provide direct feedback (e.g., through recommendations, interactions, tagging, etc.) (Stan Garfield)
After these initial efforts, the group instead happily kept discussing the gaming issue, coming to the conclusion that a) most KMers present seemed to favour a transparent system over a hidden one aimed at preventing gaming, and b) gaming can also encourage (positive) behaviours that reveal the flaws of the system and can be useful in that respect (e.g. Mark’s example: “people were rushing through calls to get their numbers up. People weren’t happy. Changed to number of satisfied customers.”).
With the arrival of V Mary Abraham, the thorny question of KM metrics was revived: how do we prove the positive value of KM? Raj Datta drove home an earlier point by noting that, at any rate, “some quantitative (right measures at right time in KM rollout) and qualitative, some subjective is good mix”. On the question raised by V Mary Abraham, he also offered his perspective of simplicity: “take traditional known measures – and show how they improve through correlation with KM activity measures”. This seemed to echo an earlier comment by Rob Swanwick: “Guy at Bellevue Univ has been doing work to try to isolate ROI benefits from learning. Could be applied to general KM”.
In the meantime, Mark Neff mentioned that to him customer delight was an essential measure, and other tweeters suggested that this could be assessed by looking at shared enthusiasm and at returning and multiplying customers (through word of mouth with friends).
Boris Pluskowski also pushed the debate towards innovation, as an easier way than KM to show the value of intangibles. V Mary Abraham agreed: “Collab Innov draws on KM principles, but ends up with more solid value delivery to the org”. To which Raj Datta replied: “to me KM is about collaboration and innovation – through highly social means, supported by technology”. Boris, who started this thread, went on about the advantage of innovation being, at heart, a problem-solving exercise, with a before and an after / result: and results can be measured. V Mary Abraham: “So #KM should focus on problem-solving. Have a baseline (for before) and measure results after”, because solving problems buys trust. But next to short-term problem solving, Mark Neff also pointed at the other side of the coin, long-term capacity building: “Focus people on real solutioning and it will help focus their efforts. Expose them to different techniques so they can build longterm”.
And in parallel, on the eternal problem of proving the value of KM, Raj Datta (correctly) stated: “exact attribution is like alchemy anyway – consumers of data need to be mature”.
It was already well past the chat’s closing time, and after a handful of final tweets, this first KMers’ page on monitoring / assessing KM was turned.
At any rate, it was a useful and refreshing experience to moderate this chat, and I hope to do it again, probably in April and probably on a subset of issues related to this vast topic. So watch the KMers’ space: http://www.kmers.org/chatevents!
As mentioned earlier in this post, moderating the Twitter chat was a rather uncanny experience. With the machine-gun speed of our group of 25 or so Tweeters, facilitating, synthesising / reformulating, and answering others as a participant, all at once, was hectic (and I’m a fast touch typist!).
But beyond the mundane, I think what struck me was this: the KMers’ group is a very diverse gang of folks from various walks of life, from the US and the rest of the world, from the business perspective and the development cooperation side. This has major implications for the wording each of us uses, which may not be a given (such as the gaming issue that threw me at first), but also for the kind of approaches we seem to favour, the people we see as the main stumbling block or, on the contrary, as the champions and aspirational forces, and the type of challenges we face… More on this in a later post.
Finally, there is the back-office side of organising such a Twitter event. It involves as much preparing and framing the discussion as inviting people to check out your framing post, preparing a list of relevant links to share, sharing the correct chat link when the event starts (and sending related instructions to new Tweeters), and generating the full chat transcript (using http://wthashtag.com/KMers, thank you @Swanwick 😉), all the way down to this blog post and the infographic summary I’m still planning to prepare… It’s a whole lot of work, but exciting work, and since web 2.0 follows a ‘share the love / pay it forward’ mentality, why not give back to the community out there? This was my first attempt, and I hope many more will follow…
Related blog posts (thank you Christian Kreutz for giving me this idea):
- (Im)Proving the value of knowledge work: A KMers chat on monitoring / assessing knowledge management
- M&E of KM: the phoenix of KM is showing its head again – how to tackle it?
- Network monitoring & evaluation: taking stock
- G(r)o(w)ing organically and the future of monitoring
The full transcript of this KMers Twitter chat is available here.