
René said that knowledge has no value as long as nothing is done with it. This sparked a discussion: more than ten consultants had explained how they calculate a value for knowledge before René's talk.

http://www.ki-network.org/downloads/knowledge_value_B7.pdf by Markus Perkmann

http://www.providersedge.com/docs/km_articles/Measuring_Knowledge_Value.pdf

 

KIN brief #7 – 26/07/2002

 

Measuring knowledge value: Evaluating the impact of knowledge projects

 

A recent conference in London addressed the issue of ‘Measuring Knowledge Value’.

 

Rupert Consulting: knowledge has no value as long as nothing is done with it. See how Markus Perkmann concludes below.

 

A series of presentations contributed to the ongoing debate on how the benefits of Knowledge Management can be evaluated and measured. In the current business climate, there is a growing need to spell out the concrete impact of knowledge projects on business performance. The conference proved once again that this is easier said than done.

 

Two perspectives

 

The speakers approached the issue of ‘knowledge value’ from two main perspectives:

  •  The macro view: quantify the intangible assets of an organisation using tools such as the Balanced Scorecard, scoreboards, indexes and ‘navigators’. According to Karl-Erik Sveiby, the concept of intangible assets attempts to capture the value of human capital, competencies, customer relationships, employee collaboration or diversity in an organisation. On the basis of these concepts, tools such as the Skandia Navigator have been created to serve as strategic and monitoring devices (see the sketch after this list).

  •  The micro view: how can the impact of single knowledge projects be assessed and quantified? Examples of such knowledge projects mentioned by the speakers included the roll-out of knowledge bases and idea generation systems, as well as ‘soft’ interventions such as communities of practice.
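
As a rough illustration of the macro view, here is a minimal sketch of a Balanced-Scorecard-style set of indicators. The perspectives, indicator names and values are invented for illustration and are not taken from any of the tools mentioned above.

```python
# A minimal sketch of a Balanced-Scorecard-style indicator set mixing one
# financial measure with non-financial ones. All names and values invented.

scorecard = {
    "financial":         {"revenue growth (%)": 4.0},
    "customer":          {"client retention (%)": 92.0},
    "internal process":  {"project knowledge reused (%)": 35.0},
    "learning & growth": {"collaboration climate (1-5)": 3.4},
}

# Print each perspective alongside its indicator.
for perspective, indicators in scorecard.items():
    for name, value in indicators.items():
        print(f"{perspective:>17}: {name} = {value}")
```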

 

The micro-macro divide

How do these two approaches relate to each other? The main benefit of macro approaches is that they allow an organisation to consider performance indicators that are not purely financial. This rests on the assumption that the ultimate performance of a company comes down to its intangible assets. By contrast, most financial indicators essentially refer to past performance and therefore reflect outcomes rather than the value-generating drivers in an organisation. In theory, the intangible asset indicators used in such approaches could provide business cases for knowledge projects in areas where performance is lacking. However, not a single example of such a link between macro approaches and concrete projects was given at the conference. Rather, the macro and micro camps seem to have little to do with each other, as concrete knowledge projects usually seem to be driven by other, more immediate organisational needs.

The main problem is to establish a direct causal link between concrete initiatives and their impact on business performance. This difficulty does not, however, prevent practitioners from evaluating the impact of their knowledge projects, although this is often not ‘measurement’ in a strict sense. As often argued, ongoing senior management support requires some evidence of success. As a result, KM owners have come up with a series of pragmatic ways of demonstrating the impact of their knowledge projects, as shown for instance by Paul Riches from BT and Andrew Cowell from MWH. The armoury ranges from anecdotal and case-study evidence and feedback from users or participants to large-scale user surveys and indirect measures such as system usage.
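
To show how readily available such indirect measures are, here is a minimal sketch that derives basic usage statistics from a hypothetical knowledge-base access log; the log format and data are invented.

```python
# A minimal sketch of an indirect measure: simple usage statistics over a
# hypothetical knowledge-base access log. Format and data are invented.

from collections import Counter

access_log = ["alice", "bob", "alice", "carol", "alice", "bob"]  # one user per hit

hits = len(access_log)
hits_per_user = Counter(access_log)
repeat_users = sum(1 for n in hits_per_user.values() if n > 1)

print(f"{hits} hits by {len(hits_per_user)} distinct users; {repeat_users} returned")
```

As the next section argues, none of these figures guarantees that usage translates into business performance.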

 

 

The measurement paradox

It emerged that quantitative measures can actually be very limited in ‘measuring’ knowledge processes. For instance, system usage is very easy to measure, but there is no guarantee that usage will translate into individual or business performance: the measure is too indirect. The impact of soft interventions such as communities of practice is even more difficult to assess in terms of time savings, the ‘amount’ of learning or financial value added. The apparent precision of quantitative measures is offset by the fact that they often do not really measure what they are supposed to measure. In many cases, therefore, anecdotal evidence and case studies seem to be more useful.

It can be argued that for some specific projects, such as idea management systems, an ROI case can be proven relatively easily, as their output can be directly related to financial gains. But it has to be borne in mind that ROI can only capture part of a project’s impact, because projects always have unintended consequences or effects that cannot easily be captured as (financial) ‘return’. These effects can be negative or positive, potentially undermining the validity of an elegant ROI calculation. In general, ROI models will have more validity when projects address efficiency or productivity concerns. By contrast, it will be difficult to prove the ROI of projects focusing on more intangible assets, such as cross-project learning or competency development. In such cases, a good theory might be a better way of convincing senior management to commit resources to KM than an array of indicators and ROI percentages.
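
To make the ROI point concrete, here is a minimal worked sketch for a hypothetical idea management system; all figures are invented, and note that the unintended effects discussed above never enter the calculation.

```python
# A minimal sketch of a naive ROI calculation for a hypothetical idea
# management system. All figures are invented for illustration.

def roi(gain: float, cost: float) -> float:
    """Classic ROI: net gain divided by cost."""
    return (gain - cost) / cost

savings_from_implemented_ideas = 250_000  # direct, measurable output
project_cost = 100_000                    # software, facilitation, staff time

print(f"ROI = {roi(savings_from_implemented_ideas, project_cost):.0%}")  # ROI = 150%

# The paradox: unintended effects (better morale, or staff time diverted
# from client work) never appear here, so the apparently precise 150%
# may over- or understate the project's real impact.
```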

Use measures as a heuristic

But theory can indeed be supported by quantitative measures if they are used carefully. Sveiby’s Collaboration Climate Index (CCI) is an example of a measure that can support the business case for improving collaborative relationships between employees. The theory behind the CCI is that a good collaborative climate enhances knowledge sharing and therefore the development of intellectual capital. The index does not necessarily tell you how to actually improve the collaborative climate. Nevertheless, the CCI is a useful heuristic for understanding the factors that are important for collaboration and can therefore inform concrete projects. Likewise, Andrew Cowell from MWH showed how a simple tool can be used to facilitate knowledge sharing between different business units. Using a web-based questionnaire, MWH asked engineers to assess the current performance of their units against the desired level of performance in the future. The results were used to put high-performing offices in touch with lower-performing offices. Similar tools were used by Bradford & Bingley, the bank, to assess competency gaps as perceived by employees and their colleagues using a 360° method, as explained by Margaret Johnson and Ian Dixon.
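
Here is a minimal sketch of the kind of gap analysis and matching described in the MWH example; the scoring scale, office names and pairing rule are assumptions made for illustration.

```python
# A minimal sketch of survey-based gap analysis and office matching, in the
# spirit of the MWH example. Scale, data and pairing rule are invented.

surveys = {
    # unit: (current performance, desired performance), on a 1-5 scale
    "Office A": (4.5, 5.0),
    "Office B": (2.0, 4.5),
    "Office C": (3.0, 4.0),
    "Office D": (4.0, 4.5),
}

# Rank units by performance gap (desired minus current), smallest gap first.
ranked = sorted(surveys, key=lambda unit: surveys[unit][1] - surveys[unit][0])

# Pair each high-performing (small-gap) unit with a low-performing one.
half = len(ranked) // 2
for strong, weak in zip(ranked[:half], reversed(ranked[half:])):
    print(f"Put {strong} in touch with {weak}")
```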

 

Conclusion

These measures are by no means ‘objective’, but they provide a useful heuristic that can inform concrete projects and actions. For knowledge practitioners, such pragmatic tools are more relevant than overcomplicated methods of measuring ‘knowledge value’. The practical problem organisations face is not necessarily measuring the value of knowledge but improving their ability to exploit and create knowledge.

 

As René Rupert from INSEAD pointed out at the conference, behaviour is key in this respect. As knowledge sharing is a social activity, it crucially relies on how people behave towards their colleagues, bosses, customers and suppliers. In turn, behaviour can be shaped either by culture or by incentives. Given the intangible nature of knowledge, and the mostly intrinsic rewards drawn from ‘knowing’ something or teaching somebody, incentives are often not very effective in stimulating knowledge behaviour. This means ‘culture’ is key to the knowledge performance of an organisation. At the same time, culture is notoriously difficult to change. In this context, the role of the knowledge officer is to create cultural pockets where employees can interact and learn in a context unaffected by a mainstream organisational culture that might be hierarchical and non-communicative. Communities of practice and knowledge networks are examples of such pockets that might, in turn, have a positive impact on other practices in the organisation. Given their focus on intangibles, measuring ‘knowledge value’ will not be a priority for such projects. The key is rather to make a convincing case based on good arguments, pilot projects and case evidence, and to use quantitative tools as a supporting heuristic.

 

Markus Perkmann

 

About KIN: The Knowledge and Innovation Network brings together practitioners from leading organisations with an interest in behavioural aspects of Knowledge Management. The Network is a joint venture of Warwick Business School and Leicester University. Website: www.ki-network.org - Contact:  Copyright (C) 2002 Knowledge and Innovation Network.