Categorizing knowledge





Theoretical discussions of knowledge management typically start with the distinction between tacit and explicit knowledge. Ignore the tacit dimension for a moment and consider a progression of four kinds of explicit knowledge that can shape how I collect, organize, and share knowledge.

Sharing Answers






  (What answers could be reused?)






  The starting point is concrete, specific information that already exists and answers a current question. It may be research available from a third party, or it may be material that resides elsewhere in the organization.






  In this form, we are talking about the boundary between information and knowledge. The knowledge component lies in the ability to formulate the question intelligently coupled with awareness of the places where the answer might be found.






  A surprising number of the knowledge management problems identified in organizations fall into this category, as do many of the laments by senior executives about the need for better knowledge management. Consider a consulting team designing an e-commerce website for a client. The team needs to size the technology based on estimated usage and to estimate the costs for software licenses. One approach would be to poll hardware and software vendors for estimates. This risks getting estimates that are biased toward the vendors’ interests. Moreover, it takes time to obtain this data at a stage in the project when it may be premature or inappropriate to share information with vendors.






  Alternatively, a team might work with published price sheets and performance benchmarks at the risk of seriously misestimating performance or economics through a failure to understand how to adapt public information to the case at hand.






  A better solution would be to find a path to teams that have recently gone through the same exercise and have valid and reasonably current data on vendor discounting practices and on differences between real-world technology performance and laboratory benchmarks.






  Managing this level of knowledge depends on developing a profile of the questions that come up frequently, coupled with up-to-date and authoritative answers. That is most easily done if people in the organization are adept at formulating questions precisely and if there is some central place (an information center/library) where an inventory of questions asked, answered, and open can be maintained. A key supporting process is to weed out answers that have become obsolete or superseded, with the proviso that a sophisticated weeding process would also allow old answers to become new again under the right change in context.
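
  For the inventory piece, a small data structure is enough to make the idea concrete. The sketch below is a hypothetical illustration in Python; the class and field names are my own assumptions rather than a reference to any particular system. Each question carries its answer, its sources, and a status, and a weeding pass flags stale answers for review rather than deleting them, so an old answer can become current again when the context changes.

      # A minimal, hypothetical sketch of a question/answer inventory.
      # Class and field names are illustrative assumptions, not an existing system.
      from dataclasses import dataclass, field
      from datetime import date, timedelta
      from typing import Optional

      @dataclass
      class QuestionRecord:
          question: str                       # the precisely formulated question
          answer: Optional[str] = None        # None while the question is still open
          answered_on: Optional[date] = None
          sources: list = field(default_factory=list)  # where the answer came from
          status: str = "open"                # "open", "answered", or "needs_review"

      def weed(inventory, shelf_life_days=180):
          """Flag answers past their shelf life for review instead of discarding them,
          so an old answer can become new again under the right change in context."""
          cutoff = date.today() - timedelta(days=shelf_life_days)
          for record in inventory:
              if record.status == "answered" and record.answered_on and record.answered_on < cutoff:
                  record.status = "needs_review"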

Sharing Questions






  (What questions are being asked? How can we get more mileage out of our capacity to ask better questions?)






  The next step after learning to reuse answers is learning how to reuse the questions to create new answers. In this category would be efforts to identify new conceptual models or new diagnostic capacities that might be transferred. It abstracts one level from the product (answers) to the first level of process. We stipulate that the answers will likely be different, but that the inquiry processes that generated them have value in their own right.






  One example of this process is the U.S. Army’s success at institutionalizing a questioning process called the After Action Review (AAR). The premise is simple, the process powerful.






  Growing out of the Army’s emphasis on realistic training, an AAR examines the difference between the plans on the map and “ground truth.” How does what actually happened differ from what we expected to happen when we drew up our plans? To be truly useful, this review has to be focused on what might be done differently next time, not on who was to blame for what just happened. Even with that goal firmly in mind, effective AARs must be done as soon after the action concludes as possible. “Ground truth” gets muddy as people work to get their stories straight.






  “…on the actual day of battle naked truths may be picked up for the asking. But by the following morning they have already begun to get into their uniforms.” (E.A. Cohen and J. Gooch, Military Misfortunes: The Anatomy of Failure in War (New York: Vintage, 1990), p. 44, as quoted in Karl E. Weick and Kathleen M. Sutcliffe, Managing the Unexpected: Assuring High Performance in an Age of Complexity (San Francisco: Jossey-Bass, 2001), p. 58)






  An AAR seeks answers to only a handful of questions. What did we expect to happen? What actually happened? Why did it happen that way? What would we do differently next time? Getting honest answers does depend on providing a safe haven for uncomfortable answers. That is easier to do in settings that appreciate that the real world typically has little respect for plans.
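
  Because the AAR boils down to four fixed questions plus a timeliness constraint, it is easy to capture as a simple template. The sketch below is a hypothetical illustration, not an Army artifact; the four questions come straight from the text, while the field names and the timeliness check are my own assumptions.

      # A hypothetical sketch of an After Action Review captured as data.
      # The four questions are from the text; field names and the timeliness
      # check are illustrative assumptions.
      from dataclasses import dataclass
      from datetime import datetime, timedelta

      @dataclass
      class AfterActionReview:
          action: str
          action_ended_at: datetime
          conducted_at: datetime
          expected: str        # What did we expect to happen?
          actual: str          # What actually happened?
          why: str             # Why did it happen that way?
          next_time: str       # What would we do differently next time?

          def is_timely(self, max_delay=timedelta(days=2)):
              """Ground truth gets muddy quickly; flag reviews held long after the action."""
              return self.conducted_at - self.action_ended_at <= max_delay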






  Many of the 2×2 grids that consultants are fond of fall into this category of leveraging questions. All are examples of asking the same underlying question: what do I see if I look at the world through this particular lens? The goal is always the same: is there an interesting pattern here? Unfortunately, consulting presentations generally need to cut to the chase. While that’s a sound communication strategy, it does tend to gloss over the inquiry process that is potentially more valuable than the particular answers discovered this time. As Peer Munck, one of my former consulting partners known affectionately as the Mad Norwegian, liked to observe: “If you torture the data long enough, it will always confess.” What good consultants develop over time, and what organizations can look for as a form of reusable knowledge, are questions that have shown success at getting the data to confess to interesting crimes.

Sharing Practices






  (How can we get more people to do X as well as Cleveland?)






  Gabriel Szulanski at Wharton has been investigating this category of knowledge sharing recently. It is the problem of identifying the best performers within the organization and figuring out how to bring the rest of the organization up to that level of performance.






  Like most interesting problems, this is more difficult than it might seem, starting with understanding what best performance looks like and appreciating how much that performance is likely a function of local context. Understanding the contribution of context to performance is particularly important to transferring practices successfully. Certainly, it would be nice if every element of the organization could perform at a uniformly high level of quality, but that denies the variability that exists in the real world. Disney has been extraordinarily successful at replicating its theme parks. Yet EuroDisney in France has been largely a disappointment. The differences are many, ranging from different attitudes toward jobs in the workforce to different expectations about children and vacations. All of the issues, however, can rightfully be grouped under the heading of differences in context.

Discovery/Innovation






  (What if we mix A with B?)






  The most sophisticated and difficult use of knowledge is in the discovery or invention of new knowledge. Much has been written elsewhere on this topic (e.g., Dorothy Leonard-Barton, Wellsprings of Knowledge). Here the focus is on how existing knowledge feeds into the process and what approaches are worth considering for generating, organizing, and managing knowledge as feedstock.






  At a 50,000-foot level, there are two aspects of existing knowledge as it contributes to creating new knowledge. The first is as nuggets of raw material that can be combined and recombined in new configurations, some of which may turn out to be interesting or valuable. The second is as multiple series of interlocking nuggets that form patterns, which may also be interesting or valuable in their own right.






  These two aspects make an interesting pair as they effectively anchor two ends of a spectrum: at one end the elemental fact, at the other the grand unified theory. The curious thing about discovery and invention is that you cannot pick one end or the other of this spectrum, nor can you occupy some middle position. Rather, you must work at shifting your balance back and forth between the two. One failure of many knowledge management systems is an implicit and unexamined attempt to straddle the middle position we’ve just declared undesirable, if not impossible. On the one hand, much of the material managed in today’s knowledge management systems represents current instances of knowledge nuggets embedded in particular patterns appropriate to the current context. On the other, the contextual embedding of the materials obscures the more general patterns.






  Consider a final consulting report to a client or the proposal that generated the client project. Both are examples of the kinds of deliverables frequently collected and stored in knowledge management systems. Even with adequate contextual descriptions, these deliverables are hard to draw value from as they stand, especially with respect to generating ideas for new products or services.






  For example, a project might have pioneered the use of a new financial valuation model to evaluate alternative proposals. This knowledge nugget might be useful across a wide range of current and future projects. Buried as slides 23-27 inside a longer report, this nugget may never be discovered. The costs of search and retrieval overwhelm any estimate of potential value, and the next team will create its own analysis from scratch. This kind of knowledge is then restricted to the slower movement of project team members being reassigned from one client to another at the end of the project. Moreover, important elements of the analysis (how to collect the appropriate raw data, explanations of the applicability of the technique to a particular situation) will likely be scattered across the working files of the project and the individual files of members of the project team. If one goal of the knowledge management system is to invent new approaches to valuation, that goal will be hindered by the monolithic design of the knowledge system.
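
  One way to picture the alternative to that monolithic design is to catalog nuggets separately from the deliverables that contain them. The sketch below is purely illustrative; the names and the example entry are assumptions, not a description of any real system. Each nugget records where it lives, what it covers, and a few abstract labels, so a later team can find the valuation model without reading the whole report.

      # A hypothetical sketch of nugget-level cataloging. Names and the example
      # entry are illustrative assumptions; the point is that a nugget is indexed
      # on its own, with a pointer back to the deliverable it came from and
      # labels that transcend the particular project.
      from dataclasses import dataclass, field

      @dataclass
      class KnowledgeNugget:
          title: str
          location: str        # e.g. "final client report, slides 23-27"
          project: str
          summary: str
          labels: list = field(default_factory=list)  # abstract, cross-project tags

      def find_by_label(catalog, label):
          """Retrieve nuggets by abstract label rather than by the deliverable that buries them."""
          return [n for n in catalog if label in n.labels]

      catalog = [
          KnowledgeNugget(
              title="Financial valuation model for alternative proposals",
              location="final client report, slides 23-27",
              project="e-commerce site engagement",
              summary="How to collect the raw data and when the technique applies.",
              labels=["valuation", "decision analysis"],
          )
      ]
      matches = find_by_label(catalog, "valuation")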






  Patterns across multiple projects are somewhat easier to discern. However, once again the contextual details of each deliverable can work to conceal the structural similarities that might suggest new products or services, for example. Here, the necessary process is one of explicitly abstracting and generalizing from multiple details to the relevant underlying patterns. This can be done more effectively with conscious attention to and labeling of abstract patterns that transcend the particulars of individual projects.