First off, the news. Yes, I’m now working for Aberdeen Group. And a big shout-out to my fellow analysts. :-) Now back to commentary.
The latest edition of MIT’s Sloan Management Review has a couple of interesting articles, one on the effects of real-time data and one on IBM’s Innovation Jam. The one on real-time data (“The Downside of Real-Time Data”, Alden Hayashi, reviewing recent research by Swaminathan and Lurie) suggests that getting new data in real time can actually be harmful: decision-makers fed daily retail data instead of weekly averages tended to over-react to positive or negative news, placing inventory replenishment orders that were actually less optimal than those of the “behind the times” managers, especially when day-to-day sales variance was high. The researchers do note that systems can compensate to some extent by ensuring that decision-makers are given some historical context on the same screen as the latest figures.
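To make the researchers’ suggested mitigation concrete, here is a minimal sketch (my illustration, not anything from the article) of what “historical context on the same screen” might look like: the latest daily figure displayed next to a trailing average and its distance from that average in standard deviations, so that a one-day spike reads as a spike rather than a trend. The window length and the sales numbers are purely illustrative assumptions.

```python
from statistics import mean, stdev

def contextualize(daily_sales, window=7):
    """Pair the latest daily figure with trailing context so a
    decision-maker sees today's number against its recent baseline."""
    latest = daily_sales[-1]
    history = daily_sales[-window - 1:-1]  # the `window` days before today
    baseline = mean(history)
    spread = stdev(history)
    return {
        "latest": latest,
        "trailing_avg": round(baseline, 1),
        "sigma_from_avg": round((latest - baseline) / spread, 2) if spread else 0.0,
    }

# Illustrative data: a noisy but stable week, then one strong day.
sales = [102, 95, 110, 90, 105, 98, 101, 140]
print(contextualize(sales))
# {'latest': 140, 'trailing_avg': 100.1, 'sigma_from_avg': 6.07}
```

Shown bare, the 140 invites an over-reaction; shown next to its trailing average, it is plainly one outlier day, which is exactly the compensation the researchers describe.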
I think it’s reasonable to extrapolate to CEOs and CMOs (not necessarily CFOs; they are so tied to long-term data that it’s hard to imagine them over-reacting to the latest uptick in inventory in a rolling balance sheet). Yes, the latest craze for 360-degree-view performance-management dashboards based on event processing could indeed encourage CEOs to take a short-term view not only of the company’s stock price but also of the strategic levers they have to pull, resulting in erratic shifts in company tactics and/or strategy.
However, I think this point is entirely irrelevant to the main value-add of real-time presentation of event-type data (that is, data that represents a shift in the environment: a customer order is an event, a description of a product part is not). Each event-response loop is, or should be, part of a business process; the main idea is to speed up the response in order to satisfy customers, react to unanticipated events, or save money, not to improve decisions. The side effect is that delaying a response often forecloses options, so that decisions are necessarily less optimal; these studies do not test that notion. In other words, in the real world, if you speed up event-response loops by presenting real-time data within the context of an existing business process, you will indeed include context or you won’t need it, and so 99% of the time what results is good for the organization.
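As a sketch of what I mean by designing the loop rather than the dashboard (again, my illustration, not from either article; the event shape, window, and threshold are all assumptions): the loop below responds to every sales event the moment it arrives, but the decision rule it feeds is already framed against recent history, so the speed buys responsiveness without inviting the over-reaction the study warns about.

```python
from collections import deque

class ReplenishmentLoop:
    """Illustrative event-response loop: react to each sales event
    immediately, but decide against a trailing baseline, not the raw tick."""

    def __init__(self, reorder_sigma=2.0, window=7):
        self.history = deque(maxlen=window)  # rolling context for decisions
        self.reorder_sigma = reorder_sigma

    def on_sales_event(self, units_sold):
        decision = "hold"
        if len(self.history) == self.history.maxlen:
            baseline = sum(self.history) / len(self.history)
            spread = (sum((x - baseline) ** 2 for x in self.history)
                      / (len(self.history) - 1)) ** 0.5
            # Act only when today is a genuine outlier against its context.
            if spread and (units_sold - baseline) / spread > self.reorder_sigma:
                decision = "reorder"
        self.history.append(units_sold)
        return decision

loop = ReplenishmentLoop()
for day in [102, 95, 110, 90, 105, 98, 101, 140]:
    print(day, loop.on_sales_event(day))  # only the 140 spike triggers "reorder"
```

The point of the sketch is where the context lives: inside the loop’s decision rule, as part of the business process, rather than as a chart the decision-maker may or may not glance at.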
Btw, the real danger, which we may run into someday, is “herd behavior” such as we have seen in the markets, in which multiple organizations “head for the exits” at the same time in immediate reaction to the same news, creating an avoidable vicious cycle. But the game is still worth the candle …
Anyway, the second article describes how IBM ran a massive “jam session” in 2006, asking its employees to contribute ideas for turning IBM technology in its labs into products. The first phase simply collected the ideas; the second phase, after managers sifted and aggregated them, attempted to turn the ideas into clear product concepts with attractive markets. Areas where the Jam gave a boost that usually resulted in real-world products included green technology, health care (electronic health records), and integrated transportation management. The authors (Bjelland and Wood) conclude that this is a promising new way of running an innovation process, with the usual tradeoffs (“powerful but ponderous”).
I think they’re missing a bet here. Has nobody noticed the similarity to open source? In other words, we’re back to the idea of worldwide but somewhat unrepresentative proxies for the customers helping to design the product. To put it another way, these designers (a) combine the technical skills and company knowledge needed to design products with the reactions of global customers; (b) are treated to some degree as an equal mass, not as a hierarchy in which those lower down are ignored; and (c) insist on going their own way, not the way the company lays out for them.
Here I must make a confession: I participated in much the same sort of process at Prime, shortly before it melted down in 1989. The big advance in IBM’s approach, compared to that one, was that it treated all ideas as equal, as far as possible. Prime, by its DNA, was unable to resist having “senior gurus” and desperate GMs take charge, with the result that the only ideas that might have saved Prime never surfaced or were ignored. (My own idea was “let’s build a new kind of desktop that allows cross-filing”; I still think it was better than the favored senior-guru idea, “let’s build a framework and figure out what it’s a framework for as we go along”.) The other major difference was that IBM in 2006 had pantloads of employees who could contribute real new ideas based on constant engagement with the world beyond IBM; Prime in 1989 had very few.
My net-net? For the favored few that are large and have that kind of employee, there should be more Innovation Jams, not fewer; anything to escape “not invented here.” For the rest of us, I think it just won’t work: not enough critical mass, not enough qualified participants. Instead, let’s go back to our communities. They’re a bit more reactive; but, boy, do they do a better job as proxies for the market.