Monday, February 27, 2012

Thoughts on Dangerous Misunderstandings of Statistics

I was listening to an IBM presentation of their latest Business Intelligence offering today, including new capabilities based on their SPSS statistical package, and my mind wandered, as it often does, to a way I have seen such statistics capabilities used wrongly in our daily lives, to the point where the misuse becomes a danger to us. And no, I am not talking about “confirmation bias” (look it up?).

Here is how, in my naïve understanding, statistics presently works for us. We have a worldview. Scientists using statistics beaver away and discover possible changes to that worldview (A new drug helps against cancer? Tobacco may be harmful to you?), and then test them against a null hypothesis until it is extremely likely that our worldview should be modified. At that point they announce the result, they get beaten up while others check their work (if necessary), and then, as rational people, we change our worldview accordingly.

No, again, my problem is not with how rational we are. Rather, I am concerned with what happens between the time when a new hypothesis shows promise and the time when it is pronounced “extremely likely” -- statistics-speak for “time to change.” [For those who care, think of “extremely likely” as a one-tailed or two-tailed test in which the likelihood of rejecting the null hypothesis is greater than 95% to 99%, and power and data-mining bias are just fine.]

So what’s wrong with that? We’re risk-averse, aren’t we? Isn’t it the height of rationality to wait for certainty, rather than risk going off in the wrong direction and doing worse instead of better?
Not really. Let me reprise a real-world example that I recently saw.

Polar Bears and Statistics

A scientist in Alaska (Dr. Monnett) had been observing polar bears, sampling one-tenth of his area each year, for 20-odd years (I have stripped this down to its statistical core). One year, for the first time, he observed 4 polar bears in their migration swimming rather than walking across the ice. The next week, again for the first time, he observed 3 polar bears in that area, dead. Let me also add, to streamline this argument, that this was the first time he had observed open water next to the shore during migration season.

Here was what that scientist did, as a good statistician: he wrote a paper noting the new hypothesis (open water is beginning to occur, and it’s killing polar bears), and indicating that, based on his sample size, an initial alternative hypothesis was 40 polar bears per year swimming instead of ice-walking in that region, causing 30 additional polar bear deaths per year. He then requested help from other scientists in carrying out similar surveys in other regions, while he continued his yearly sample in his own region. Duly, they did so, and the evidence has grown stronger over the last five years. Afaik, it has not yet been officially recognized as an extremely likely hypothesis, but it seems reasonable to guess that, if it has not already been done, this hypothesis will be recognized as an “extremely likely scientific fact” in the next five years.

Now, let’s look more closely at the likely statistical distribution of the data. The first thing to realize is that there can’t be fewer than zero polar bear deaths. In other words, sorry, this is not a normal distribution; think of a Poisson-style count distribution with a floor at zero. It’s not a matter of a cluster of data points around zero and a cluster greater than zero; if you get 20 years of zeros and then a year of 3 or 4, especially given the circumstances, the alternative hypothesis is in fact immediately more likely than your conservative statistician is telling you, even before more data arrive.

Now look at the statistical distribution of polar bears swimming in the new situation. Because you only have a year’s worth of data, that distribution is pretty flat. If you add that, on average, there are 10 polar bears migrating in the area surveyed, then the distribution runs from zero to 94 polar bears swimming in that year, and zero to 100 in any year, in a pretty flat fashion. But statistics also tells us that 40 is the most likely number – even if it has only a 2% likelihood – and that it is also the median of likely outcomes as of now: you are just as likely to see more polar bears swimming in the region as fewer. In other words, it’s pretty darn likely that some polar bears are going to be swimming from now on (you can take as given that there’s going to continue to be open water), and, if so, the new null hypothesis is 40 per year.

So the only question is, when in the last five years should we have changed our worldview? And the answer is, not after five years, when conservative statistics says the new null hypothesis is “extremely likely.” Rather, depending on the importance of the statistics to us, we should change after the first year’s additional data, at the latest, which is the point at which the new null hypothesis becomes much more likely than the old. [Again, for those who care: the point at which the zero-swims Poisson (lambda) model becomes, say, half as likely as a roughly normal model centered somewhere around 40 polar bears.]
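
For readers who like to see the arithmetic, here is a minimal sketch (in Python) of the kind of likelihood comparison that bracketed note gestures at. The lambda values are illustrative assumptions of mine, not the actual survey numbers; the point is simply how violently a single year of non-zero counts tilts the odds.

from math import exp, factorial

def poisson_pmf(k, lam):
    # Probability of seeing exactly k events when the expected count is lam
    return (lam ** k) * exp(-lam) / factorial(k)

observed_swims = 4      # one year's observation in the sampled strip
lam_old = 0.1           # assumed: the old "swims essentially never happen" worldview
lam_new = 4.0           # assumed: ~40 swims per region-year, so ~4 in a one-tenth sample

p_old = poisson_pmf(observed_swims, lam_old)
p_new = poisson_pmf(observed_swims, lam_new)

print("P(4 swims | old worldview)  = %.2e" % p_old)
print("P(4 swims | new hypothesis) = %.3f" % p_new)
print("Likelihood ratio (new/old)  = %.0f" % (p_new / p_old))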

Because now we have to ask, what are we using this data for? If it’s a nice-to-have cancer cure, sure, by all means, let’s wait. If it’s a matter of just how fast climate change is happening …

Of Course Disastrous Climate Change is Still Unlikely. Isn’t It?

I picked that last example above on purpose, as a kind of shock therapy. You see, it appears to me that there is a directly comparable “super-case”: how we – or at least, almost all of the people I see quoted – view the likelihood of various scenarios for upcoming climate change.

Let’s start with the 2007 IPCC report upon which most analysis is based. Those sane, rational folks recognize that this IPCC report lays out all the scientific facts – those aspects of climate change that scientists have confirmed are “extremely likely” – and fits them together into a model of how climate change is happening and will happen. In that model, we can, by reasonable efforts, limit global climate change to 2 degrees C total, reached by the end of the century – a disastrous amount, but, given the time frame over which it occurs, not civilization-threatening.

That, it turns out, as in the polar bear case, is just about the absolute minimum. And, as in the polar bear case, what the scientists knew in the years before 2007, and have to a fair extent confirmed since then, is that it is neither the most likely case nor the median-likelihood case under any scenario except those involving a really drastic reduction in fossil-fuel use over the next 8-18 years. It is just that, still, all the real-world data has not quite reached the point where scientists can stamp it as an “extremely likely” new hypothesis. But even in 2007, the worse case was quite a bit more likely than the 2007 “scientific fact” model; it is now far, far more likely.

So, as in the polar bear case, here we have the most likely, median-likelihood case, and it is much worse than the minimum case, and what does everyone talk about? The minimum case – which means that it is all too easy to assume that a reasonable effort on our part (or the “free market” beginning to take care of everything) will hold climate change to 2 degrees C or below.

Just to cite a few examples: political leaders are talking about conferences to set carbon-emissions targets (bad ones, but valid targets if we assume the global economy doesn’t grow from now on) aimed at reducing emissions by 20% by 2020-30, the number that should hold climate change to 2 degrees C; and Prof. Krugman, whom nobody regards as an optimist, points approvingly to the work of an economist expert in the relationship of climate change to economics, who argues that although climate change of 2 degrees C is really unlikely, we should do something about it because of the horrible consequences if it did occur.

These are not irrational people (or, at least, their thinking here seems pretty rational). It is just that scientists, concerned about the scientific process, have told them that the IPCC report is scientific fact and the stuff beyond it is not, and so we, who should be operating on the basis of “most likely” and “median likelihood,” are instead operating on the basis of a stale “null hypothesis” and “alternative hypothesis.”

Moreover, we should not expect scientists to change their tune. It is their job to move from scientific fact to scientific fact. It should be our job to listen to their “opinions” as well, and weave them together into an understanding of just what the most likely, median-likelihood model is right now. And, because in this case the most likely, median-likelihood case is pretty darn frightening, we must prioritize; we must see that this is not like the polar bear or the cancer cure: The answer to this matters a lot, right now.

A Quick, Boring Aside on Climate Change Statistics, Right?

Not that anyone cares, but what is that most likely, median-likelihood climate change model, and what are the resulting scenarios over the next 90 years or so? Well, first let’s look at where real-world data is diverging from the “minimum” model. Arctic ice decrease is supposed to be linear, and the alternative hypothesis has been that it’s exponential. In the first case, less than 5% ice at minimum somewhere around 2100; in the second, somewhere around 2030. Real-world volume data? Somewhere between 2016 and 2020. Oh, and the same data shows the Arctic all but free of ice year-round by somewhere between 2035 and 2045.
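
To see why the linear-versus-exponential question matters so much, here is a toy sketch. The starting volume, threshold, and loss rates are made-up numbers of mine, chosen only to show the qualitative gap between “around the end of the century” and “within a few decades” – not to reproduce the dates above.

START_YEAR, START_VOLUME, THRESHOLD = 2012, 100.0, 5.0   # arbitrary units of ice volume

def year_reaching_threshold(doubling):
    # Both models lose 1 unit in the first year; the "doubling" model
    # multiplies its loss rate by 2 every ten years (2 ** 0.1 per year).
    volume, rate = START_VOLUME, 1.0
    for year in range(START_YEAR, 2101):
        volume -= rate
        if volume <= THRESHOLD:
            return str(year)
        if doubling:
            rate *= 2 ** 0.1
    return "not before 2100"

print("linear loss reaches ~5%:     ", year_reaching_threshold(False))
print("exponential loss reaches ~5%:", year_reaching_threshold(True))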

It’s a similar story with Greenland ice. It looks like it’s doubling its rate of ice loss every decade, but the null hypothesis is that it will be linear from now on. Antarctic ice? The null hypothesis is still zero ice loss; initial surveys suggest linear ice loss; you think maybe we’ll start to figure out exponential ice loss in the next ten years? Natural-source methane release: we can’t rule out zero natural-source increase, the likelihood is that a fairly big increase in the last 10 years is due mainly to natural causes, it’s too early to tell about exponential, never mind the Russian surveys. At least, with atmospheric carbon measurements, somebody has noted a semi-linear increase in the annual increase over the last thirty years, and another scientist has come up with a reason that the rate of temperature change caused by that rising increase will top out at 1 degree C per decade in 2050 – and he’s not operating on the basis of the minimum model, thank heavens.

So what does that mean? Well, the closest any scientist has come to a most likely, median-likelihood model is the MIT 2011 study. “Business as usual” – a reasonable effort – yields a 6 degrees C (11 degrees F) temperature rise by the end of the century and 10-15 feet of sea-level rise. Of course, the results of that are probably not much more than the results of a 2 degrees C rise, right?

And, remember, this model is still more scientifically “conservative” than the most likely one. Now we’re talking more than 6 degrees C by 2100, and perhaps 25 feet of sea-level rise, mostly between 2050 and 2100. And still, there’s that methane kicker to possibly add – which is “unlikely” now because we still aren’t sure how it will play out; it’s happening faster than we thought, but the absolute upper limit on its effects is coming down as we learn more.

Back to Statistics, Wrapping Up

I would dearly like for scientists to make a reasonable effort, as part of communicating a result, to tell us the implications of most-likely, median-likelihood models based on that result. I can understand why they won’t, given how the legal system and the press beat them up for imagined “sins” in that direction. Which is why I would simply ask the reader to make the effort, in cases that just might be very important to the reader or to us all.

Because, you see, in the long run, scientists are all dead, and so are we. If we wait for scientific fact to be established, which can sometimes take a lifetime, no one can criticize us for it. But, in cases where we should have acted long before we did, because, in fact, our old worldview was dangerously wrong and the evidence told us well before the final scientist’s official extremely-likely stamp, we ought at least to feel some pangs of guilt. Rationality isn’t enough. Understanding and compensating for the limitations of our null-hypothesis statistics hopefully is.

Saturday, February 25, 2012

Michelle Accardi-Petersen's Agile Marketing: The Flaws of So Many Virtues

I am partway through reading one of the best marketing tomes I have ever read, skimmed, read the blurb of, or seen summarized: Agile Marketing, by Michelle Accardi-Petersen (CA Press, 2011) – and it ranks pretty high among books in general. As someone who follows agile theory and practice carefully, and who has been a fellow-traveler with Michelle in the computer industry for lo these many years, as a programmer, B-school graduate, manager, analyst, and consultant to marketers, I found the first paragraph to be one of the most pitch-perfect I have ever seen. The rest of what I have read so far goes directly to the meat, presents it clearly and attractively, and grasps the overall implications of agile methods and marketing for each other, with a strong, accurate sense of the importance of the topic.

And so, of course, as is my custom in these cases, I am now going to criticize this book viciously for two key ideas that I view as dangerous and profoundly headed in the wrong, un-agile direction. (Actually, there’s a third, but clearly Michelle is discussing that one just to humor a fad, even if she doesn’t realize it, so it doesn’t really taint her message. I’ll touch on it at the end, briefly.)
Briefly stated, those ideas are:

1. For the customer (but, of course, he said sarcastically, not for the marketer), perception, not product, is what's important; in a sense, perception is reality.
2. Agile marketing, like other agile methodologies, is about adapting rapidly and effectively to change.

Before I begin eviscerating these fascinating ideas, I should pause and invite those readers with “confirmation bias” to stop reading. By that I mean, if you feel that I am an arrogant, ignorant upstart in criticizing ideas that have been seen as important parts of marketing for many years, and have the sense that the rest of this piece will turn out to be wrong, I am sure you’re right. Please stop reading right now.

Still here? Boy, that reverse psychology worked. Because, of course, I’m just asking you to perceive the world differently, not accept that the world is different; and I’m only asking you to adapt to a customer’s suggested change in your own ideas, not develop additional new ideas before being prompted. Right?

All right, then, let’s dive in. Sharpen your knives for Idea 1.

Marketing Into the Delicate Dance of Perception and Reality

I have encountered this one many times over the years, in unexpressed assumptions, and expressed as “products never fail because of product characteristics, but because of flawed differentiation/positioning/advertising/understanding of the market we’re in.” Let me start with two stories that I believe directly contradict this.

The first one is the story of a software product back in the 1980s called LAN Manager. It was sold by Microsoft and IBM, who were dominant at the time in the PC market. It was sold to the correct market – businesses – and sold with conviction, imho, by both Microsoft and IBM. I can vouch for the fact that not only among businesses, but also among techies, the general view was that LAN Manager was the future. I have heard many rationalizations for the failure of LAN Manager over the years, and they all strike me as bad descriptions of what I saw at the time.

At that time, I was tasked with comparing LAN Manager and Novell NetWare technically, and as I examined LAN Manager I noted an odd fact: it required twice as much main memory on the client PC as any PC then on the market, or due on the market for the next 1½-2 years, could provide. In other words, in the real world, LAN Manager would be effectively unusable unless an organization waited to buy a new round of PCs that wouldn’t exist for a couple of years, and even then those PCs might not make sense as replacements for existing ones. I should also add that the key, and very effective, selling proposition of the new network operating systems was that you could use a spare PC as the server.

At this point, I can hear the objections: probe a little deeper, and you will find that marketing failed to coordinate new product development effectively. Sorry, but you’re still missing the point: this was the typical act of organizations for whom the customer’s perception of reality was everything, and manipulating that perception was all you had to do. Microsoft and IBM never saw the problem because they assumed that giving programmers general directions was enough, and that marketing could handle the rest. Because they were so focused on the customer’s perception, they were blind to the customer’s reaction to reality.

Hey, that’s one case, right? All right, here’s another. When I was at Yankee Group in the early 1990s, it was a game among publications to get projections from analysts at different firms as to network operating system share of market in the coming year. Three times I did this. Three times, unlike every other analyst, I picked Novell NetWare to increase market share – it was already at a hefty 70% as I recall, but the heavyweights of the overall computing industry, from DEC to IBM, now had viable, well-marketed NOSs of their own, and every business surveyed indicated that they were going to increase the share of these competitors in the coming year. Year after year, this was occurring, and year after year, Novell NetWare’s share increased, just as I predicted. Monopoly power? Don’t make me laugh. So why was I right, and everyone else wrong?

Well, here’s a gem of a case I collected from DEC itself. I was talking to one of their engineers, and he told me that in order to get their work done, he and others had set up a NetWare network. But because they weren’t supposed to be doing so, they “tunneled” it as a small part of the overall DEC network, invisible both to their managers and to the administrators keeping watch over expenditures and usage of competitors’ products – until the volume on the NetWare network grew so large that it began crowding out all the other traffic on the DEC network. And that, typically, was how NetWare won out: no matter what the corporate strategy was, individual employees acting on behalf of their own needs crowded out corporate plans. Marketing, meet reality.

I hear another bleat from the marketers – not proven. Again, you’re missing the point. In this case, DEC marketing was so focused on manipulating the perceptions of its traditional customers, the CIOs and CEOs who held the purse strings – which, obviously, it did quite well, as it had in the past – that it failed to realize how the reality of greater NetWare usefulness, right then, to reality-based programmers and techies trumped corporate plans. It overlooked the degree to which users’ clear view of the real comparative usefulness of the products would create a situation in which, eventually, CIOs bowed to reality and started making NetWare a corporate standard; and then the game was well and truly over, and the mini makers had lost control of their low-end customers. Everyone else bet on perception, as a reflex; I bet on reality (I had used NetWare in the past), because I felt strongly that in this particular case it would trump well-manipulated perception.

I am not just saying that “perception is reality” is wrong in minor ways. I am saying that another idea is a much better model for what is going on, and that, in the long run, and sometimes in the short run, using my idea as a basis for agile marketing is likely to do much better than Michelle’s Idea 1. But before I go on to that idea, let me summarize the ways I believe Idea 1 is wrong-headed:


• If you analyze particular failures and successes carefully, the design of the product, independent of how marketing thinks it should be portrayed (i.e., its “reality”), often plays a key role in long-term success, and in a few cases in short-term success. I emphasize long-run success, because initial success that emphasizes perception too much takes the organization’s eye off the technology-development ball, and causes a greater and greater gap between a pleasant perception and a not-so-pleasant reality. That may have worked in the stable markets of fifty years ago, but not today.
• The idea that perception is reality reinforces in corporate marketing and strategic planning a fundamentally flawed – because biased – view of the customer. It focuses the marketer on manipulating a malleable and necessarily partially unreal perception, rather than engaging in a delicate dance with the customer to get him/her to play with your offered perception rather than another’s, always keeping in mind that a key segment of your market will be spending the majority of their time doing something else and dealing with the necessary realities of real life, and therefore there is only so far you can go in bending and misdirecting around reality. Bluntly: just because you’re a successful marketer, you think the customer is what you want him/her to be in order to sell your product. As an antidote, try to imagine (preferably, checking against reality) a day in the life of several customers and several who are not customers, and then see how much your “buy, buy, buy” caricatures his/her overall behavior.
• Above all, the idea that perception is reality leads to un-agile marketing behavior. It does so because it causes you to misperceive changes in the customer – misperceive them as greater and greater acceptance of the perception you are selling them, rather than the tendency of a sub-segment of the market to become addicts. Except when something is forbidden, addicts are not a good long-term market. If McDonald’s was only delivering more fat and sugar, it would not be growing; instead, its customers would be busy dealing with their excesses via ill-health and reduced buying. What else is it delivering? Not just convenience; it’s also delivering enough food to live on when cost is very important. If, in that delicate dance, you misperceive the customer, your rapid changes do not fundamentally get at what is croaking your markets; they are not, in the long term, effective responses. Therefore, they are fundamentally un-agile – they just give the perception of agility. But hey, perception is reality for the marketer too, right?
• So you can guess my final bullet point. If the marketer thinks perception is reality for the customer, perception will become reality for the marketer, and the rest of the organization: manipulate perception of the marketer by the rest of the organization, manipulate perception of the goals of the organization by employees, manipulate perception of the financials by the stock market – if we do it right, perception will become reality, because perception is all that really matters. And that’s why we see, in the latest surveys of CMOs, a new appreciation for the importance of company culture in sales, and appreciation of the difficulty of changing it to what the marketer would desire in order to sell perception better. Because there’s the marketer’s perception of what the company is; and then there’s the reality of employees’ daily lives and all the other messages their organization keeps sending them.

My Modest Alternative

Having gotten thus far in my musing, I found that Michelle’s wonderful discussion of marketing agility put me in an agile frame of mind, suggesting to me a new way of thinking about things.

So let me present it in its most provocative form: What companies are selling is not product so much as a partial worldview, and what customers do is not so much buy – that is the final or intermediate act – as take part in that worldview, fitting it with the rest of their worldview, for a brief time, hopefully repeatedly. Moreover, just as much as companies compete with other companies’ products for a share of the customers’ money, they compete with other customer worldviews for a slice of the customer’s time. That time, the company should accept, will and should always be a relatively small fraction of the customer’s overall time. If your employees are always buying your products, when do they work for you?

That customer worldview, or parts of it, are always evolving, not just as the customer moves through life but also as the customer always seeks better worldviews. Better means new, not the reverse. Better is, in some sense, reality, and if you are a lucky marketer, your new product will be part of that customer’s better reality. Yes, you can get away with perception that is well away from product reality in the customer’s life for a while, and the customer will not reject it, but, because it is not delivering any benefits but fantasy, after a while, the customer will increasingly prefer another worldview more based in reality. You can sell a Dungeons & Dragons game, but if you don’t think up ways to apply such fantasy usefully to some of the rest of the user’s life, the market of non-addicts will slowly drift away to fantasies that are more useful, like fantasy vacations. And then, the final refuge of the trapped marketer – monopolies that cling desperately to high prices in one area, until they fail to cross the chasm they no longer can recognize.

So marketing strategy is no longer “perception is reality, I must create the right perception.” Rather, it becomes: how far can I go in creating perception before reality bites me in the donkey? How do I manage both perception and reality so that both are in tune with the customer? And how do I reality-check with customers that my worldview is a viable, differentiated customer worldview, worth spending time on in the long run?

Good time to discuss Idea 2.

Be Proactive!

Michelle is only the latest in a long string of commentators to imbibe received, unexamined wisdom about what agile software development is, and to assume it always means reacting to the customer’s change, or to the organization’s environment, or at best doing one’s utmost to predict future customer behavior based on the past and then preparing for a few alternatives. No, no, no. Agile is equally about being proactive. Agile is about both sides of the conversation, the developer and the customer, contributing equally, each held in check by – and also using as a source of new ideas – the other. Agile is about product development that is as informed by the developer’s sense of the possible in the future as it is by the customer’s sense of what is needed right now.

Let me give you an example of a fundamental marketing failure caused by the fact that, almost universally, organizations have failed to grasp this idea. Back in the late 1970s, I worked on one of the first word processors. The problem then was how to make it accessible to a consumer – a secretary, an administrative assistant, even, heaven forbid, a fast-track youngster doing spreadsheets – to store and view “files” of information on a word processor or computer. And there were two obvious answers: a desktop, on which you store individual “files,” and a file cabinet, in which you can store lots of folders containing files. A desktop was everyone’s experience; a file cabinet, in those days, was managed mostly by the secretary.

You are probably beginning to guess where this is going. The first word processors tried to implement file cabinets. Then they went for desktops with folders scattered around the tops. Meanwhile, the programmers were screaming at the marketers that this was crippling the full power of the word processor and PC to store information, and the marketers were screaming back that people would never understand folders within folders, until finally someone tried it, and, lo and behold, people just used the first level until they got the idea and then were perfectly happy to use several levels of folders; and the product that got there first was among the winners until the next chasm came along. Except that, sadly, the story doesn’t end there.

You see, there was another thing that some programmers were screaming about. With a simple extension of existing computer file storage organization, with the same look and feel as “folders within folders”, you could store the same file in multiple folders. In essence, you could cross-reference with incredible ease. But, gee, said the marketers, people will never understand this. And clearly, they don’t care; no one’s demanding it, are they? And so it has been, from that day to this, with one extremely minor exception:

GOOGLE SEARCH!!!

Yes, I’m being very rude here. Has no marketer yet realized that one major value of search engines to the average user is that they give a temporary cross-reference of multiple files on the same keyword, different from the static “it’s on this Web site” address? And if you do realize it, how could you possibly, possibly not think that users would really, really find that kind of cross-reference useful on their basic PC or browser desktop, in addition to today’s way of storing information? But hey, developers couldn’t possibly have anything to do with what users would find useful, would they? Technology-push. Dirty, dirty word. Oh, and by the way, funny how Steve Jobs dared to try drag-and-swipe, fifteen years after programmers first started talking about it. Yeah, he was very good at fitting it to customer uses; but customers weren’t asking for it, were they?

And the most ironic part of this is that Michelle, with her right brain, is saying that agile is reactive, while with her left brain she is talking about keeping marketing simple. Because that’s typically when programmers or designers can anticipate user needs well – via something that mathematicians (and, believe it or not, good programmers are good mathematicians) call orthogonality. Basically, an orthogonal user interface is one in which (at the top level) a minimum number of approximately equally powerful actions cover most or all of what the user wants to do. You want an example? Take a look at the original Microsoft Word GUI. File. Edit. A couple more. Help. Did you realize that was one of the key reasons people started preferring Word to WordPerfect? Yeah, there were some verbs and some nouns, but basically people could look at it and say, yeah, now I do something involving the overall file, like Print it. Now I do something that involves editing. Now I … That was a developer’s idea – orthogonality. WordPerfect was more a marketer’s idea. When it’s orthogonal it’s simple – see, Michelle? Only the developer can do orthogonal, because the customer doesn’t spend the time thinking about things from the developer’s point of view; the customer just thinks about the next change for the next immediate need.
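
To make “orthogonal” a bit more concrete, here is a toy sketch of my own (not from the book): a handful of equally general verbs – a “File” family and an “Edit” family – that apply uniformly to whatever the user is working on, instead of a special-cased command for every verb-and-noun combination.

class Document:
    def __init__(self, name, body=""):
        self.name, self.body = name, body

# The orthogonal top level: a "File" family and an "Edit" family of verbs,
# each applying uniformly to any document, the way the early Word menu bar did.
def open_doc(name):                 # File family
    return Document(name)

def print_doc(doc):                 # File family
    print("--- %s ---" % doc.name)
    print(doc.body)

def insert(doc, text):              # Edit family
    doc.body += text

def replace(doc, old, new):         # Edit family
    doc.body = doc.body.replace(old, new)

# Any document, any verb, no special cases -- that uniformity is the simplicity.
memo = open_doc("memo.txt")
insert(memo, "Ship the marketing plan by Friday.")
replace(memo, "Friday", "Thursday")
print_doc(memo)

That uniformity is exactly why the orthogonal design feels simple: the user learns a few verbs once and then reuses them everywhere.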

And that’s the funny thing about agile software development. It encourages orthogonal software development, because it turns out it’s quicker to develop. It encourages the developer to suggest ideas to the customer, not just the other way around. You know, in looking at implementing this human resources app, it occurred to me that it would be straightforward to link your profile to your Facebook page. You can do that??? Oh, yeah. In fact, to make it orthogonal, I can just leave open what you want to link to – like Google Groups. Wow. Let’s see, what could I do with that …

I don’t think we need to belabor what my fix is: it ought to be obvious. Marketers should start thinking like Saki’s master politician, who knows just how far to go (what the customer is demanding) and then goes just a little bit farther (extends it to simplify the design, or gets a programmer to do that during product development, or preferably both). Crowdsourcing? Crowdsourcing is still reactive. You need a single voice that simplifies and goes a little bit farther. And you do it in a rapid, spiral fashion. Customer leads a little bit; you lead a little bit; customer leads; you lead. Reactive agility goes to where the customer has been. Proactive agility goes where the customer doesn’t know he/she would like to go – and then reality-checks.

Just one last sneaky point, and then my sermon is over. Why do marketers ignore the very real and valuable orthogonality contributions of developers? Could it be because the reality of the product doesn’t matter, only its perception by customers?

Oh, Yeah, Third Criticism, But Even I Don’t Care in This Case

Somewhere in the first chapter, Michelle drops a casual remark about “lean, agile” as if this were a good thing. Yeah, there are plenty of folks in product development who believe sincerely in that one. I have a simple test: when you are creating the product time-line, what’s more important – the agility of the participants, or just-in-time provision of resources to the project? If you rush one programmer toward the next task, or delay him/her, are you doing it so that no resource goes un-utilized and none are unnecessarily utilized, or because the programmer needs that done in order to spend extra time improving the agility of his/her process? Who is the master, agile or lean?

I have written on this extensively elsewhere. I understand that Thoughtworks, these days, is generally doing “lean in the service of agile” right. Here’s the bottom line: If lean wins, you inevitably become less and less agile. And that, in turn, will mean that despite a lean veneer, you will have greater costs, less profit, and less customer satisfaction. Do you want the perception of cost savings in that project? Or do you want the reality of better company financials, short-term and long-term, rain or shine? Don’t tell me “lean, agile” is necessarily a good thing.

Keep On Changing, Michelle! Pay Attention, Everybody!

So, after many pages of attempted disemboweling, I anticipate that most readers who have hung in this far (ah, so my reverse psychology didn’t work after all, did it?) will have the nagging thought in the back of their heads: if the writer is this critical of Michelle, how can he possibly mean those nice things he said about her and the book at the top?

Again the answer is simple, and, I hope, orthogonal. There’s a story that Mrs. Justice Oliver Wendell Holmes once met President Theodore Roosevelt at a party, and, as was his habit, he asked her what she thought of his Presidency. “Oh, Mr. President,” she replied, batting her eyes at him, “I believe that everyone should be forgiven the vices of his virtues, and you, Mr. President, have so many virtues!” He beamed happily at that one – until he suddenly got the point.

I am, I hope, less catty than Mrs. Holmes in saying the same thing. There would be no point in pointing out the points I disagree with if I did not feel that this book was so close to getting it right (there’s that arrogance again) that it is vitally important that Michelle be read, and her readers consider these points. And consider them agilely. Pay attention to Michelle, all you marketers! Keep on changing, as she says! And, please, keep on changing, Michelle! Changing my agile marketing blues away …

Tuesday, February 21, 2012

How To Have Fun Following the Stock Market

Over the course of 40 years, I have been following the stock market. Obviously, I have a bit of a personal interest, but it’s also fun to make up my own mind about what it all means. And, of course, it’s a bit like rooting for a team – with the added fun that, over the long term, the stock market has gone up by about 10% per year, so you tend to win at the end of the day a bit more frequently than you lose – definitely not the case with some of my sports teams.

So let’s talk about how I might do it (bearing in mind, of course, that these are brief glimpses via a TV stock market ticker in places like restaurants, or on the radio while travelling on occasional days, plus checking the results at night as part of looking for good news to go to sleep with). The first thing I do is concentrate on the S&P 500. The Dow Jones, I have found, isn’t always representative of the overall stock market, because its selection of stocks is a bit quirky; and, of course, the NASDAQ is too heavily high-tech-weighted. The S&P contains stocks from both, and so, if the NASDAQ goes one way and the Dow goes another, as happens surprisingly often, you can pretty much count on the S&P being somewhere in the middle.

The next thing I do is remind myself that the S&P 500 does not include income from dividends. According to studies I have seen, those dividends add about 2.15% to the S&P’s return each year. So, if the S&P is down a little on a given day, I can comfort myself with the reflection that so far in the year, it has actually returned, say, 1% more than the number suggests. If you want a further comforter, S&P publishes “total return” (S&P 500 plus dividends for the year so far). And that doesn’t include reinvestment of dividends, which, if the market has gone up by 15% so far, will yield you another 0.15% by the end of the year (p.s., due to the magic of compounding, that will be nothing to sneeze at in 5 years or so). See, I’m winning already!
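
If you want to see the back-of-the-envelope arithmetic, here it is in a few lines, using the illustrative numbers above (a 15% price gain so far and roughly 2.15% a year from dividends); the assumption that dividends are reinvested, on average, halfway through the move is mine.

price_return = 0.15        # the 15% price gain used as an example above
dividend_yield = 0.0215    # the roughly 2.15% a year that dividends add

# Assumed: dividends are reinvested, on average, halfway through the price move,
# so they capture about half of the year's gain.
reinvestment_kicker = dividend_yield * price_return / 2

print("price only:        %.2f%%" % (price_return * 100))
print("plus dividends:    %.2f%%" % ((price_return + dividend_yield) * 100))
print("plus reinvestment: %.2f%%" % ((price_return + dividend_yield + reinvestment_kicker) * 100))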

If I happen to be able to look at the stock market before the opening bell, the only thing I look for is “S&P futures”. That happens to be a fairly reliable indicator of where the stock market will hover until about 2-2:30 pm … more on that later. Be careful of the name, because there’s another S&P futures market whose abbreviation is very similar, but which tends to be negative when S&P futures is positive, and vice versa.

And then there’s one more thing I can do, and pretty much do every day. That is, look to see if it’s sunny out. You see, I live near Boston, reasonably near NYC – near enough to guess what the weather is like around 8 or 8:30 in the morning, when the traders go into NYSE to prepare for the day’s session. A fascinating study a while back showed that when those traders see sunny weather outside, they tend to be more optimistic in their outlook, and so the S&P 500 tends to go up more often. And so, when it’s sunny out at that time, no matter whether it’s pouring the rest of the day, I am more likely to anticipate a good day. It’s sunny this morning! I’m going to win, yay!

Unfortunately, I find that the news rarely gives a good indication of what will move the market today, no matter when in the day you check. After many years of listening to it, I tend to distrust ex post explanations. So I just check, if I’m lucky enough to be able to, what is happening around 10:30-11, when the spurt of the S&P 500 to its futures level has been pretty much over for a good half hour. On the rare occasions when the futures projection is countered by sudden events, that’s when the big drops or big gains will show up. Yay! The Fed is going to give guidance today, and traders usually overestimate the effect on the positive side before the announcement around 2!

If by some amazing chance I am able, around lunch, to watch the S&P 500 update on an every-half-minute basis, I can watch the dance of the buyers and sellers. The way it plays out, these days, is a surge up of less than a point as buyers bid the price up, followed by a drop of less than a point as sellers sell the price down. It’s only after 2-3 minutes that you can see whether the overall trend is up or down – unless a thrilling jump suddenly moves the market up by a point or more. Or an annoying big drop interrupts the festivities. To heck with the business news – where’s the sports?

But the real surprises tend to occur around 2-2:30, when suddenly the market can go completely away from the futures prediction. It is almost always due to breaking news – the Fed, late-in-the-day government reports, political news, late news from Europe. If no break occurs then, the futures prediction is probably good for the rest of the day. Yay if it was sunny this morning!

And then the final bell rings at 4, and about 4:04 to 4:06 you get the final S&P 500 figure of the day. But, oddly enough, if the S&P is up more than 1% (these days, about 13 points), I don’t get too happy, and if it’s down more than 1%, I don’t get too sad. Because if traders really understood what was happening, there’s no way there would be jumps like that. I’m a cynical guy; as far as I’m concerned, traders can be pretty unsophisticated in their fears and enthusiasms, so what this tells me is that traders are running around like chickens because it’s going to take them forever to see that the latest news, ultimately, is no big deal. And since that kind of thing can last for months, I just sit and wait until sanity arrives, with my own idea of where the S&P 500 should be. On the other hand, if it reaches the point where Intel is at 60 times earnings (as it was in 2000), or 5 times, that’s real craziness.

Next, I factor in calendar cycles. The one I like right now (or maybe “like” is too strong) is the end-of-the-month cycle in which, I suspect, traders start trading the price back towards its value at the beginning of the month – either to show gains in their monthly performance report if there are any, or because they have oversold through panic and now are just waiting, until the price comes back up through lack of sellers, to cut their losses. That tends to mean that I pay less attention starting around the 21st. Ho-hum.

The next thing I like to anticipate is the quarterly period, from about Day 15 of Month 1 to about early in Month 2, when companies make their earnings reports. Those companies, I really suspect, manage their numbers to analysts’ estimates. And that means, most quarters, many more cases where the companies beat the numbers by a little, and the stock market keeps bouncing up, bit by bit. Or there’s negative news, but the earnings reports offset it until the panic goes away (or the panic outlasts the reports, in a few cases). Fun times. Less-than-1% S&P 500 daily increases. Yay!

And then there is the full yearly cycle in which, many times, the “Christmas rally” tends to take place right after New Year’s, followed by continued upward movement to the later part of April, followed by a slow and, at the end, more rapid decline to mid-October, followed by a comeback until the end of the year. This varies a lot, but it typically means that about 2/3 of the year is an up-tick. Yay! And if it isn’t, one of my sports teams won again. Yay!

But the best part of it all is those times during the year when the S&P 500 is really on a hot streak, or at the end of the year. If the S&P 500 is on a tear, I can sit down and fantasize what my teeny S&P 500 index fund has grown to. If I’m at the end of a down year, I can sit down and remind myself that the underlying S&P 500 is growing by 10% per year, blithely ignoring inflation and eventual capital gains taxes (don’t rain on my parade!) And at the end of the year I can raise a toast, to my teeny S&P 500 index fund that over the last 25 years of 10% average returns has grown 10-fold, to a slightly less teeny amount, and fantasize that if life were just that index fund, I would be rich, rich beyond the dreams of avarice, and that my stock-market team had therefore won the Super Bowl.
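
For the record, the compounding arithmetic behind that 10-fold figure checks out – a two-line check, using the 10% average annual return assumed above:

annual_return = 0.10   # the 10% average yearly return assumed in the paragraph above
years = 25
print("Growth factor over 25 years: %.1fx" % ((1 + annual_return) ** years))

Roughly 10.8x: close enough to 10-fold for fantasy purposes.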

Happy New Year! Yay!

Saturday, February 18, 2012

The Other Agile Development: XtremeEDA and Product Development

This blog post highlights a software company and technology that I view as potentially useful to organizations investing in agile development, agile new product development, and business agility over the next few years. Note that, in my opinion, this company and solution are not typically “top of the mind” when we talk about agile development today.

The Importance of Continuous Delivery to Agile Development

I have taken a hiatus from this series over the last two weeks, but I knew one thing that I wanted to tackle: product development that includes hardware, or New Product Development (NPD) solutions. And I have to say that, in general, the situation is distressing.

It’s been more than ten years since the Agile Manifesto, and in that time there have certainly been vocal proponents of agile NPD. The lean movement, for one, keeps claiming that “lean” has incorporated the tenets of agile software development, that “lean” and “agile” are complementary, and that this is a methodology that is being applied to NPD in general.

Looking at this and other evidence, I am not at all convinced. There is little or no evidence that its proponents are intensively applying agile software development techniques and methodologies to actual hardware development, whether you’re talking about engineering a plane or designing and fabricating a chip. There is a persistent tendency in what I have seen of the agile/lean movement to treat just-in-time leanness and fast-design-change-from-user-feedback agility as equally important (or to treat lean as more important than agile), rather than carefully considering what it would mean to have super-lean NPD give way when agility demands (as, in agile software development, it often does) temporary use of extra resources plus Gantt charts that are not time-optimized. Finally, I cannot find clear evidence of forward thinking on how, if we are going to maintain that agile hardware development is not appropriate in many hardware-design cases, agile and non-agile will integrate effectively. It is clear from some of the comments I have seen that many hardware engineers are bewildered by the seeming Wild West of agile software development, and in no mood to face an overall schedule in which they depend on the changing specs and indefinite deadlines of the software side.

And yet, I was able to find one small but amazingly perceptive example of another kind of thinking about NPD. XtremeEDA Corporation, and in particular Neil Johnson there, is just what I’m talking about.

If you want to see an example, try submit2011.agilealliance.org/files/session_pdfs/applying%20agile%20to%20ic%20development.pdf. There it all is: How to turn ASIC development into agile; how to coordinate with agile software developers; how to use more customer feedback; and the message “to heck with minimal just-in-time resources and up with ‘one person owns a task.’” He even – be still, my heart – quotes Ken Schwaber, whom I knew, before his days as Mr. Scrum and as one of the original Manifesto signers, as the vendor of a superb rapid application development tool that basically “got” agile (alas, it was only for the AS/400).

Even I don’t want to overstate the relevance of agile development approaches to hardware NPD. There are still constraints, not the least of which is the fact that producing a hardware prototype or even a functioning sub-product prototype is in many cases a costly and time-consuming process that we cannot yet completely accomplish as swiftly as compile-load-go.

And yet, what Mr. Johnson makes clear is that, in the real world, agile hardware development and integrated, totally agile joint hardware/software NPD still do better than the traditional counterparts, in time, in money, and in customer satisfaction. And that’s just one product iteration. You tell me what rapid, incremental, customer-pleasing hardware improvements would mean for your bottom line.

Isn’t it time that not only IT but also the CAD and manufacturing side gave it some thought?

The Relevance of XtremeEDA to Agile NPD

I don’t want to misrepresent the imminence of agile NPD, so these next two sections will be short. Neil Johnson is one person in XtremeEDA, and clearly the firm doesn’t make a big point of his efforts, I am guessing in order not to spook its traditional customers for consulting and ASICs. I am sure that XtremeEDA does a fine job with these tasks, and that if you want to use them you need never run the risk (?????) of agile hardware development, or even hear about it, if you don’t want.

That said, I am also sure that if anybody does want to see how it’s really done, XtremeEDA would be very pleased to have someone like Neil show them, and any fees involved have got to be minimal considering the advantage to be gained. So what, bottom line, could XtremeEDA potentially offer if you go that route?

First, hands-on experience – as in the URL to which I just sent you. Second, creative thinking about how to apply this experience and the resulting insights to a wide variety of (at least in the chip area) hardware development projects, fab and otherwise. Third, both experience and creative thinking in how to mesh embedded-software and hardware agile development. Fourth, consulting and support for your efforts. Above all, fifth, suggestions on scaling agile hardware and hardware/software NPD.

The point of this exercise in imagining how XtremeEDA could help you is to lead you to the next question: How will this information make my NPD – specifically, NPD that includes hardware, like a cell phone or a child’s toy – agile? And the answer, in case I haven’t said it sufficiently, is that agile is, more than anything else, a process and a culture. You process NPD that way, you think NPD that way. If you are really doing agile NPD right, NPD agility comes with the package. If you use information from someone like XtremeEDA to implement agile hardware development and integration between your agile software development and agile hardware development, your NPD will be agile.

Potential Uses of XtremeEDA-Type Agile NPD for IT and NPD

In this case, IT and NPD are two separate things, and often, in the case of hardware development, two separate organizations. On the IT side, if there is a separate NPD organization, this means evangelism. It may not work – but with someone who has walked the hardware walk at your side, like XtremeEDA, you may actually get listened to. Of course, if the NPD organization is folded under a joint hardware-plus-software IT organization, it’s more a matter of top-down commitment from the CIO (and, hopefully, the CEO), and then using someone like XtremeEDA to hand-hold and calm the rightful fears of experienced hardware engineers.

If we’re talking about an overall NPD organization to which software development is both important and an annoying pest, on the other hand, you may want to consider initially leaving IT and/or agile software development organizations out of the conversation completely. That will avoid the usual antagonisms that surface between hardware and software teams, and it will also make the organization squarely face the fact that pure agile hardware development works – and there’s a hardware geek like someone at XtremeEDA to tell them so. Once the hardware-only implementation has shown the doubters, and worriers begin to turn into enthusiasts, you can let loose the software and hardware sides on each other, and they will automagically start practicing their getting-user-feedback skills on each other. An integrating process tool is great, but don’t forget that these folks may well do some of the integration on their own.

The Bottom Line for IT Buyers

I freely admit that an XtremeEDA approach to NPD may turn out to be of little value to most companies in the next 2-3 years. That, however, is not because it intrinsically has no value. Rather, it is because I am fully confident that most companies with significant hardware in their products will not figure out how to walk the walk before 2015. They will probably not be in a hurry, because they will rightly figure that most of their competitors will take the same attitude. Into the pool? You first. I live in hope; but I’ve seen too much.

However, when – not if – agile NPD really begins to take over, I urge you to consider carefully the further implications of truly agile NPD. My survey results, as well as experiences when software is the only product of the company, suggest that NPD is the absolute biggest bang for the buck for the business in implementing business agility, leading to permanent, 10-35% improvements in revenues, ROI, and customer satisfaction (not to mention quality), and similar reductions in costs, compared to what they would have been had you not implemented agile. Today, in many companies, agile software development is really about competitive advantage. In NPD, it is about the top and bottom lines.

Moreover, because NPD is always at the core of company success, agile NPD leaks into other areas of the company, far more than agile software development. Marketers cannot help being more agile. CEO strategies, given new power to change product directions fast, become more nimble, and the CEO begins to think more in terms of strategy change than enforcing execution. Even finance – well, maybe someday.

If, however, you really are a visionary, and would like to do it now, then I beg you not to talk about it publicly. What? XtremeEDA? Oh, they’re just here to help us out with some ASICs. Nothing to see here. These aren’t the droids you’re looking for. And your company’s success? Oh, just lucky, I guess. Fun, isn’t it?

Yes, it is. And XtremeEDA-type agile product development just happens to be quite profitable, too. Just a coincidence, I’m sure. Yeah, right.

Monday, February 6, 2012

Levels of the Game

I’m sorry, I can’t help it. I want to talk about the greatest football game I have ever seen. It was almost perfect.

You see, I’m a Patriots fan. I’m a Giants fan. I had some basic understanding of their DNA. And what I hoped for, and longed for, for this game, was a perfect display of Bill Parcells football. In it, there would be move, countermove, every coach’s decision, every quarterback’s decision close to the very best one in the situation given the players’ capabilities, every player playing up to the maximum of their capabilities given their nervousness and sometimes their unfamiliarity with the Super Bowl, all the way to the end of the game, and at the end, at the very last play, the Patriots would win. But that wouldn’t matter. Because what mattered was the perfection.

And for the first three quarters, and into the fourth, it happened just as I hoped. Move. Countermove. Never really taking too much of a chance. Never really taking too little. Every player beginning to play in the moment. Did you see the way the Patriots’ defense tried to strip Ahmad Bradshaw of the ball, just as they should? Did you see the way the very minor parts of the Giants’ team, like Henry Hynoski, began to trail Bradshaw and Nicks, so that twice they were in the perfect position to recover a fumble? You never see that in a football game. Never. Move. Countermove. Perfect timing of timeouts (with a couple of unimportant, to-be-expected exceptions from Manning in the second half). Perfect move and countermove on the special teams. And then, early in the fourth quarter, the referees screwed up.

You have to understand that until then, I had never seen referees perform at such a consistently high level. Every spot nearly perfect. Every call that mattered, right the first time, no matter how difficult. And then, on a third down with the Giants marching, the referees failed to pick up what I think was an extremely subtle foul by a defensive back. And that put everything out of whack. Tom Coughlin was furious. I don’t blame him. Because if everything had been perfect, and the referees had performed at the high level of the rest of the game, they would have picked it up, and the game would probably have gone just as I hoped.

And then Eli Manning saved the day. Both teams adjusted to the new reality, but we were still probably going to see a Giants’ victory due to that referee’s mistake – and then, as the Giants were marching down the field towards the almost inevitable touchdown and two-point try that would succeed, Eli managed to hit Nicks for a 14-yard gain on second down, when he should have succeeded only on third down – and a third-down success would have meant the Giants’ field goal coming too late, with Brady never able to get up the field in time for the Patriots to win. Now, things were back on track for perfection.

And then, on the very next play, someone or someones in the Patriots’ defense screwed up when they shouldn’t have. And the perfection was irretrievably wrecked. It was almost inevitable that Jacobs or Bradshaw would run. The Patriots defense should have been ready enough. Belichick had called timeouts and challenged an obvious call and used the two-minute warning to give them time to gather enough energy to make this play. Bradshaw should have gotten 2 or 3 yards. Then, some way or another, the Giants should have gotten to fourth down in the 10-20 area, kicked a field goal with about a minute to play, and Brady would have marched the Patriots down to between the 20 and 35 yard lines on the last play of the game and Coughlin would have called his last time out and Gostkowski would have nailed the field goal anyway and the Patriots would have won. And that wouldn’t have mattered. Because what mattered was that I was seeing thinking, every-effort football that was yielding extraordinary play after extraordinary play, from Jason Pierre-Paul blocking Tom Brady’s throw just right to Rob Gronkowski coming in on a bum leg and Tom Brady picking just the right spot to use him.

And now, the Patriots’ loss was just about inevitable. Now, Manning could afford to take the 4-yard pass to Nicks that gave him a first down. Now, the Patriots had to use their second timeout and then let Bradshaw score. Now, the time was just too short, and Brady’s best effort could not get the Patriots to the 15-yard line or closer, which would have given them a probable touchdown on the last play. Against the Giants’ defense, with both teams playing at the peak of their game as they were, Tom’s perfect throw into the end zone on the last play of the game had very little chance of succeeding. It would be only fitting that it fail; and it did.

But still, in the short interval before the almost inevitable happened, there were some extraordinary things that I have never seen. Did you see Tom Coughlin choosing to run for the two-point conversion? There was minimal chance that a fumble would cause a runback. If it succeeded, then there was still some small chance, if Brady somehow got a touchdown, that the extra point could be blocked. If it failed, and Brady somehow screwed up on his timing and got the touchdown too early, there was a small chance that Manning could take the ball down to field goal range and Tynes could nail a field goal and we would be in overtime. It was the absolutely perfect Bill Parcells coach’s move.

But the thing that really opened my eyes was the way, in the middle of the play where he scored, Ahmad Bradshaw realized what the Patriots were up to and suddenly, just as he was about to cross the goal line, stopped. In the way that absolutely minimized the chance that he would not score a touchdown at the end of the play. Just to take 2 seconds off the clock. I have never seen a player do that pitch-perfect thinking at that point in the game. Never.

Now think ahead to the end of the game. Tom Brady is around his own 45. There are 5 seconds left in the game. The only thing he can safely do is throw it in the end zone, from his own 45. And that’s a very low-percentage play.

But suppose he had six seconds, or seven -- if Bradshaw had not done that. He could have passed for another 11 yards, out of bounds or not, and the Giants, playing perfectly, would have let him. Now he would be around the Giants’ 45-yard line, with about one second to play. Now, just as before, he looks and looks downfield, is flushed out of the pocket, moves towards the sideline, and a lineman is heading towards him and a contain man is just downfield, in the perfect defense, and then suddenly he runs past the line of scrimmage. He darts past the contain man. Now the Giants’ defense in the end zone is beginning to react just as they should, and they move over to get him and handle possible nearby laterals in just the way they should, and he reaches about the 25-yard line just before they get to him and he whirls and throws a perfect strike across the field to some tight end who has drifted downfield. Before, that tight end would have been on the 35-yard line, so Brady would never have tried that play -- too low-percentage. And now we see that the Patriots’ offensive line has begun to block the Giants’ defensive linemen upfield, just as they should, and all those receivers in the end zone are coming to block the Giants’ defensive backs away from the other side of the field, just as they should, and still the Giants’ defense, playing perfectly, will get over there and tackle the tight end somewhere between the 5- and 10-yard lines. Except that maybe they won’t. Maybe somehow that tight end will find a way to make it into the end zone. It’s a better chance than trying to throw the football into the end zone from one’s own 45. And it wouldn’t have mattered which of the two happened. I would have seen one more extraordinary play.

Yes, all this is very improbable. But that’s the point. What Ahmad did, that’s real perfection. What Coughlin and the rest of the team did to bring Ahmad to the point of making that play, that’s real perfection. What the Patriots did to bring the game to that point, that’s real perfection, marred only by that one defensive play. The whole game brought Ahmad to that point of perfection. And, as it turned out, it mattered a little bit. That, to me, is real enjoyment. That is football as I first saw Bill Parcells develop it, way back in 1986 as he marched to his first Super Bowl win, and this time played by both teams. That is an absolute symphony of football, every player playing close to the best of his capabilities within a great plan and completely integrated with his teammates, so that each side keeps forcing the other to make an extraordinary play at the end -- or just miss it, which is almost as good.

Upset that Welker missed that big catch near the end? Don’t be. The Giants’ linemen were pressuring Brady just enough, and the defenders were just close enough to Welker, that about the best throw he could have made was that throw, and Welker would have had to make an insanely great catch. He didn’t make it? That’s not the point. The point is that he had reached such a level at that point in the game that he almost made that catch. It wouldn’t have made the game as good if he had made that catch. Whether he made it or not, wow.

You know, the best line I ever saw written about football was by Dan Jenkins in his book Semi-Tough. At the end of a fictional Super Bowl, even though the rest of the game has been far from perfect, for one brief moment one of the great players on one team knocks over the best player on the other team, with everyone else playing perfectly, to score a touchdown that wins the game. And the winning player goes over to the losing player’s locker room to tell him that, with all the unlucky breaks the losing team has had, that losing team should have won. And the losing player looks the winning player in the eye, and says, simply, “One thing I’ve learned, son: Them as should have won -- did.”

That is what we almost had. Not just that one team deserves to win, but that the deserving is proved out on the field, with both sides playing and acting at an extraordinary pitch, with that perfect amount of extra stretch, in a completely integrated way, and with even the lucky breaks evening up, so that all doubt as to who deserves to win is removed, and what we are left with are the plays, and the plays, and the plays. I know a lot of people are happy or unhappy for other reasons, but I wish they would see it my way. I wish all the players would realize that no matter how many times they go to the Super Bowl, they will probably never take part in a game as great as this one. Please, be happy. I sure am.