Friday, February 22, 2013

Parasoft and "Service Virtualization" Testing: A Good Idea

Recently there passed across my desk a white paper sponsored by Parasoft about the idea of applying what they called “service virtualization” to software testing.  Ordinarily, I find that we’ve “been there and done that” for much of the material in white papers like this.  In this case, however, I think that the idea Parasoft describes is (a) pretty new, (b) applicable to many software-development situations, and (c) quite valuable if effectively done.

The Problem

The problem to be solved, as I understand it – and my own experience and conversations with development folks suggest that it does indeed happen frequently these days – is this: in the later stages of software testing, during dependency and volume testing shortly before version or product deployment, one or more key applications that are not part of the development or upgrade but that have “interaction effects” with it are effectively unavailable for timely testing. It may be a run-the-business ERP application for which stress testing would crowd out the needed customer-transaction processing.  It may be a poorly documented mission-critical legacy application for which creating a “sandbox” is impractical. You can probably think of many other cases.

In fact, I think I ran across an example recently.  It went like this:  a software company selling a customer-facing application started up about five years ago.  Over five years of success, they ran that customer-facing application 24x7, with weekly maintenance halts for a couple of hours and 4-6-hour halts for major upgrades.  All very nice, all very successful, as revenue ramped up nicely. 

Then (reading between the lines), recently they realized that they had not upgraded their back-end billing and accounting systems that fed off the application, and these were increasingly inefficient and causing problems with customer satisfaction.  So they tested the new solutions in isolation in a “sandbox”, and then scheduled a full 12 hours of downtime on the app to install the new solutions – without “sandboxing” the back-end and front-end solutions working together first.

Everything apparently went fine until they started up a “test run” of the back-end and front-end solutions working in sync.  At that point, not only did the test fail, but it also created problems with the “snapshot” of front-end data that they had started from.  So they had to repeatedly reload the start point and do incremental testing on the back-end systems. In the end, they took more than two days (complete with anguished screams from customers) to add some changes to back-end systems and make the customer-facing application available again; and it took several more days before the rest of the back-end systems were available in the new form. As the white paper notes, in planning final testing of new software, companies are often willing to skip integration testing involving interdependent unchanged software – and the consequences can be quite serious.

The Idea

Probably to cash in on the popularity of “virtualization”, the white paper calls the idea proposed to deal with this problem “service virtualization.”  To my eyes, the best description is “script-based dependent software emulation.” In other words, to partially replace the foregone testing, service virtualization would allow you to create a “veneer” that would, whenever you invoke the dependent software during your integration testing, spit back the response that the dependent software should give. This particular solution provides two ways of creating the necessary “scripts” (my categorization, not Parasoft’s):

1. Build a model of how the dependent software runs, and invoke the model to generate the responses; and
2. Take a log of the actions of the dependent software during some recent period, and use that information to drive the responses.

Before I analyze what this does and does not do, let me note that I believe Approach #2 is typically the way to go. The bias of an IT department considering skipping the dependent-software integration testing step is towards assuming that there will be no problems.  The person building the model will therefore often implicitly build it the way the dependent software would work if there really were no problems – and response times are often guesstimates.  The log of actual actions introduces a needed additional note of realism into the testing.

However, the time period being logged almost inevitably does not capture all cases – end-of-year closing, for example.  The person creating the “virtualized service” should have a model in mind that allows him or her to add the necessary cases not covered by the log.
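To make Approach #2 (plus the model-based fallback just described) concrete, here is a minimal sketch in Python of a record-and-replay “virtualized service.” The log format, request strings, and the year-end fallback handler are all hypothetical illustrations of the idea, not Parasoft’s actual implementation:

```python
# A minimal sketch of log-driven "service virtualization": replay
# recorded responses from the dependent system, with hand-written
# model-based handlers for cases the log window missed.

class VirtualizedService:
    def __init__(self, recorded_log, fallback_handlers=None):
        # recorded_log: list of (request, response) pairs captured
        # from the real dependent system during normal operation.
        self.responses = dict(recorded_log)
        # Fallback handlers cover cases not in the log (e.g.
        # end-of-year closing), built from a model of the system.
        self.fallback_handlers = fallback_handlers or {}

    def call(self, request):
        # Replay the recorded response if we logged this request...
        if request in self.responses:
            return self.responses[request]
        # ...otherwise fall back to the model for uncovered cases.
        for matches, handler in self.fallback_handlers.items():
            if matches(request):
                return handler(request)
        raise LookupError(f"no recorded or modeled response for {request!r}")

# Usage: integration tests call the stub exactly as they would call
# the real back-end system.
log = [("GET /invoice/42", {"status": "paid"})]
eoy = {(lambda r: "year-end" in r): (lambda r: {"status": "closing"})}
svc = VirtualizedService(log, eoy)
assert svc.call("GET /invoice/42") == {"status": "paid"}
assert svc.call("POST /year-end-close") == {"status": "closing"}
```

The design choice matters: the log supplies realism (actual responses, actual timing data if you capture it), while the model fills only the gaps, which keeps the bias toward “assume no problems” out of the bulk of the test.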

Gains and Limits

The “service virtualization” idea is, I believe, a major advance over the previous choice between a major disruption of online systems and a risk of catastrophic downtime during deployment. If one takes Approach #2 as described above, “service virtualization” will add very little to testing time and preparation, while in the vast majority of cases it will detect those integration-test problems that represent the final barrier to effective testing before deployment.  In other words, you should be able to decrease the risk of software introduction crashes tenfold or a hundredfold. 

The example I cited above is a case in point.  It would have taken fairly little effort to use a log of customer-facing app interactions in a sandbox integration test with the new back-end systems.  This would also have sped up the process of incremental testing of the new software once a problem was detected. 

There are limits to the gains from the new testing approach – although, let’s note up front that these do not detract in the slightest from the advantages of “service virtualization”.  First, even if you take Approach #2, you are effectively doing integration testing, not volume/stress testing.  If you think about it, what you are mimicking is the behavior of the “dependent” software before the new systems are introduced.  It is possible, nay, likely, that the new software will add volume/stress to the other software in your data center that it interacts with.  And so, if the added stress does cause problems, you won’t find out about it until you’re operating online and your mission-critical software slows to a crawl or crashes.  Not very likely; but still possible.

Second, it is very possible that there will be a lag time between the time when you capture the behavior of the “dependent” software and the time that you run the tests.  It is fairly simple to ensure that you have the latest and greatest version of “dependent” software with the latest bug fixes, and to keep track of whatever changes happen online during sandbox testing offline. If you are just periodically refreshing the log “snapshot” as in Approach #2, or even operating from a model written a year ago, as happens all too often, then there is a real possibility that you have missed crucial changes to the “dependent” software that will cause your integration testing to succeed and then your deployment to crash. Luckily, after-the-fact analysis of dependent-software changes makes this problem much easier to fix – but it is better minimized up front, by straightforward monitoring of changes to the operational “dependent” software during testing.

The Bottom Line for Parasoft and “Service Virtualization” Testing:  Worth Looking At Right Now

The Parasoft solution appears to apply especially to IT shops with significant experience with “skipping” integration testing due to “dependent software”, and with a reasonably sophisticated test harness.  Of course, if you don’t have a reasonably sophisticated test harness that can do integration testing of new software and other operational systems in your environments, perhaps you should consider acquiring one.  I suspect that the case I cited earlier not only failed to sandbox integration testing, but didn’t have the test harness to do so even had they wanted to. 

For those IT shops fitting my criteria, there seems no real reason to wait to kick the tires of, and probably buy, additional “service virtualization” features.  As I said, the downside in terms of added test time and effort in these cases appears minimal, and the gains from the additional software robustness clear, and potentially company-reputation-saving.

I will, however, add one note of caution, not about the solution, but about your strategy in using it.  Practically speaking, “service virtualization” is rarely if ever to be preferred to full integration testing, if you can do it.  It would be a very bad idea to use the new tools to move the boundary between what you test fully, because you can manage it in a reasonable time, and what you test quickly and easily at the risk of disaster.  Do use “service virtualization” to replace naked “close your eyes and hope” deployment; don’t use it to replace an existing thorough integration test.

Kudos to Parasoft for marketing such a good idea.  Check it out.

Monday, February 18, 2013

The Other Sad Task of Combating Climate Change

Writing this kind of blog post tends to freeze my brain.  I find it astounding sometimes to be trying to present in a logical and calm fashion a description of horrors.  But there it is.

I noticed in perusing comments in various climate-change-related web sites that even among fairly well-informed folks there seems to be a misperception, which runs something like this:  The job of combating climate change is about slowing carbon emissions, preferably as quickly as possible.  As I understand it, that is half right.  There is another distinct task:  leaving at least a significant amount of carbon-emitting “fossil fuels” in the ground – forever, or at least for the next 100-1000 years.  Moreover, that task assumes that we do not discover major new sources of oil, natural gas, and coal.  If we do, then we need to leave the equivalent of a significant amount of present “reserves” plus all reserves discovered in the future in the ground.

One implication of this:  we need to understand that there is a Hard Stop somewhere in the future, a point beyond which we dare not use even one milligram more of fossil fuels.  If we cut carbon emissions drastically in the near future, and keep them cut, that Hard Stop almost certainly will never arrive – instead, we will suffer various degrees of what Joe Romm calls “Hell and High Water”, involving at worst the decimation (not in the Roman sense – in the sense that 9/10ths of humanity will die, mostly of starvation, disease, and poisoned air) of humankind.  If we continue on the present path of fossil-fuel use increases and minor moves towards “sustainability”, the so-called “business as usual”, that Hard Stop may even arrive by the end of this century.  That Hard Stop represents absolutely no further use of fossil fuels because the alternative might be the end of all life on earth, forever.

How can I say this?  How can I not be wildly exaggerating, in Mark Twain’s sense (“the reports of my death are greatly exaggerated”)?

Keystone and Game Over

It may strike people as odd that there is such an environmental furor over one oil pipeline project in the US (the Keystone XL proposal).  Here’s a frequently cited quote (paraphrased) by Dr. James Hansen on the subject:  “If Keystone XL goes forward, then it’s game over for the climate.”  Most people, my sense is, read that as meaning that some form of “Hell and High Water” becomes inevitable.  I believe that, instead, he is referring to a previous quote (in, I believe, his book Storms of My Grandchildren, and also paraphrased):  “If we use all our present reserves of coal and oil, there is a significant chance of a runaway greenhouse effect.  If we also use all our tar sands and oil shale, I view the runaway greenhouse effect as likely.” Before I explain my understanding of this, let’s note that Keystone XL transports oil derived from Canadian tar sands to US ports for export abroad.

What’s a runaway greenhouse effect?  If we look at Venus, we see a planet with extreme heat and with acid rain that dissolves any life forms that might exist in the air, and then evaporates before it reaches the surface.  However, if Venus had no atmosphere, there would be no extreme heat and no acid rain.  Instead the temperature would be a significant distance below the “runaway point” (estimated by Dr. Hansen at somewhere around 62 degrees Fahrenheit, iirc). Carbon or other substances in the atmosphere reflect light-generated heat bounced from the surface back to the surface again, trapping it – and also increasing the acidity of water (again, as I understand it).  If Earth passes that “runaway point”, then we will become like Venus.

Now, Earth without an atmosphere would be far below the temperature of Venus – below freezing, actually.  The atmosphere adds one layer of carbon-based reflection or “trapping” of heat (yes, I realize I’m simplifying drastically).  Life itself – all life, especially vegetable – adds another.  Life is carbon-based, and it creates a carbon cycle that emits carbon to the atmosphere, and then absorbs it in non-organic matter when it returns, via a process called “weathering” that deposits much of the carbon returned into the oceans. In ordinary times, this creates a way of handling perturbations in carbon emissions so that one returns eventually to somewhat of a “steady state”. And that “steady state” is still clearly under the “runaway point.”

Now here is where we get to the importance of leaving some fossil fuels in the ground.  Because we have seen “Hell and High Water” in the past, and life has been decimated but survived.  But what are fossil fuels, really?  Primarily the carbon deposited in the ground by life – especially vegetable life – over as much as the last billion years.  Now compare this episode of carbon emissions to all past episodes.  We have seen surges in carbon emissions from the Milankovitch cycles before, and from long-term underwater eruptions that bring new carbon up from the Earth’s core to the air.  We have seen methane spurts due to accompanying thawing of places like the Arctic that may have made the temperature rises and carbon in the atmosphere more extreme.  What we have never seen before is taking all the carbon stored over hundreds of millions of years and injecting it into the atmosphere over what could turn out to be a period of 200 years (and carbon has a half-life of perhaps 100 years in the atmosphere). 
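A bit of arithmetic shows what that rough “half-life of perhaps 100 years” figure implies about timescales. This is purely illustrative: it treats the post’s own rough number as a single exponential decay, which real atmospheric carbon chemistry is not (and, as the next paragraph argues, the decay slows drastically once the ocean sink saturates):

```python
# Illustrative only: model an emitted carbon pulse as a single
# exponential decay with the post's rough 100-year half-life.
def fraction_remaining(years, half_life=100.0):
    return 0.5 ** (years / half_life)

# After 200 years, a quarter of the pulse would still be aloft;
# even after 500 years, about 3% remains.
assert abs(fraction_remaining(200) - 0.25) < 1e-9
assert 0.03 < fraction_remaining(500) < 0.04
```

Even under this optimistic simple-decay assumption, a 200-year injection of hundreds of millions of years of stored carbon stacks up far faster than it drains away.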

And Hansen’s best estimate is that use of all of that stored carbon over a period even of much longer than 200 years is likely to bring on a runaway greenhouse effect.  This is because the ocean is the primary way of restoring equilibrium to the system, and at some point before we use up all that carbon, if we do it fast enough now, the ocean stops being able to absorb as much carbon (apparently, according to Wikipedia, because of the slowing of the “biological pump”) – and carbon coming down from the atmosphere cycles right back up again.  And so, once that point is reached, carbon doesn’t cycle very much back into the ocean – it goes on accumulating in the atmosphere, for a thousand years or more, until the ocean begins to regain its ability to absorb carbon. Thus, as we get close to the “runaway point”, we can’t just slow carbon emissions down to a point at which as much carbon is returning as is being emitted – the only point at which that is true is near-zero emissions, a Hard Stop.

Now, hopefully, you begin to see why it’s important to leave significant amounts of fossil fuels in the ground for at least 100-1000 years:  it keeps us away from that Hard Stop, and hence that “runaway point.”  It keeps us away from the ultimate horror.

So why is Keystone XL so critical to this?  There is at present no real market for tar sands oil.  There are very high up-front costs, which only the Canadian government has taken on so far – and no one else appears likely to, in the immediate future, if the Canadians don’t succeed.  The only realistic way of getting that oil from inland Canada to a decent market, it appears, is to add to existing pipelines and send it to ports in the southern US – all other routes appear to involve too-large costs and times of building new infrastructure – so without Keystone XL, further carbon emissions from tar sands oil will be minimal. If Keystone XL goes through, it appears likely that a significant portion of the world’s tar sands oil will be burned over the next 40 years – if not, not.

That’s why Keystone XL matters.  That’s why Hansen has been campaigning for several years to stop most worldwide production of coal, as the least painful way of avoiding the “significant chance” of a runaway greenhouse effect. That’s why people need to think about handling climate change today as not simply a matter of adaptation to “Hell and High Water” or slowing down carbon emissions by a couple of percentage points per year right now.  It has now reached the point where we need to face the idea of effectively never using some of those fossil fuels – not just letting the market assume that using it all is OK.

Action Items and Dyslogy

What our sad other task of facing climate change amounts to, therefore, imho, is not only to stop Keystone XL in its tracks.  It amounts to making sure that Keystone and its ilk never happen.  It also amounts to trying to ensure, with each future use of fossil fuels, that a comparable amount of reserves is made unusable, effectively, forever (or until 1000 years from now, whichever comes first).  And it means keeping an eye on new sources of fossil fuels, to limit their use sharply forever.

And if we fail?  I suppose we can write a eulogy for life on Earth.  Except that writing a eulogy for a species that ended life forever seems a bit off, somehow.  The opposite of Utopia is dystopia; I guess we should write a dyslogy.  Someone recently passed me the end of a Swinburne poem that seems to fit – it even includes the sea rise that’s an initial stage.  I have changed one word.

Here death may deal not again for ever;
       Here change may come not till all change end.
From the graves they have made they shall rise up never,
       Who have left nought living to ravage and rend.
Earth, stones, and thorns of the wild ground growing,
       While the sun and the rain live, these shall be;
Till a last wind's breath upon all these blowing
               Roll the sea.

Till the slow sea rise and the sheer cliff crumble,
       Till terrace and meadow the deep gulfs drink,
Till the strength of the waves of the high tides humble
       The fields that lessen, the rocks that shrink,
Here now in his triumph where all things falter,
       Stretched out on the spoils that his own hand spread,
As a god self-slain on his own strange altar,
               Man [Swinburne – Death] lies dead.

Tuesday, February 12, 2013

GoodData and Grey's Anatomy: It's Not Always the Vendor's Fault

Recently I received information about an interesting new vendor called GoodData – it appears that they are specializing in BI-type user interfaces for function-specific dashboards (of course, in the cloud and involving cross-database information display).  One of these was GoodSupport Bash, a “bash-up” of customer-support metrics and KPIs (Key Performance Indicators).  And that caused me to reflect on the idea of applying these types of “comprehensive” metrics to customer support.

Customer-support technology has a relatively long software-vendor history:  I remember first seeing the idea in action back in 1982, when Computer Corp. of America’s email system was used to track customer inquiries. Our forms capability allowed us to enter the call info in an email, send it for resolution to the appropriate techie, and then see the “email trail” as the problem was resolved.  Like many small companies, we had a reputation for excellent customer support because the techies were developers who had spare time to actually solve problems and the clout to change the code base to do so.  In fact, I was once told that one “guru” would go to install at new sales and when the customer asked for changes, he would make them without documentation or changing the main code base, so that the next version overrode the customization – it made the customer very happy initially, and very annoyed a year later.

However, in the early 2000s, when offshoring came in, major-vendor customer support was one of the first functions to move, and the semi-mature customer-support software of the day now had to define support personnel tasks very carefully, as the new Indian and Filipino respondents had some difficulties with American English and more difficulties with being up to date with software technology.  The resultant upheaval in customer support has never really settled down again, imho.  Companies like Microsoft would really like to move away from live-person customer support entirely, but the complexity of the technology means that users keep having to ask questions of live representatives who can be forced to pay attention.  For example, it was not until I got to a live representative that I realized that the Windows 8 version of Windows Media Player does not support DVD playing out of the box (according to PC World, you need the $10 Media Pro or some such).

Well, this in turn meant that phone-based customer support had to be “optimized” in some way, else waits of 2 hours and unresponsive representatives would cause a significant black eye for the company providing the support for its products.  And so, solutions like GoodSupport Bash allow companies to “take the temperature” of their customer support according to all sorts of criteria, like speed to reach a representative, time taken by the representative, and whether the representative is taking the opportunity for up-sell or cross-sell.

Except that I believe that the results of this kind of fine-tuning according to various management theories can sometimes actually be counter-productive – and the fault lies not with the customer-support software vendor.

Grey’s Anatomy and Ice Removal

I must confess that Grey’s Anatomy is one of my guilty pleasures, and I often find it difficult to watch without rotfl.  To me, GA is the ultimate in actor torture.  In just about every episode, at least one and often several actors and actresses must somehow make credible a complete 180-degree turn in their characters, with the words they are given to do this stretching the limits of belief that anyone could talk in this way.  As character after character changes partners or sexual orientation on the verge of a marriage and in the middle of a surgery, I can almost imagine the reaction as that actor or actress sees the week’s script – bracing themselves, and yet unable to anticipate the next contortion.  Although there are limits:  no one has taken a sudden interest in animal husbandry – yet.

Anyway, in this episode, doctors facing a hostile takeover of the hospital went to find out what this new highly-efficient corporation was doing, and posed as patients.  The corporation’s doctor started reading from a scripted set of questions, and when the “patients” attempted to derail him, he started the script over again until he could get through it.  This was done, we are assured, in the name of standardized efficiency.  You will recognize the similarity to some customer support experiences.

However unlikely the situation, the GA actors and actresses, trained to make the unlikely plausible, made it very clear how customer support driven by metrics appears to the customer.  The caller reporting something that often is unusual must be fit into a series of questions that attempts to cover only the usual.  Callers who do not fit are effectively ignored – you could share the frustration as the “patient” attempting to vaguely describe symptoms is asked to spend much time talking about things that are not problems, and the feeling that no one is listening.  At one point, the “patient” confesses that he is a doctor, pretends he’s already a new part of the company, and asks why these procedures; the “representative” assures him in a scripted manner that the procedures are more effective – except it’s clear that he must answer that way, no matter what he thinks, or lose his job.

Sound implausible?  Well, I actually went through a similar experience the other day, when I tried to see if a national gutter specialist could provide snow and ice removal on an emergency basis.  Sure enough, I wound up with a representative who took some time to realize that I was not calling about gutter cleaning, finally said that the company did not provide the services, and then spent three minutes attempting to upsell me on a bigger gutter cleaning contract, despite the fact that (a) I said twice that I was not going to do so, and (b) I had just been told that they didn’t do something else I needed.  It was clear that this was one of the metrics by which he was judged:  did you run through all the questions?  Did you upsell? You can imagine what I felt about the company afterwards – and it wasn’t the representative’s fault.

Sloan Management Review, iirc, had some interesting data on the effect of this type of thing on customer loyalty.  Often, the most loyal or biggest-spending before the disappointment were the quickest to jump ship – they felt “entitled”, as it were.  But even the truly loyal needed customization that truly paid attention to their needs, else they too found it difficult to stay.  Branding only goes so far.

My point here, however, is that none of this was the customer-support software vendor’s fault.  On the contrary, whether the company’s support strategy is good or bad, the typical indication from users is that something like GoodSupport Bash will make it more efficient.  If the strategy is bad, however, it may also make it less effective, or more harmful.

The Bottom Line:  Metrics Is As Metrics Does

It is now a truism of management theory that metrics create behavior – employees game everything to their own advantage, or to minimize their own disadvantage.  And so, users of customer-support software like GoodSupport Bash badly need to remember that the most effective use of any customer-facing tool is not to create representatives who will deliver the company’s idea of customer happiness via efficiency, but to find analytics to understand the customer better.  Only after that knowledge is gained and the company has used that knowledge to improve customer support responsiveness to real customer needs should efficiency metrics be considered.
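The ordering argued for here can be sketched in a few lines of code: compute a satisfaction metric first, and report efficiency metrics after it. The field names and the 1-5 survey scale below are hypothetical illustrations, not GoodData’s actual API or data model:

```python
# A hypothetical support-metrics rollup that leads with customer
# satisfaction rather than efficiency. Field names are illustrative.

def support_metrics(calls):
    # calls: list of dicts with "wait_minutes", "handle_minutes",
    # and "satisfaction" (say, 1-5 from a post-call survey).
    n = len(calls)
    return {
        # Satisfaction comes first: is the interaction working for
        # the customer at all?
        "satisfaction": sum(c["satisfaction"] for c in calls) / n,
        # Efficiency metrics are secondary diagnostics.
        "avg_wait_min": sum(c["wait_minutes"] for c in calls) / n,
        "avg_handle_min": sum(c["handle_minutes"] for c in calls) / n,
    }

calls = [
    {"wait_minutes": 3, "handle_minutes": 10, "satisfaction": 4},
    {"wait_minutes": 120, "handle_minutes": 5, "satisfaction": 1},
]
m = support_metrics(calls)
assert m["satisfaction"] == 2.5
assert m["avg_wait_min"] == 61.5
```

Note what the toy data shows: the second call looks “efficient” on handle time alone, yet the two-hour wait and the satisfaction score of 1 are where the real story is – exactly the signal a purely efficiency-driven dashboard buries.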

To put it more concretely:  among the metrics in your dashboard, is at least one telling you honestly the degree of customer satisfaction with the interaction?  And are you mining data from that metric telling you what’s good and bad about your present solutions?  If not, are you really making things better with your improvements in customer-support efficiency?

Or are you making them worse?

Friday, February 8, 2013

Life in a Software World

Various tech-industry “visionaries” have proclaimed that we are entering an “age of software”, pointing to the importance of software to today’s solutions and the world economy.  By and large, I agree that “all things software” is more and more a differentiator among companies, a focus of work, and a prevalent element of life outside of work.  However, it seems to me that no one has fully defined just what life in a “software world” will mean – what it has meant for those who participated in its formative stages.

Below, I lay out five things that I believe are worth considering as unique characteristics of enterprises in a “software world”.  They are my own point of view, based on 12 years in firms as a software developer informed by the business theory I learned at Sloan, plus 22 years as a computer industry analyst.  Everyone will have his or her own list; I’d just like these potential aspects of the “software world” to be considered along with the rest of those lists.

Project Management, Not Manufacturing

In a typical manufacturing firm, the norm was and is production of pieces of hardware.  In a so-called “services” firm, the norm is delivery of pre-designed service “solutions”, and therefore there is a reasonable analogy to manufacturing.  In a software firm, the cost of actually creating copies of a software program is pretty close to zero.  Instead, the focus is on creation of the next version or fixing bugs in this version.  In the manufacturing/service firm, the focus is often on optimizing the process of producing the same thing over and over.  In the software firm, the focus is on developing new features so customers don’t walk away.  And so, these firms seek to optimize new-product development – that’s the critical success factor for a software firm.  Does anyone suppose that if Google had simply cut the costs of servicing its search engine to the bone as its main focus, it would have survived, much less thrived?

But if ongoing success is a matter of new-product development, then it follows that success comes from successful development-project management, not optimization of the supply chain.  Apple grows profits and revenues while most if not all hardware/service companies achieve only flat revenues and lesser profit growth, and clearly the iPhone and iPad rather than Mac production optimization explain this. 

In fact, I believe that this changeover is also happening in so-called manufacturing and service companies.  50 years ago, it was still possible to say that the bulk of workers in a company were production workers, with support staff a poor second.  Today, pick any company and you have far more people in clerical and new-product development plus management roles, and those “clerical” and “managerial” tasks like administrative assistant, web designer, and product marketer are to a much greater degree about producing new solutions.  And so, the management of innovation projects is becoming much more important in traditional firms.

Accounting:  No Inventory, No Capital Stock

All right, that’s a bit of an overstatement, but not much.  Software requires very little if any hardware to produce these days:  you just download it from a web site running on a small slice of servers leased from a public cloud.  That cost is the same no matter how many copies of the single stored master are downloaded.  What widgets of inventory?  What capital stock of hardware-producing machines and factory buildings? What FIFO vs. LIFO?
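To see how a classic financial metric misbehaves here, take inventory turnover, conventionally computed as cost of goods sold divided by average inventory. The figures below are made up purely for illustration:

```python
# Inventory turnover = cost of goods sold / average inventory.
# At near-zero inventory, the ratio explodes and stops meaning anything.

def inventory_turnover(cogs, avg_inventory):
    if avg_inventory == 0:
        return float("inf")  # no inventory at all: ratio undefined
    return cogs / avg_inventory

# A manufacturer: a healthy-looking turnover of 6x per year.
assert inventory_turnover(6_000_000, 1_000_000) == 6.0

# A software firm: "inventory" is one master copy on leased cloud
# servers, so the ratio shoots toward infinity - and tells us
# nothing about whether new-product development is succeeding.
assert inventory_turnover(6_000_000, 1_000) == 6000.0
```

The number isn’t wrong, exactly; it’s just answering a question the software firm no longer asks.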

It seems to me that this software world therefore puts our favorite financial metrics out of whack.  Is inventory turnover really telling us that the firm is about to fail?  How about sales turnover?  If we are investing more in capital than in labor, aren’t we optimizing a non-existent manufacturing process rather than new-product development?  So are our metrics telling us, not that a firm is succeeding wildly, but that it is headed for failure?  And how do we tell what a successful new-product investment strategy is from our books?  For the last 20 years, since the Harvard Business Review suggested computer companies’ futures were in services rather than hardware, IBM has been focusing on both innovation and services rather than hardware, with strong metrics and some profit growth – but, especially lately, revenues have been flat overall, with only software showing strong growth over the entire period, and services effectively beginning to prosper only when IBM delivers innovative software. 

I confess that I have no strong sense for what the new accounting and the new financial metrics should be.  There needs to be some way to measure new-product development success and detect failure, and isolate it in the company’s books – but my textbooks tell me that projecting the revenues from software development is highly speculative.  Maybe so; or maybe Microsoft does have at least some handle on how many copies a new version of Windows will sell, so it’s not as bad as all that.

It’s About Growth, Not Optimization

One of the biggest shocks in the two software companies I worked in for 4+ years was the way that expectations escalated.  Every year was the baseline for the next year, no matter how good, and I was expected to do more in some way:  produce more code, do more tasks, whatever.  In fact, after those first two companies I started to assume a general curve, in which the 3rd year was necessarily disappointing to my bosses – who, by the way, were never the same at the end of the year as they were at the beginning – and sometimes I wondered if the fourth simply laid the groundwork for the end of my employment.  It wasn’t that I wasn’t producing; it was that I wasn’t producing that much more.  And, in point of fact, until a change of strategy in my fourth year there, it looked as if Aberdeen Group was going the same way.

Recently, I met a car salesman who, as in many traditional jobs, had been there for many years.  It was clear that he was not constantly faced with rising expectations; the fact that he continued to excel compared to others was reason enough to keep him.  Clearly, an auto dealer is not yet a software company.  Why the difference, I keep asking myself.

I would suggest that in a software world, optimizing the machines that support a process simply does not speed up software development significantly – and yet managers want to grow both revenue and profits.  The obvious answer:  everyone must do more.  No matter that this does not fit the mold of unpredictable new-software development, or that it is a one-size-fits-all approach to labor; the firm must focus on growing production rather than settling for optimized production, in order to meet its own expectations.

I don’t say this is good or bad, although I have my opinions.  It does suggest to me, however, some reasons why agile software development seems so inefficient and yet produces such great bottom-line results.  That is, agile development removes the ability to demand more from each person – it changes the metric from lines of code per day or some such to customer satisfaction.  And so, the company can have its bottom-line growth without efficient-looking but ineffective programmer optimization.  Because today, more and more companies, like software companies, are thinking dynamic, not static:  seeking growth in unpredictable markets with changing consumers, not seeking to optimize the supply chain or manufacturing process in more stable markets.

It’s About the New New Thing, Not the Success

In some ways, this is a repetition of my first assertion.  In a software world, especially a “virtual” one, the lack of hardware means a relative lack of chains to keep the customer attached – or to make the customer feel trapped.  It amuses me to see the venom directed at Microsoft Office as a supposed boat anchor for innovation, when in point of fact it is far more innovative than, say, the gas station or the fashion industry.  And that’s the point:  software firms put a significant amount of new content into each version of each product, because they have to.  They don’t pursue success by standing still or recycling:  they pursue it by adding features.

I see this as a real problem for our present foundations of microeconomics.  I believe that Paul Krugman wrote recently that the static, widget-producing manufacturing firm of microeconomics did indeed do badly as a model of real firms and markets, but that this was OK, because it captured the essentials needed for effective macroeconomics.  I question whether that is still true in a software world.

Specifically, it is a recipe for underperformance of macro-economies.  For example, over-investment in the capital stock needed to optimize a manufacturing process, and under-investment in the labor that drives new-product development, may show up as decreased costs; but the result also does less well at meeting consumer needs.  Hence static IS-LM curves achieve fewer sales at a given price than they could – the economy under-performs not just relative to “potential GDP”, but relative to “potential GDP” at full customer satisfaction.  Or so I wonder.  This wasn’t a clear problem when we had no alternatives; but now we have a software world.

Flexibilists Rule

This is perhaps the most speculative of my suggestions – and I realize that’s saying a lot.  There’s been a lot of attention paid to Brynjolfsson’s assertion that increased automation via computing has meant the loss of jobs, even including knowledge workers, and it’s not clear if and when that will reverse.  I would suggest that while that may very well be true of manufacturing and service/support jobs, it should not be true of new-product development jobs in a software world, and, in the best software firms, like Google, it isn’t true.

For one thing, software automation is very far from replacing programmers.  This is something I’ve been arguing for thirty years:  programming is partly creation of new mathematics, and that’s something that we are not near automating (and yes, that’s very distinct from the “I get human semantics” analytics of Watson).  For another thing, programming in a software world is usually about development of new features, i.e., new-product development, and it makes sense to invest there rather than in non-existent inventory management.

I say it should be true, but I recognize that in many firms, even software ones, programmers are thought of as disposable and capital investment as preferable – witness the offshoring of development that is still ongoing, even in areas where the supposedly lower wages have pretty much vanished.  That is partly, I think, because these firms tend to think of programmers, and marketers, and so on as narrow specialists, so that as technology changes it becomes necessary to hire someone who knows the latest language.  That is completely untrue of most of the programmers I know; in an environment free of threat to their jobs and with a little time to burn, they are the most avid consumers of new programming technologies, and a wide variety of other things.  They are flexibilists, not specialists.

The same, it seems to me, can be said of the agile marketing movement, which is – surprise, surprise – strongly associated with the software industry and software-heavy firms.  Agile marketing is more and more about flexible creation and delivery of new product – in fact, a continuous feedback loop of such creation.  And testimonials from practitioners echo those from agile programming shops:  it does better, and people stay around longer.  In a software world, or at least in a better one than we have today, flexibilists rule.


Them’s my preliminary thoughts.  Reactions?

Thursday, February 7, 2013

The Smartphone as TV Remote: It's About Computing

Whilst driving the other day I heard some NPR “experts” decrying the state of the cable/TV industry, citing as an example that one’s elderly relative could use the Internet but not the TV remote.  Immediately, I asked myself a naïve question:  Why can’t the smartphone be a remote?

And so, I did my usual due diligence, and discovered that there are indeed some apps out there that purport to do just that:  allow the smartphone to switch cable channels, and the like.  But I also found that there are some surprising limitations at present, which seem as if they could be overcome in a straightforward fashion if someone really puts an effort into it. And it seems well worth the effort for some entrepreneurial person or business.

Let me therefore state the problem as I see it, the solution and why the smartphone should be part of it, and the things that need to be accomplished.

The Cable Navigation Problem

Over the years (as the “experts” noted) access to TV channels has become dominated by 2-4 cable companies, which decide to some extent what you see.  Initially, their offerings proliferated to 200-odd distinct channels.  Over the last few years, according to the “experts”, the cable companies have discovered that there’s money in sitcoms and dramas whose characters actually evolve over time, because a loyal band of watchers creates buzz over the Internet about the characters – but those series are expensive, and so the rest of the shows per channel must cut expenses to the bone.  In practice, that means thinly disguised reality shows (and, secondarily, reruns; but those have a half-life on the order of 2-3 years).

The result is almost stupefying:  almost universal prime-time programming of reruns, rehashes (true murder mysteries resliced ad nauseam), and reality shows in the oddest of places.  The Learning Channel does not feature learning.  The Discovery Channel does not feature science, and the Science Channel is going the same way with science fiction (read:  horror movie) reruns.  Animal Planet spends endless hours with real-life searchers for Bigfoot; National Geographic Channel does the same with prison reality shows.  Biography Channel?  Not really.  It isn’t just that these bear no relation to their name and what they originally set out to do; it’s also that they bear a great deal of resemblance to each other.  And so, counterintuitively, the TV watcher seeking something new needs to do a lot of searching. In fact, the watcher needs to do a lot of digging when he or she finds an innovative show, as the TV screen provides barely enough information to be misleading:  “Dangerous Attractions:   Relationships that test the limits of …” turns out to be about dogs and cats being pals.

At the same time, the goal of a “universal remote” still seems far off, if the cable company and TV manufacturer offerings are any guide.  Often TVs have their own remotes, which do things that the universals don’t, like handle various types of HD display.  If there is interference, the TV may turn on and the cable not, or vice versa – and what’s happened is not that obvious.  If one accidentally starts talking to the TV rather than the cable, changing the channel knocks the viewer out of the cable universe entirely.  And don’t get me started on attached DVD players.

So there are two basic problems:

  1. Turning TV and cable on is typically a complicated pain in the neck.
  2. It’s hard to find the right show, because it takes channel surfing through 200-odd channels with no good descriptions to hand.

A Proposed Smartphone Solution

The solution seems straightforward:  a flexible piece of software on a remote that gives you one button to push to turn both TV and cable on and off, the usual channel-changing keys, and the ability to semi-automatically display adequate information about the shows, singly or summarized in the traditional TV-Guide timeline.  Oh, and the ability to share short comments about shows as they unfold would be nice.

Yes, but what’s the form factor?  Today, the universal remote is a single-purpose, inflexible piece of extra junk.  What’s needed is something truly programmable and easy to upgrade, handling all the little subcases that universal remotes vainly attempt now, and usable for other tasks yet to be determined.  In other words, something like the smartphone or laptop.

Those of you who follow my blog posts are aware that I have spent a great deal of effort pointing out the things that the smartphone will probably never do, which make it unlikely to take over the computing world.  However, in the TV-remote case, these are in fact virtues.  We don’t need acres of space to see a blog post or long essay on a show, just enough information to tell if it’s really new and interesting to the viewer.  We don’t need to write that long essay, either, just to share a few pithy comments.  Now that, imho, the Samsung Galaxy SIII and other Androids have fixed the iPhone-type interface that was likely to leave you in an odd part of a task with no obvious way to escape, the user interface for both turn-on/turn-off and search/texting is at least adequate.  And it’s just easier to lug the smartphone into and out of the family room than to do the same for a laptop.

State of the Art

Anyway, I had reasoned this far when I looked at available offerings from folks like Dijjit and Beacon, and it became clear that there are some pesky details that still need to be solved.  It appears that some cable boxes and TVs talk WiFi or Bluetooth, while others talk infrared in their own codes.  Thus, not only do the smartphone, TV, and cable box need to connect via WiFi to a home network for one-button “on” to work in some cases, but the smartphone actually has to generate infrared signals in multiple codes in other cases.  The result:  a “dongle” on the smartphone or an attached “rock” a la Beacon for channel surfing, and no one seems to handle the full case of cable/TV on/off.
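For the curious, generating those infrared signals is mostly protocol bookkeeping.  Android 4.4 added a ConsumerIrManager whose transmit() call takes a carrier frequency and a list of alternating on/off durations in microseconds; the app itself must build that list in whatever code the TV speaks.  Below is a sketch (in Python for readability, since the timing logic is the point) of building one frame of the common NEC consumer-IR encoding; the address and command bytes are hypothetical stand-ins, not any real TV’s codes.

```python
# Sketch of building an NEC-protocol IR frame as alternating on/off durations
# in microseconds -- the shape Android's ConsumerIrManager.transmit() expects.
# The address/command values below are hypothetical, not a real device's codes.

def nec_frame(address: int, command: int) -> list[int]:
    """Return mark/space durations for one NEC frame (38 kHz carrier assumed)."""
    pattern = [9000, 4500]                    # leader: 9 ms mark, 4.5 ms space
    # NEC sends the address, its bitwise inverse, the command, and its inverse.
    payload = bytes([address, address ^ 0xFF, command, command ^ 0xFF])
    for byte in payload:
        for bit in range(8):                  # bits go out LSB-first
            pattern.append(562)               # every bit starts with a short mark
            pattern.append(1687 if (byte >> bit) & 1 else 562)  # long space = 1
    pattern.append(562)                       # trailing mark closes the frame
    return pattern

frame = nec_frame(0x20, 0x08)                 # hypothetical power-toggle code
assert len(frame) == 67                       # leader + 32 bits * 2 + trailer
# On Android, roughly: consumerIrManager.transmit(38000, frame)
```

The catch the vendors face is that NEC is only one of several such encodings (RC-5, RC-6, and assorted proprietary variants exist), and phones without an IR emitter need the dongle or “rock” regardless – which is why no one yet handles the full cable/TV on/off case cleanly.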

Then there’s the nagging question of what to display about the channels one surfs.  It seems that apps are only at the point where a raw download of some cable channels’ TV guides is supported, and even those are designed for less detail than a smartphone user – or a user in general – needs.  And, of course, there is no connection between tweeting and channel surfing; but I regard that as a minor inconvenience, since when you tweet you’re typically going to be stuck on a particular cable channel anyway.

Last and least, the vendors seem to be doing iPhone first and then Android – perfectly understandable, given market sizes, but it does delay (again, imho) user friendliness of the smartphone “remote”.

All of these things seem straightforward to fix, with the possible exception of the infrared communications.  It just takes time, and will on the part of app and smartphone vendors …

The Bottom Line: The Potential Lies in the Computing

The key to the potential superiority of the smartphone over traditional TV remotes, I believe, comes down to the fact that its pedigree is in the software-dominated software-hardware partnership of computing.  A TV’s interface is just plain stodgy, and in attempting to make it a veneer for both shows and the Internet, cable companies are forever playing catch-up, and forever falling further behind.  By contrast, the main difficulties from the smartphone side are ones caused by the rigid interfaces of the cable/TV hardware combo. 

And yet, the cable companies badly need to attract the generation in its 20s right now – which by all accounts is barely buying cable at all.  The prices are certainly too high for a large proportion of this generation, which has been hit hard by the recession; there is nothing showing its buying behavior turning around soon, and at a certain point its members may conclude they don’t need cable at all.  That could be a death knell.  A smartphone remote may seem like an odd place to start; but if it means that this generation finds it interesting to tweet on programs, there is real hope that cable will not destroy its seed corn.  As for advertisers, if there is a better connection between the Internet and cable, some of them may well start spending on smartphone ads connected with cable ads – the tail wagging the dog, so to speak.  And that should slow the bleeding of ad dollars from TV …

Or maybe not.  In any case, it seems to me that we do too much contrasting of smartphones with the computing industry, when the real difference is between those two – which share, via Steve Jobs, the same software-driven computing DNA – and traditional media.  And yes, by now cable is traditional, not too long after actors and actresses started seeing it as more interesting than the movies.  I’d love to see the smartphone as TV remote; I think it’s an idea whose time has come, and I would dearly love to deep-six the five remotes (two cable, two TV, one DVD) for two TVs that I presently have to have.  But even if it doesn’t arrive, similar ideas are happening all the time, for both the PC and smartphone/tablet form factors.  Vive la différence et l’identité!  Vive le smartphone remote!

Friday, February 1, 2013

Why Can't the Mainframe (and the PC) Stay Decently Dead?

For more than 23 years, as an analyst, I have been watching various industry observers suggest that the end of the mainframe may be nigh.  For nearly that long, I have been hearing voices proclaiming the nearing death of the PC.  For the mainframe, it was the proprietary architecture, the lack of developers, the aging of administrators, or the rise of scale-out.  For the PC, it was the expense of personal storage and its administration compared to the “network computer”, the lack of Web savvy, the large form factor compared to the smartphone, and the poor fit with touch gestures.

So what did the 2012 markets say?  Astonishingly, after a 10-20% dip in revenues in the first three quarters, IBM saw a 56% mainframe-revenue jump in the fourth quarter, yielding overall mainframe revenue growth for 2012 – better than the decreasing revenues of its Unix/Linux (System p) and Windows (System x) alternatives.  This extended the mainframe’s 3-year streak of revenue (and therefore income) growth.

IBM was not alone in seeing these results.  HP apparently saw greater single-digit decreases in Unix/Linux than in PC revenues.  Anecdotal evidence suggests that Oracle/Sun Unix/Linux revenues continued to decrease.  And, of course, touch screen in a PC form factor had only just begun to arrive at the end of 2012 (with Microsoft Office in a hybrid tablet/PC configuration only apparently arriving some time in 2013) – so the PC was competing with smart phones and tablets with one hand tied behind its back, so to speak.

In fact, if 2012 was proclaiming anything, it was suggesting the eventual death of Unix/Linux (no, I don’t believe that either).

In an era in which IT is apparently content to spend the same amount each year on computing, much of it on saving costs – thus increasing computer makers’ profits but keeping their revenues flat – it is not likely that any of today’s form factors, including the smart phone, is going to attain the 50%-plus growth rates that we have seen in the past.  Thus, we may have seen the popping of the Apple stock-market bubble.  And therefore, in the near future, I assert that neither the mainframe nor the PC is going into terminal decline – on the contrary, they should prosper modestly, as the mainframe has done in the last 3 years.

So why will all the cited disadvantages of mainframe and PC not lead to steady or sudden terminal decline in the next 2-3 years, but rather to sober growth?  Let’s take each in turn.

The Mainframe Is Fully Reinvented

It was only about five years ago that I first suggested to IBM that the mainframe needed to become a “hub” – a fully networked node of especial prowess in certain types of workload, rather than an “either mainframe or something else but not both” proposition.  Since then, the process of turning the mainframe into a full participant in the enterprise architecture has been pretty completely achieved.  All of IBM’s major enterprise software, from administration to security to data management to development, is now on approximately the same track inside and outside the mainframe, and most apps can easily and even dynamically move between mainframe and non-mainframe servers.  As a result, users find it far easier to employ the mainframe flexibly for its strengths in robustness, security, and high-end transactional scale-up.

Imho, IBM’s mainframe plans for 2013 contain no such dramatic transformations – and don’t need to.  The “bridging” architectures of zEnterprise and (eventually) of PureSystems form a completely adequate foundation for future elaborations in mainframe-including enterprise architectures.  To the end user, developer, and administrator, the mainframe if needed can appear more or less as a transparent part of a fully modern overall enterprise architecture. My only caveat, as noted before, is that the mainframe fails to support Windows in public cloud architectures adequately (again, imho) – but that simply limits growth, it does not portend decline.

And so, the old objections begin to melt away.  Few now argue that the mainframe’s proprietary architecture cannot keep pace with the evolution of hardware/software technology.  IBM cites reports that a new generation of system administrators is arriving with adequate skills, and that some new buyers are even showing a preference for z/OS up front rather than Linux.  While the mainframe will never rival the hundreds of thousands of apps for Apple’s OS or Android out of the box, there are few major barriers to software today, either.

To put it in a nutshell:  the mainframe is attracting new customers, even in the US, because the old knocks on the mainframe no longer apply, leaving it to leverage its strengths in a particular segment of the market that will grow with the growth of usage of Big Data.

The PC Keeps On Doing What It Does Best

The argument for the PC’s continued success is of a different sort from that for the mainframe.  I say, rather, that the doomsayers of the PC have failed to appreciate adequately its long-term strengths.

Let’s start with the “network computing” argument.  Yes, as with the “dumb terminal” before it, the “network computer” is cheaper than the PC.  However, one value of the PC has been its ability to store personal data and applications, whether as a “home within an office” that allows the end user to generate his or her own Powerpoints and spreadsheets, or as a bridge between home and office to allow work wherever.  That is precisely why the countervailing trend of BYOD (Bring Your Own Device) – which is just as much if not more about laptops than smartphones and tablets – continues to ensure a major presence of PCs both at home and at the office.

As for Web savvy, once again the smartphone’s evolution has proven the value of personal, physically-next-to-the-user storage – whether stored song downloads or phone logs – and hence the smartphone is turning inevitably into a PC, one in which the small vs. large screen carves up turf between the PC blogger and the smartphone tweeter.  And my experience with Windows 8, wrong-headed as I believe some of Microsoft’s decisions are (e.g., crippling the desktop screen out of the box), shows that touch is valuable to the PC’s core word-processing and navigating/data-organization skills, but is not as easily applied to the smartphone’s small screen without a comparable, complementary typing feature (hunt-and-peck still doesn’t do it).  And so, not only in terms of Web usage but also in terms of end-user friendliness, the PC still holds its share of the market; and, for the same reasons, should continue to do so.

Note that I say that the relatively large form factor is a plus rather than a minus.  I assert that there is a “finger limit”:  the typing necessary to create large amounts of real content can only be accomplished on a sufficiently sized screen and keyboard.  Call it semantics, call it deep analysis, call it whatever you want, 50 lines is about the minimum for a decent blog post that does not amount to dipping your finger into a very large pool – and, as in this post, 2-3 old-style pages are more like a comprehensive look at a subject.  Note that folks like Paul Krugman use shorter blog posts as “riffs” to serve as the basis for a comprehensive multi-page paper, not as the definitive word on a subject.  A mashup is no substitute.  And so, the PC’s combination of personal storage and relatively large screen/keyboard should see continued modest growth in demand, in whatever form, over the next 2-3 years – depending, of course, on the world economy.

Sacrificing Some of the Future By Discarding the Present

I am long since resigned to periodic outbursts of “the death of …”, which sometimes are justified (I for one don’t lament the effective death of the dumb terminal, having had the dubious joy of programming for it).  What I do object to is the way in which these persuade many people in and out of the industry that the architectures in question are indeed dead, which means we don’t need to think about them, which means we don’t leverage their unique strengths in the next generation of the world-wide web of computing. 

Specifically, the mainframe pushes the limits of scale-up computing.  While scale-out grid solutions may indeed have achieved some notable successes (I understand that Google has made notable advances by thousands-of-servers divide-and-conquer applied to language comprehension and translation), on a per-processor basis scale-up’s tight integration continues to offer frequent performance advantages over scale-out’s loosely-coupled networking. 

On the PC end, the latest tablet/smartphone user interfaces are crippled by a lack of appreciation of foldering’s static personal data organization as a complement to the Web’s dynamic search-based organization. Moreover, we continue to move away from the idea of a “virtual end-user space” in which the full functionality and data of one’s own PC is available anywhere, any time, whether we are connected to the Web or not (and this will be needed because we are still quite a ways away from always-available and always-fully-functional/personalized Web avatars). 

Most of us remember the Monty Python routine in Holy Grail in which one person attempts to fob off a living person on a “dead collector”, while the “corpse” in question protests “I’m not dead yet!”  To which the seller keeps replying “Shut up.”  For very good reasons, the mainframe and the PC refuse to stay decently dead.  Could we please not keep telling them to shut up?