Friday, February 22, 2013

Parasoft and "Service Virtualization" Testing: A Good Idea


Recently there passed across my desk a white paper sponsored by Parasoft about the idea of applying what they called “service virtualization” to software testing.  Ordinarily, I find that “we’ve been there and done that” applies to much of the material in white papers like this.  In this case, however, I think that the idea Parasoft describes is (a) pretty new, (b) applicable to many software-development situations, and (c) quite valuable if effectively done.

The Problem

The problem to be solved, as I understand it – and my own experience and conversations with development folks suggest that it does indeed happen frequently these days – is that in the later stages of software testing, during the dependency and volume testing shortly before version or product deployment, one or more key applications that are not involved in the software development or upgrade but have “interaction effects” with it are effectively unavailable for testing in a timely fashion. It may be a run-the-business ERP application for which stress testing would crowd out the needed customer-transaction processing.  It may be a poorly documented mission-critical legacy application for which creating a “sandbox” is impractical. You can probably think of many other cases.

In fact, I think I ran across an example recently.  It went like this:  a software company selling a customer-facing application started up about five years ago.  Over five years of success, they ran that customer-facing application 24x7, with weekly maintenance halts for a couple of hours and 4-6-hour halts for major upgrades.  All very nice, all very successful, as revenue ramped up nicely. 

Then (reading between the lines), recently they realized that they had not upgraded their back-end billing and accounting systems that fed off the application, and these were increasingly inefficient and causing problems with customer satisfaction.  So they tested the new solutions in isolation in a “sandbox”, and then scheduled a full 12 hours of downtime on the app to install them – without first “sandboxing” the back-end and front-end solutions working together.

Everything apparently went fine until they started up a “test run” of the back-end and front-end solutions working in sync.  At that point, not only did the test fail, but it also created problems with the “snapshot” of front-end data that they had started from.  So they had to repeatedly reload the starting point and do incremental testing on the back-end systems. In the end, they took more than two days (complete with anguished screams from customers) to make some changes to the back-end systems and get the customer-facing application available again; and it took several more days before the rest of the back-end systems were available in the new form. As the white paper notes, when planning final testing of new software, companies are often willing to skip integration testing involving interdependent unchanged software; and the consequences can be quite serious.

The Idea

Probably to cash in on the popularity of “virtualization”, the white paper calls the idea proposed to deal with this problem “service virtualization.”  To my eyes, the best description is “script-based dependent-software emulation.” In other words, to partially replace the forgone testing, service virtualization would allow you to create a “veneer” that would, whenever you invoke the dependent software during your integration testing, spit back the response that the dependent software should give. This particular solution provides two ways of creating the necessary “scripts” (my categorization, not Parasoft’s):

1. Build a model of how the dependent software runs, and invoke the model to generate the responses; or
2. Take a log of the actions of the dependent software during some recent period, and use that information to drive the responses (a minimal sketch of this approach follows the list).
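
To make Approach #2 concrete, here is a minimal sketch in Python – my own illustration, not Parasoft’s implementation – of a replay stub: an HTTP “veneer” that answers each request with the response captured from a log of the real dependent service. The file name, log format, and port are assumptions for illustration.

    import json
    from http.server import BaseHTTPRequestHandler, HTTPServer

    # Hypothetical recording captured from the real dependent service:
    # it maps "METHOD /path" to the response that service actually gave.
    with open("billing_service_log.json") as f:
        RECORDED = json.load(f)  # e.g. {"GET /invoice/42": {"status": 200, "body": "{...}"}}

    class ReplayStub(BaseHTTPRequestHandler):
        """Answer each request with the response recorded from the real service."""

        def do_GET(self):
            entry = RECORDED.get("GET " + self.path)
            if entry is None:
                # The capture window never saw this request; fail loudly rather than guess.
                self.send_error(501, "no recorded response for GET " + self.path)
                return
            body = entry["body"].encode()
            self.send_response(entry["status"])
            self.send_header("Content-Type", "application/json")
            self.send_header("Content-Length", str(len(body)))
            self.end_headers()
            self.wfile.write(body)

    if __name__ == "__main__":
        # During the integration run, the system under test is pointed at
        # localhost:8080 instead of the real billing service.
        HTTPServer(("localhost", 8080), ReplayStub).serve_forever()

The system under test cannot tell it is not talking to the real dependent application; that is the whole point of the “veneer.”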

Before I analyze what this does and does not do, let me note that I believe Approach #2 is typically the way to go. The bias of an IT department considering skipping the dependent-software integration testing step is towards assuming that there will be no problems.  The person building the model will therefore often implicitly build it the way the dependent software would work if there really were no problems – and response times are often guesstimates.  The log of actual actions introduces a needed additional note of realism into the testing.

However, the time period being logged almost inevitably does not capture all cases – end-of-year closing, for example.  The person creating the “virtualized service” should have a model in mind that allows him or her to add the necessary cases not covered by the log.
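
Continuing the earlier sketch (and again as my own illustration, with hypothetical names and payloads rather than anything Parasoft-specific), adding such cases can be as simple as merging hand-written, model-derived entries into the recorded map before replay:

    import json

    # The recorded log from the capture window (hypothetical file name and format).
    with open("billing_service_log.json") as f:
        recorded = json.load(f)

    # Hand-written entries, derived from the tester's model of the dependent
    # service, for scenarios the capture window missed -- e.g. end-of-year closing.
    model_cases = {
        "GET /ledger/close?period=FY2012": {
            "status": 200,
            "body": '{"closed": true, "period": "FY2012"}',
        },
    }

    # Recorded entries win on any overlap, preserving the realism of observed
    # behavior; the model cases only fill the gaps.
    model_cases.update(recorded)

    with open("billing_service_log_augmented.json", "w") as f:
        json.dump(model_cases, f, indent=2)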

Gains and Limits

The “service virtualization” idea is, I believe, a major advance over the previous choice between a major disruption of online systems and a risk of catastrophic downtime during deployment. If one takes Approach #2 as described above, “service virtualization” adds very little to testing time and preparation, while in the vast majority of cases it will detect those integration-test problems that represent the final barrier to effective testing before deployment.  In other words, you should be able to decrease the risk of software-introduction crashes tenfold or a hundredfold.

The example I cited above is a case in point.  It would have taken fairly little effort to use a log of customer-facing app interactions in a sandbox integration test with the new back-end systems.  This would also have sped up the incremental testing of the new software once a problem was detected.

There are limits to the gains from the new testing approach – although let me note up front that these do not detract in the slightest from the advantages of “service virtualization”.  First, even if you take Approach #2, you are effectively doing integration testing, not volume/stress testing.  If you think about it, what you are mimicking is the behavior of the “dependent” software before the new systems are introduced.  It is possible, nay, likely, that the new software will add volume/stress to the other software in your data center that it interacts with.  And so, if the added stress does cause problems, you won’t find out about it until you’re operating online and your mission-critical software slows to a crawl or crashes.  Not very likely; but still possible.

Second, there may well be a lag between the time you capture the behavior of the “dependent” software and the time you run the tests.  It is fairly simple to ensure that you are testing against the latest version of the “dependent” software, with the latest bug fixes, and to keep track of whatever changes happen online while your sandbox testing proceeds offline. But if you are just periodically refreshing the log “snapshot” as in Approach #2, or even operating from a model written a year ago, as happens all too often, then there is a real possibility that you have missed crucial changes to the “dependent” software that will cause your integration testing to succeed and then your deployment to crash. Luckily, after-the-fact analysis of dependent-software changes makes fixing this problem much easier – but it should be minimized by straightforward monitoring of changes to the operational dependent software during testing.
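
One simple way to do that monitoring – again my own sketch under assumed names, not a described Parasoft feature – is to periodically re-query the live dependent software for a few recorded read-only requests and flag any response that no longer matches the capture:

    import json
    import urllib.error
    import urllib.request

    LIVE_BASE = "http://billing.internal:8080"  # assumed address of the real dependent service

    with open("billing_service_log.json") as f:
        recorded = json.load(f)

    stale = []
    for key, entry in recorded.items():
        method, path = key.split(" ", 1)
        if method != "GET":
            continue  # keep the check read-only; never replay writes against production
        try:
            with urllib.request.urlopen(LIVE_BASE + path) as resp:
                live_body = resp.read().decode()
        except urllib.error.HTTPError:
            stale.append(key)  # the request itself now fails: definite drift
            continue
        # Responses containing timestamps or other volatile fields would need
        # normalizing before this comparison; a plain diff is enough for a sketch.
        if live_body != entry["body"]:
            stale.append(key)

    if stale:
        print("Recorded log has drifted; re-capture before trusting test results:")
        for key in stale:
            print("  " + key)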

The Bottom Line for Parasoft and “Service Virtualization” Testing:  Worth Looking At Right Now

The Parasoft solution appears to apply especially to IT shops with significant experience of “skipping” integration testing because of “dependent software”, and with a reasonably sophisticated test harness.  Of course, if you don’t have a reasonably sophisticated test harness that can do integration testing of new software against the other operational systems in your environment, perhaps you should consider acquiring one.  I suspect that the company in the case I cited earlier not only failed to sandbox its integration testing, but didn’t have the test harness to do so even had it wanted to.

For those IT shops fitting my criteria, there seems no real reason to wait to kick the tires of, and probably buy, additional “service virtualization” features.  As I said, the downside in terms of added test time and effort appears minimal in these cases, while the gains in software robustness are clear, and potentially company-reputation-saving.

I will, however, add one note of caution, not about the solution, but about your strategy in using it.  Practically speaking, “service virtualization” is rarely if ever to be preferred to full integration testing, if you can do full integration testing.  It would be a very bad idea to use the new tools to move the boundary between what is fully tested (because you can manage it in a reasonable time) and what is handled the quicker, easier way that risks disaster.  Do use “service virtualization” to replace naked “close your eyes and hope” deployment; don’t use it to replace an existing thorough integration test.

Kudos to Parasoft for marketing such a good idea.  Check it out.
