I frequently receive questions about the TCO/ROI studies that I conduct, in particular about how they differ from the typical studies that I see. Here's a brief summary:
• I try to focus on narrower use cases. Frequently, this means a "typical" small business and/or a typical medium-sized business: 10-50 users at a single site, or 1,000 users in a distributed configuration (50 sites in 50 states, 20 users at each site). This approach helps identify situations in which a survey that averages over all company sizes would obscure a solution's strengths for a particular customer need.
• I break the numbers down into categories that are simple and reflect the user's point of view, varying them slightly by type of user (IT, ISV/VAR). For IT users, for example, I typically break TCO down into license costs, development/installation costs, upgrade costs, administration costs, and support/maintenance contract costs. These tend to be more meaningful to users than, say, "vendor" and "operational" costs.
• In my ROI computation, I include "opportunity cost savings," and for revenues I use a what-if number based on organization size rather than attempting to determine revenues ex ante. Opportunity cost savings are estimated as the TCO savings of a solution (compared to "doing nothing") reinvested in a project with a 30% ROI. Including opportunity cost savings gives a more complete picture of the (typically 3-year) ROI, and comparing ROIs when revenues are held equal lets the user zero in on how faster implementation and better TCO translate into better profits (see the sketch after this list).
• My numbers rest more heavily on qualitative data from in-depth, open-ended user interviews. "Open-ended" means that the interviewee is asked to "tell a story" rather than to answer "choose among" and "on a scale of" questions, giving the interviewee every opportunity to point out flaws in my initial research assumptions. I have typically found that a few such interviews yield numbers as accurate as, if not more accurate than, 100-respondent surveys.
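To make the arithmetic concrete, here is a minimal sketch of the TCO breakdown and the ROI computation. The dollar figures are invented and the exact model varies by study; the cost categories, the 30% reinvestment assumption, the equal-revenue what-if comparison, and the 3-year horizon are as described above.

```python
# A minimal sketch of the TCO/ROI computation described above.
# All dollar figures are hypothetical; this is one reasonable reading
# of the methodology, not an exact reproduction of any study's model.

# 3-year TCO broken into the IT-user categories named above.
SOLUTION_TCO = {
    "license": 60_000,
    "development/installation": 80_000,
    "upgrade": 30_000,
    "administration": 50_000,
    "support/maintenance contract": 30_000,
}
DO_NOTHING_TCO = 400_000   # 3-year cost of the status quo (invented)
REINVESTMENT_ROI = 0.30    # assumed return on reinvested TCO savings
REVENUE_GAIN = 500_000     # what-if revenue number, held equal across solutions

def three_year_roi(tco_by_category: dict[str, float]) -> float:
    """3-year ROI including opportunity cost savings.

    Opportunity cost savings: the TCO savings versus "doing nothing,"
    treated as if reinvested in a project returning REINVESTMENT_ROI.
    """
    tco = sum(tco_by_category.values())
    tco_savings = DO_NOTHING_TCO - tco
    opportunity_savings = tco_savings * REINVESTMENT_ROI
    # Classic ROI: (total gains - cost) / cost.
    return (REVENUE_GAIN + tco_savings + opportunity_savings - tco) / tco

print(f"3-year ROI: {three_year_roi(SOLUTION_TCO):.0%}")
```

Because the revenue number is held equal across solutions, differences in the computed ROI come entirely from TCO and the opportunity cost savings on it, which is the point of the comparison.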
Let me close this summary by dwelling for a moment on the advantages of open-ended user interviews. They let me focus on a narrower set of use cases without worrying as much about the smaller sample size. Because they do not constrain the interviewee to a narrow set of answers, they ensure that I am getting accurate data, and all the data that I need. They allow me to fine-tune and correct the survey as I go along. They surface key facts not anticipated in the survey design. They motivate the interviewee and encourage greater honesty: everyone likes to tell their story. Finally, they yield advice for other users, advice of high value to readers that lends additional credibility to the study's conclusions.