Thursday, May 15, 2008

Over the last year and a half, I have been watching with interest the rise of user and vendor enthusiasm for EDAs (event-driven architectures). My own take on EDAs: there’s some real value-add there.
Now, from a database programmer’s point of view, business-event handling is older than dirt. The basics of events, triggers, business rules, and stored procedures were there in the 1970s, if not before; and a network-management specialist would probably tell you the same about distributed monitoring and alerting. What’s new over the last few years is the idea of an SOA (service-oriented architecture) that spans an enterprise network using standardized software interfaces. The basis of an SOA is often an ESB (enterprise service bus) that routes calls from Web service consumer code, such as a Web browser, to Web service provider code, such as an enterprise app. This means the ESB can now see all the events already flying back and forth, whether from systems management, database transactions, or apps built around specific event types (such as quant apps handling stock arbitraging in milliseconds), and can filter them, report on them, and act on them. So an event processing engine over or in the ESB, the event-driven architecture, is a great way to handle existing event processing (such as systems management) across the enterprise.
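To make that filter-report-act pattern concrete, here’s a minimal sketch of the kind of rule engine such an architecture layers over the bus. This is illustrative Python with invented names (Event, EventEngine, the "sysmgmt" source, and so on); it is not taken from any real ESB or CEP product.

```python
from dataclasses import dataclass, field

@dataclass
class Event:
    source: str               # e.g. "sysmgmt", "database", "trading-app"
    kind: str                 # e.g. "disk-full", "txn-commit", "price-tick"
    payload: dict = field(default_factory=dict)

class EventEngine:
    """Toy event-processing engine: register (predicate, action) rules,
    then feed it every event the bus sees."""
    def __init__(self):
        self.rules = []       # list of (predicate, action) pairs

    def when(self, predicate, action):
        self.rules.append((predicate, action))

    def publish(self, event):
        for predicate, action in self.rules:
            if predicate(event):      # filter ...
                action(event)         # ... then report or act

# Example rule: alert on a systems-management failure anywhere in the enterprise.
engine = EventEngine()
engine.when(lambda e: e.source == "sysmgmt" and e.kind == "disk-full",
            lambda e: print(f"ALERT from {e.payload.get('host')}: disk full"))
engine.publish(Event("sysmgmt", "disk-full", {"host": "db01"}))
```

The point of the sketch is that the rules live at the bus level, not inside any one app, which is what lets the same mechanism span systems management, databases, and line-of-business events.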
Moreover, such an EDA moves event processing out of existing hard-coded stored procedures and apps to the enterprise-network level, where the developer can write enterprise-spanning apps. Hence the ideas of “business activity monitoring” (figuring out what’s happening in the enterprise by monitoring events on the enterprise network), the “360-degree view” (adding monitoring of the enterprise’s environment via RSS feeds, Web searches, and the like), and the “just-in-time enterprise” (semi-automated rapid responses to business events). No wonder vendors ranging from IBM and Sun to Progress Software, and users like Orbitz, are talking about EDAs.
So the obvious question is, why aren’t folks talking about applying EDAs to SaaS (software as a service)? When I google EDA and SaaS, there is a mention or two of the notion of supplying a CEP (complex event processing) engine as a service, but nothing really on the idea of SaaS providers offering an EDA on top of their SOA for added user benefits.
And yet, if I were a SaaS user, I would find an EDA offered by my provider quite intriguing. I might do business activity monitoring on top of my ERP app, or react quickly to customer-relationship problems identified by SugarCRM. As a boutique investment firm, I’d love the chance to create arbitraging code at lower cost.
And if I were a SaaS provider, I’d be interested not only in attracting customers by offering an EDA interface, but also in using an EDA to fine-tune hard-coded security and administration software for each tenant in a multi-tenant deployment. In fact, I think an EDA would give me a great new tool for metering and charging users on the basis of business transactions rather than per CPU.
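As a rough sketch of what transaction-based metering might look like (hypothetical event names and rates throughout; no real SaaS billing API is implied), the same event stream that drives the EDA could drive the meter:

```python
from collections import Counter

# Hypothetical per-tenant meter: bill for business events (orders placed,
# reports run) instead of sampling CPU consumption.
RATE_PER_EVENT = {"order.placed": 0.05, "report.generated": 0.01}

usage = Counter()   # (tenant_id, event_kind) -> count

def on_event(tenant_id, kind):
    """Called for every business event the EDA publishes."""
    if kind in RATE_PER_EVENT:
        usage[(tenant_id, kind)] += 1

def invoice(tenant_id):
    """Sum up a tenant's billable events at the posted rates."""
    return sum(count * RATE_PER_EVENT[kind]
               for (tid, kind), count in usage.items()
               if tid == tenant_id)

on_event("acme", "order.placed")
on_event("acme", "order.placed")
on_event("acme", "report.generated")
print(invoice("acme"))   # 0.11
```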
So why isn’t there more interest in EDAs applied to SaaS? Inquiring analysts want to know …
Tuesday, May 13, 2008
HP + EDS Software: Hey, It's Insinking!
When you think about the HP EDS acquisition, don’t just think about its short-term effect on revenues. Think about the leverage this gives HP to do some long-term software investment -- for excellent profit improvements.
Why am I focusing on software? After all, in a quick scan of press releases and articles on HP’s announcement of its proposed acquisition of EDS, I find little if any mention of the acquisition’s effect on HP software. That’s scarcely surprising: by some calculations software is still less than 4% of HP’s revenues pre-EDS, although the software business is growing briskly. And that will still be true post-EDS. In fact, if you treat their software and hardware revenues and profits as part of one big pot, there’s some basis for saying that the key differences between IBM and HP post-acquisition go something like this:
HP + EDS revenues = IBM revenues – IBM business consulting revenues + HP PC/printer revenues
HP + EDS profits = IBM profits – (the drag from HP’s lower PC margins)
But I think there’s more to the software angle than that. Let me explain.
Way back when I was at Yankee Group, I was so tired of hearing about the glories of outsourcing that I wrote a spoof called “Insinking: The Future Lies Ahead.” The idea was that, since outsourcing was the use of outside companies to carry out an enterprise’s tasks, insinking was the consumption of a company’s products by the company itself. (The source in outsourcing is outside the firm, the sink or consumer in insinking is inside, get it?) Naturally, this lent itself to all sorts of mock-predictions (sample: “Competition will become cutthroat. The resulting bloodbath will leave the competitors red and sticky-looking”).
Well, what was farce now looks like reality. In particular, IBM has now created a structure where software and hardware to some extent compete for the services group’s business against other inside and outside vendors. And the result is, by and large, beneficial; the services group provides an addition to the software/hardware market, while outside competition continues to force long-run improvements in IBM software/hardware’s competitiveness.
To put it another way, as a Sloan Management Review article noted a while ago, there has been a long-run tendency, in high-tech companies as elsewhere, to move from vertical integration to autonomous business functions that are more often parts of different companies. Insinking, or whatever you want to call it, says that if the function is autonomous and competitive, it may actually be better to include it in the company: the in-house function’s cost advantages within the company lead to more business for the function, which in turn leads to economies of scale, which lead to greater competitiveness outside the company.
Spoiler note: let’s not take this idea too far. The reason insinking lent itself to farce was that the logical endpoint of insinking’s logic is the consumption of all of a company’s products by the company itself. Such a company would operate at a perpetual loss. It’s a bit like saying that, because reducing taxes a little may actually increase government tax revenues in the long run, we should reduce taxes to zero.
How does this relate to HP software? Well, in the short run, probably very little. HP doesn’t have nearly enough infrastructure or app software to supply EDS’ needs. Insinking may very well be good for HP hardware; but it will, in the short run, not make much of a dent in the relative size of software revenues in HP.
In the long run, however, HP has an opportunity, if it wants to seize it. Insinking for HP software would definitely add software revenues. And since software carries better profit margins than hardware and services, any additional software HP acquires for its software arm will probably be worth more than before, because of its additional in-house sales to EDS plus its positive margin effect. So by focusing on growing software’s share of the business, HP would have an unusually strong, positive effect on both revenues and profits.
Tuesday, May 6, 2008
Do the Old Analysis Rules Still Apply?
I was going through some old papers the other day, and I found the presentation I used to give to newbies at Aberdeen Group on how to see past marketing hype – or, as we called it then, the Bull#*! Detector. One of the best ways, I suggested, was to “Drag Out Some Old Rules” about what would and would not succeed in the marketplace. Looking at that presentation today, I was curious as to how well those Old Rules held up, 11-12 years later.
Here are the three rules I wrote down:
Emulation rarely works. By that I meant, suppose someone came to you and said, “I have a great new platform designed to run Linux. And, if you need Windows, it emulates Windows, too; you’ll never tell the difference.” But you would be able to tell the difference: fewer Windows apps supported, slower performance, bunches of additional bugs, and slowness to adopt the latest Windows rev. And that difference, although you might not be able to articulate it, would lead the bulk of users to choose a Wintel platform for Windows over the Linux one.
I’m no longer sure that this rule will hold up; but it may. The key test case is businesses allowing employees to use Macs at work. An article in last week’s Business Week suggested that there was a big trend towards using the Mac at work, and large businesses were beginning to allow it, as long as they could run their Windows apps on those Macs. But if this rule holds up, the bulk of large businesses are never going to switch to the Mac; neither IT nor the employees will accept a solution that’s more of a pain in the neck, no matter what Microsoft does with Vista.
Openness and standards mean slower development of new products and lower performance; proprietary means lock-in. The logic was simple: if you’re going to be open, one of your main foci is adhering to standards, and standards tend to lag the technology while preventing your programmers from fine-tuning to wring out the last ounce of performance. (I don’t think anyone argues with the idea that proprietary, fine-tuned products lock users into a vendor.)
I don’t think this one has been disproved so much as it has become irrelevant. Most if not all software created today is mostly or entirely open in the standards sense, even when it is not open source. And there are so many other variables in software development and software performance today, like the effective use and reuse of components and the use of REST instead of SOAP, that the performance lost by using standardized code instead of fine-tuning is at best a rounding error.
If it’s Wintel it’s a de-facto standard; if it’s anything else it’s not. To put it another way, Microsoft was the only vendor that had the power to produce software that was immediately broadly accepted in the market, and used as the basis for other apps (like Office, SOAP, and Windows).
I would argue that, despite appearances, this rule still holds to some extent. Sun had a big success with Java, and IBM with Eclipse; but those were not imposed on the market, but rather made available to a rapidly-growing non-Microsoft community that chose to accept them rather than other non-Microsoft alternatives. Meanwhile, despite all we might say about Vista, a massive community of programmers and lots of end users still look first to Microsoft rather than the Linux/Java/open source community for guidance and solutions.
Here are a couple of Rules that I didn’t put down because they were politically incorrect at Aberdeen back then:
Automated speech recognition/language translation/artificial intelligence will never really take off. Despite the wonderful work being done then, and still being done, in these areas, I felt that the underlying problem to be solved was “too hard.” To put it another way, one of the great insights of computer science has been that some classes of problems are intrinsically complex, and no amount of brilliance or hard work is going to make solutions good enough for what the market would really like to use them for. The above three, I thought, fell into that category.
I note that periodically Apple or IBM or someone will announce, with great fanfare, speech recognition capabilities in their software as a key differentiator. After a while, the marketing pitch fades into the background. NaturallySpeaking has not taken over cell phones, most language translation at Web sites is not great, and there is no widespread use of robots able to reason like humans. I believe this Rule has held up just fine, and I might add “pen computing” to the list. The jury’s still out on whether electronic books fall into this category, although Paul Krugman is now giving the Kindle an excellent review in his blog.
Network computing, under whatever guise, will always be a niche. Many folks (like, I think, Larry Ellison) still believe that a dumbed-down PC that has its tasks taken over by a central server or by the Web will crowd out those pesky PCs. My logic, then and now, is that a network computer was always going to sit, price-wise, between a dumb terminal (read: a cell phone) and the PC; and products that are midway in price and not dominant in the market inevitably get squeezed from both sides.
Now, I know that many IT shops would love to go to network computing because of the PC’s security vulnerabilities, and many Web vendors hope that most end users will eventually trust all of their apps and data to the Internet exclusively, rather than having Web and PC copies. I say, ain’t gonna happen soon. This Rule still seems to be holding up just fine.
Finally, here are a couple of Rules I have thought up since then:
Open-sourcing a product means it never goes away, but it never dominates a market. That is, open-sourcing something is an excellent defensive move, but it closes off any offensive that would take over that market. Thus, open-sourcing, say, Netscape Communicator or CA-Ingres has meant that a loyal following keeps the product fresh and growing, and that new folks, like the open-source community, are attracted to it. However, it has also meant that others now view these products as offering no key differentiating technology worth a look for use in new apps.
The test case for this one today is MySQL. Over the last four years, MySQL has probably amassed more than 10 million users, but those are mostly new users in the open-source/Linux community, not converts from big Oracle or IBM shops. Look at Oracle’s revenues from Oracle Database over the last few years, and you’ll see that open-source databases have not had the expected impact on either revenues or pricing. My Rule says that will continue to be so.
And here’s a Rule that’s for the Ages:
There’ll always be Some Rules for analysts to Drag Out to detect marketing hype. From my observation, all of us are guilty at one time or another of over-enthusiasm for a technology. But I would say that the good long-term analyst is one who can occasionally use his or her Rules to buck the common wisdom – correctly.
Thursday, May 1, 2008
The Development Philosophy: Bottom Rail On Top This Time
There’s a wonderful story told (I believe) in Bruce Catton’s “Never Call Retreat” and other Civil War histories. An ex-slave with the Union armies happens upon his former master, now a prisoner of war toiling away at some task. “Hey, master,” the ex-slave says, “Bottom rail on top this time.”
I was reminded of this story when I reviewed my latest copy of Sloan Management Review. In it, there’s an excellent article by Keith McFarland about his consulting firm’s application of software-development techniques – particularly agile programming – to a company’s process of strategy formulation.
Today’s software development philosophy has in some cases moved from an inflexible “waterfall” process, often with attendant highly formalized governance, to a “spiral” process emphasizing frequent, flexible, incremental changes that narrow in on a solution, involving both developers and “end users.” Keith’s insight is that the same approach can be used to improve other processes, such as strategy formulation and modification. His experience is that this approach yields benefits to the company similar to those agile development brings to the development organization: faster time-to-value; greater buy-in by the users, here the company as a whole; and easier, more frequent modification of strategies in response to new opportunities and challenges.
Now, all this sounds like another example of cross-fertilization from one area of the business to another, all part of the current mantra “innovation is great.” But as a long-time programmer and then observer of the business development scene, I can tell you that I can’t recall ever seeing an idea come from development to business strategy. Quite the reverse, in fact.
To put it another way, much of the history of development methodologies and philosophies is one of strained analogies with something else. There was the time in the 1970s when it was assumed that development was something vaguely like writing, and therefore anyone with a background in English or history could do it. More recently, and up to the present day, programmers were supposed to be like engineers, with a cookbook of mathematical kludges to apply to any situation (remember Computer-Aided Software Engineering?). As someone who worked with electrical engineers at my first job, I have always thought this was a pernicious misunderstanding of programming skills, leading to the attitude that software and the software process would be simple if they were just engineered properly. And then there’s “development is like manufacturing,” leading to Cusumano’s hailing of Japanese software factories as the wave of the future and to Brad Cox’s hope that developers would simply snap software together like Lego blocks, not to mention the hope that Indian outsourcers, fed a set of requirements as their supply chain, could handle all new-application development tasks.
It is only in the 2000s that we have finally seen some significant flow of ideas from software to the rest of the business. In particular, folks are realizing that today’s cross-project management tools are well ahead of legacy New Product Development and Product Portfolio Management software and techniques, so a company like Schlumberger can do better by using PPM to enhance its product development tools. And companies are paying lots more attention to software development, too, because for more and more businesses, from investment banking to gaming to Google, continuing development of competitive-advantage software is key to company success.
But applying agile development to business strategy formulation is a step further. Agile development, remember, is one of the first “home-grown” development philosophies: created by developers with full appreciation of the drawbacks of previous analogies, and aiming to leverage the very characteristics that make the developer unique, such as mastery of massive amounts of detail, mathematical and creative skills, the value of individual initiative and self-generated “hacking” (in the old, good sense), and rapid prototyping. This is a process that first took off in software development, not anywhere else.
So the real story of this article is that maybe development is beginning to be not a millstone around management’s neck (as in, “why can’t I get the support for the business I need out of IT?”), nor a mysterious cabal within the organization, with its own language and norms, but a pattern of getting things done that can serve as an example elsewhere, or a harbinger of successful organizational practices to come. Why can’t the budgeting/planning/forecasting process be more flexible? How about the event-driven organization, ready to react semi-automatically to certain events on the outside? Could we open-source performance reviews? All right, maybe the last one was a little over the top.
Software development, done right, as a model for the organization! Bottom rail on top this time, indeed.