More than a decade ago, the experience of several far-seeing developers drove the creation of the Agile Manifesto for software development, which has more than stood the test of time. Now, finally, other areas of the business are considering equivalent follow-on manifestos of their own – in particular, an Agile Marketing Manifesto. In order to ensure that the fundamental insights of the original Agile Manifesto are not lost, I would like to propose a similar set of principles for the agile business in general – principles within which efforts such as an agile marketing manifesto should be expected to fit.
Because “the old ways” are much more entrenched in the business as a whole than in the software development arm of an organization, I set these principles up in opposition to what I view as some of the key principles of business as we have known it – or at least the ones that the truly agile business contradicts. And, of course, I have a bit of fun in doing so.
I also believe that these principles can serve as a draft agile marketing manifesto. The reasons why will become apparent later.
Agile Organization Goals: Fast, Effective Change
The old principle 1 (monetary goal): In the long run, the most successful are most likely to survive.
Agile principle 1 (fast, effective change goal): The successful thrive; the agile survive. In the long run, survive beats thrive.
Remember that in an agile organization, the focus is on maximizing the speed and effectiveness of change, not on maximizing short-run or long-run success as defined in monetary terms. Experience shows that in many development organizations, agile teams that focus on the speed of customer-coordinated change, rather than on cutting costs, improving quality, or speeding up development, do better in both the short and long run in monetary terms (costs, revenues, risk profile). This principle asserts that even if an organization is more successful in the short run by focusing on maximizing profit, it is less likely to survive in the long run, and less successful if it does survive, than the truly agile organization.
Agile Coopetition
The old principle 2: What does not kill you makes you stronger.
Agile principle 2 (coopetition): What tries to kill us and fails makes us weaker and more agile – and that’s good.
Recent research suggests that humans benefit most from eating foods that give us “moderate poisoning.” That is, they do not kill us, but they do weaken some existing systems. In order to adapt to these weaknesses, the body grows new capabilities – physical and mental – that let us handle a wider range of situations – that make us “survive by fitting” a wider range of environments. By contrast, foods with no poisoning capabilities offer much less benefit, and foods that cripple us offer no benefit at all.
There is a reasonable analogy here with competition vs. what is now called coopetition. The non-agile business assumes that one “wins” in the long term by crushing the competition, and that the company therefore emerges from the process stronger. This is doubtful; in the long run, over-success in competition leads to monopoly behavior that decreases innovation and makes “crossing the chasm” much more difficult.
The agile business seeks to create “safe” competition by combining competition with cooperation – the same business that competes with you in one area cooperates with you in another, and neither side expects to win overall in the long run. Rather, relative weakness in one area leads either to new capabilities or to a redefinition of focus, with the growing weakness of a particular area serving as a signal that it is time to move to something different. In the short run, that competition makes the agile vendor weaker. In the long run, the safety of that “competition” leads to increased agility, which leads to long-term survival and success. Meanwhile, cooperation in another area expands the solutions that the firm can offer and provides additional customer feedback.
Agile Culture: Happiness is Customer-Coordinated Change
The old principle 3: Greed is good.
Agile principle 3: Greed is bad. Customer-coordinated change is good.
Immortalized as the “invisible hand”, the pure form of the old principle 3 asserted that focusing on maximizing profit was good for the individual employee, good for the company, good for the consumer, and ultimately good for the economy and society. The agile principle asserts that focusing on speed and effectiveness of “safe” change is better for the individual employee, better for the consumer, and ultimately better for the economy and society in both short-term and long-term monetary and “social welfare” terms.
This is not a simple matter of “doing well by doing good.” Rather, the agile business recognizes that effective change must be constantly and firmly grounded in consumer reality, and that treating the consumer as an infinitely malleable toy to be squeezed of every last drop of money has bad long-term effects not only on survivability but also on corporate culture and thence on corporate success and “social welfare.”
Agile Strategy: Focus on Who
The old principle 4: If you don’t know where you’re going, any road will take you there.
Agile principle 4: If you don’t know who you will meet, you don’t know where you’re going.
As shown in such works as “Marketing Myopia”, the traditional firm accepts the global mass “market” as a given, as it is now, and tries to determine just what market within that global one fits the firm best, and how the firm can steadily increase its profits within and by extensions of that market. The agile strategist focuses on a global market full of individual consumers with common lifecycles and fine-grained differences, and seeks to capture as much as possible of the lifecycle and time allotment of the largest group of such consumers as their needs evolve over time. When the agile firm talks about “continuous delivery” of what the customer wants, that is what it means. The traditional firm focuses on the visible, static market of an undifferentiated mass of needs; the agile firm focuses on understanding and meeting the inevitably changing individual needs of each of a class of individual consumers.
The agile firm believes that such a strategy gives it two key advantages over the traditional firm: it grounds the agile firm in reality rather than its own wishes about what the customer “should” be like in order to maximize firm profit, and it focuses the agile firm on striving toward the most effective kind of long-run strategy: cradle-to-grave, comprehensive, open-ended customer support.
Agile Target Market: Learners
The old principle 5: People want solutions.
The new principle 5: People buy emotions.
Agile principle 5: People buy learning.
This agile principle can be seen as a corrective to the extremes of the old and new principles. It is true that in the short run, people will go out and buy a solution for an immediate problem, and that given extra money with which to buy things, people will then choose a solution that makes them feel better, or even an “emotional trigger” without any usefulness at all. However, in the long run, people usually continue buying in a particular area because it continues to relate to the ongoing reality of their lifecycle. Moreover, the best kind of customer buys because he or she enjoys the open-ended pleasure of constantly learning more in a “safe” (usually successful) way.
The alternative customer is the addict: one that develops such a need for an offering that the addict is willing to continue to spend on it forever, as it becomes less and less useful. As industries like tobacco and alcohol have shown, the addict can be a successful short-term market. However, firms in such industries tend to be unusually un-agile – reflecting their customers. Moreover, these industries react to the inevitable challenges of evolving customer needs by attempting to “game the system” and prevent these changes from happening – increasing the costs to society as a whole, as well as the impact to the firm of the inevitable failure to prevent change. The tobacco and alcohol industries are among the least successful over the last 60 years. By contrast, the computer software industry, which ranks among the most agile, is close to the most successful, and software companies that focus on the consumer rather than the business are more successful still (see agile principle 4).
There is another implication of agile principle 5: the agile business should never simply react to the customer, but rather constantly alternate getting ahead of the customer and reacting to the customer. Helping a person to learn forever means constantly generating new, follow-on, useful things for the customer to learn, as the customer moves through the stages of a life. The customer, in return, tells you which of these innovations is desired and what his or her own innovations are.
This Is Also A Draft Agile Marketing Manifesto
The reason these principles apply verbatim to agile marketing, I believe, is that in the agile business, marketing will probably become a “loosely coupled CEO”. That is, agile marketing’s primary job will be to integrate the constant iteration of new products into a long-term, effective business strategy.
This is not to say that this is marketing’s present job, nor that those trying to achieve agile marketing do integration very well right now. Rather, it says that marketing is the only existing business function that can carry out this necessary task effectively in the agile organization. The other functions are too focused on particular parts of the overall business’ agile process. The CEO simply does not have the capability of handling all the integrating tasks that must be done.
Rather, individual marketers, empowered by their unique focus on long-term customer need evolution, must connect and coordinate the enthusiastically wandering dogs of the other agile business functions. Meanwhile, the CEO imposes a single long-term personality on the agile marketers’ work, giving the results a touch of individuality that appeals to the individual consumer – much the same role that a great soloist plays in giving an old war-horse of a classic musical piece new life, or the kind of role that Steve Jobs occasionally played toward the end.
Thus, agile marketing in the long run will be effectively carrying out the agile organization’s principles – but in an agile way, in which the customers are equally the other agile business functions and the firm’s actual customers.
The Long Road Ahead
I say this is a draft manifesto; but, to me, it is also a platform with which to challenge other upcoming manifestos. I believe that if another manifesto contradicts or ignores these principles, there may very well be fundamental problems with that manifesto. If so, these principles give me (and, I hope, others) a basis for criticizing and, if necessary, rejecting such a manifesto. We are trying to create manifestos at a much earlier – dangerously early – stage of our understanding of business agility. It may be OK not to get a manifesto right; but it is very important not to get it irretrievably wrong.
This is why I am so dogmatic about what goes into an agile manifesto. If the experience of agile software development and its alternatives teaches us anything, it is that if you think agile, your process becomes more and more agile and more and more successful; while if you don’t, it doesn’t. There doesn’t seem to be much of a middle ground. And it can be hard to wrap your mind around such a different way of thinking, and therefore easy to backslide subtly but permanently.
I will try in the next few weeks to flesh out the agile marketing “thought experiment” on which these suggested principles are based. In the meanwhile, enjoy – or not.
Tuesday, April 3, 2012
Go, Go, Go, Said the Old Programmer
The other day, a query crossed my desk about Google’s public release of its new Go programming language – a fascinating and probably often applicable new take on the genre. As sometimes happens in these cases, what remains of my memories of computer science resurfaced as I read Google’s description of Go. And so, what follows are some possibly useful thoughts about its features.
Before I start, let me sum up, for those who dislike tech jargon (and especially jargon from someone who has not wielded a debugger in anger for a couple of decades): Google Go is worth playing with, for a wide variety of programming needs. Check it out.
Error Handling Without Exceptions
I have been waiting for many years for some recognition that error handling is on a level with writing correct code. To put it another way, each recognition of the possibility of an error is like the old case statement: good programming treats the “error” cases as just as important to code as the “correct” case.
Google Go has two back-to-the-future ways of approaching this: first, rule out “exceptions”, and second, allow multiple return variables/values. I agree with their critique entirely: an exception was always treated exceptionally, making programming more complex, and the ease (and decrease in bugs) from having one return value was more than counterbalanced by the complexity of trying to report errors in odd ways.
I do have one caveat: I am not yet convinced that programmers don’t need some form of coercion to make sure that errors are adequately handled. I don’t care whether it’s a required error value in the multi-value return or what, I just feel that error handling is still not as good as it should be.
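For concreteness, here is a minimal sketch of the multiple-return-value idiom as I understand it from Google’s description; the divide function and its error message are my own invention, not Google’s example.

package main

import (
	"errors"
	"fmt"
	"os"
)

// divide returns both a result and an error value; the caller is expected
// to check the error explicitly rather than rely on an exception.
func divide(a, b float64) (float64, error) {
	if b == 0 {
		return 0, errors.New("division by zero")
	}
	return a / b, nil
}

func main() {
	result, err := divide(10, 0)
	if err != nil {
		fmt.Fprintln(os.Stderr, "error:", err)
		return
	}
	fmt.Println("result:", result)
}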
No Assertions?
I hear Google’s point that these have been misused in error handling. However, I would suggest that the proper place to handle this type of misuse might be in the compiler, if possible. Assertions should still be used to check assumptions between two key lines of code, because simply forcing proper error handling does not account for major blind spots in the programmer’s view of his/her own code or his/her knowledge of the language. In other words, there still needs to be a final fallback check.
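Since Go leaves assertions out, the fallback I have in mind is simply an explicit check that panics. The checkInvariant helper below is purely hypothetical, a rough sketch rather than anything Google recommends.

package main

import "fmt"

// checkInvariant is a hand-rolled stand-in for an assertion: it panics
// if the stated assumption does not hold at this point in the code.
func checkInvariant(ok bool, msg string) {
	if !ok {
		panic("invariant violated: " + msg)
	}
}

func main() {
	items := []int{3, 1, 4}
	checkInvariant(len(items) > 0, "expected a non-empty slice")
	fmt.Println("first item:", items[0])
}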
High-Level Concurrency and Goroutines
Now here’s a refreshing approach to concurrency and parallelism! It passes belief that we have not been able to categorize types of concurrency well enough to take the mechanisms up a level and hide things like mutexes. Likewise, goroutines seem a small-overhead way of adding some additional “multithreading” parallelism – and I suspect that this will go part of the way toward meeting the need to scale software multithreading now that multicore chips are muddying the waters.
At the same time, I note a lack of consideration of the equivalent problem in data sharing. Let’s face it, not all data is isolatable in objects, and so concurrency of unitary data-processing functions (e.g., add, delete, update, read) needs a database-type “commit” consideration of concurrency.
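As a rough sketch of what “taking the mechanisms up a level” looks like in practice, here is a toy goroutine-and-channel example of my own; the synchronization is done entirely by the channels, with no explicit mutex in sight.

package main

import "fmt"

// square reads numbers from in, squares them, and sends the results to out.
// The channel operations do the synchronization; no mutex appears.
func square(in <-chan int, out chan<- int) {
	for n := range in {
		out <- n * n
	}
	close(out)
}

func main() {
	in := make(chan int)
	out := make(chan int)

	go square(in, out) // run the worker concurrently as a goroutine

	go func() {
		for i := 1; i <= 5; i++ {
			in <- i
		}
		close(in)
	}()

	for result := range out {
		fmt.Println(result)
	}
}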
No Type Hierarchy, No Type Inheritance, Interfaces
I don’t think I have a real feel for what “interfaces” are, but the idea of jettisoning object-oriented programming’s type hierarchy is a fascinating experiment. The reusability and careful thought about relationships between object classes have always been to the good, but at scale (something that we have been butting up against since the mid-1990s), the sheer effort of keeping track of that hierarchy placed severe limits on the ability to reuse code, and slowed down programming. I hope this one works; if it does, it will give a major boost to programming speed for Java-style programmers.
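For what it’s worth, here is a minimal sketch of how I read “interfaces”: a type satisfies an interface simply by having the right methods, with no declared inheritance relationship. The Shape example below is my own hypothetical one.

package main

import (
	"fmt"
	"math"
)

// Shape is an interface: any type with an Area() float64 method satisfies it,
// without declaring any inheritance relationship.
type Shape interface {
	Area() float64
}

type Circle struct{ Radius float64 }
type Square struct{ Side float64 }

func (c Circle) Area() float64 { return math.Pi * c.Radius * c.Radius }
func (s Square) Area() float64 { return s.Side * s.Side }

func main() {
	shapes := []Shape{Circle{Radius: 2}, Square{Side: 3}}
	for _, s := range shapes {
		fmt.Printf("%T has area %.2f\n", s, s.Area())
	}
}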
Focusing on String and Map Support
I don’t have a good feel for this approach to maps, as yet, but thank heavens someone realized that strong string operators were a very good idea. Anyone who ever had the privilege of using SNOBOL soon realized just how useful these can be.
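Here is a small sketch of the built-in map type together with the standard strings package; the word-count example is my own toy, not Google’s.

package main

import (
	"fmt"
	"strings"
)

func main() {
	text := "the quick brown fox jumps over the lazy dog the end"

	// strings.Fields splits on whitespace; the built-in map counts occurrences.
	counts := make(map[string]int)
	for _, word := range strings.Fields(text) {
		counts[strings.ToLower(word)]++
	}

	fmt.Println("occurrences of \"the\":", counts["the"])
	fmt.Println("contains \"fox\"?", strings.Contains(text, "fox"))
}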
Garbage Collection and Compiled Code
I regard this as a sign that finally compilation speed within a virtual machine has come of age. It was always true that compilation is preferable to interpretation if compilation is infrequent, as long as the recompilation “hiccup” is not significant. I hope that Google Go really can make this work; there is, imho, a fair amount of legacy Java code out there that’s really sub-optimal in performance.
And, by the way, embedding decent online garbage collection in the language runtime is a major step forward as well. After witnessing the birth pangs of JVMs in the late 1990s, I am well aware of the perils of memory leaks and other related ailments. I really think that, this time, we can do garbage collection and long-running object-oriented programs without major overhead. If so, it’s going to be another improvement in software quality from Go.
Assumed Architecture
If I understand what Google are saying, there’s a complex interplay between clients and servers in the typical Go process that should provide some better scalability in the typical Web situation. I’m not entirely convinced that the architecture is flexible enough, but it seems to me pretty easy to improve on the present client/Web server/app server architecture. I’m all for trying it.
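As a concrete (if minimal) illustration, the standard library’s net/http package handles each incoming request in its own goroutine, which is roughly the scalability story I read into Google’s description; the handler below is just a hypothetical sketch.

package main

import (
	"fmt"
	"log"
	"net/http"
)

// Each incoming request is handled in its own goroutine by the net/http
// server, so request-level concurrency comes essentially for free.
func hello(w http.ResponseWriter, r *http.Request) {
	fmt.Fprintf(w, "Hello from Go, %s\n", r.URL.Path)
}

func main() {
	http.HandleFunc("/", hello)
	log.Fatal(http.ListenAndServe(":8080", nil))
}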
No Pointer Arithmetic
Oh, yes. Finally, after all these years, that mess left over from the original C design back in the 1970s can be eliminated. It was a cute idea when it arrived; but it was soon apparent that its problems from increased nasty bugs outweighed its small benefits in performance in certain cases. I hope this puts a stake through its heart.
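A quick sketch of what replaces it: Go keeps pointers but forbids arithmetic on them, and indexing and slicing cover the cases where a C programmer would have walked a pointer.

package main

import "fmt"

func main() {
	data := []int{10, 20, 30, 40}

	p := &data[1] // taking a pointer is fine
	*p = 25       // and so is dereferencing it
	// p++        // but pointer arithmetic like this will not compile

	// Slicing and indexing replace the usual C-style pointer walking.
	for i, v := range data[1:3] {
		fmt.Println(i, v)
	}
}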
Minor Caveats
Google has noted that performance out of the box is in some cases not within striking distance of C. They make a reasonable case that this will go away; but frankly, as long as Go stays within shouting distance of C, it should be preferable. It is much better at preventing nasty little programmer errors, which means that faster programming will lead to faster performance improvements, more than making up for C’s “coding on the bare metal” virtues.
Meanwhile, I’m still not convinced that Go (like all the other object-oriented or semi-object-oriented languages) really does an adequate job of handling data. The emphasis on structure orthogonality is good, but from my viewpoint, minor. We are still back in the days of the object-relational mismatch.
Simplified typing? Simplified dependency management? Simplified multicore concurrency? I’m not convinced; but I’m definitely hoping it’s true in the general case, not just within Google. Anyway, it’s worth it just to see another experiment shake up the programming world.
Repeating the Message
The quote in the title, by the way, adapts a line from “Burnt Norton”, the first of T.S. Eliot’s Four Quartets: “Go, go, go, said the bird …”. Like Eliot’s bird, an old programmer thinks you should go and play with Google’s Go. After lo these many years of dealing with Java and JavaScript development, at the least, it will be a mind-opening change of pace.
Labels: Google Go, IT development, programming languages