Wednesday, October 31, 2018

Climate Change and Economics: The Invisible Hand Never Picks Up the Check


Disclaimer:  I am now retired, and am therefore no longer an expert on anything.  This blog post presents only my opinions, and nothing in it should be relied on.

Over the past few days, I have been reading Kim Stanley Robinson’s “Green Earth” (the single-volume edition of his climate-change trilogy), an examination, thinly disguised as science fiction, of possible futures and strategies for dealing with climate change.  One phrase in it struck me with special force:  “the blind hand of the market never picks up the check.”  To put it in more economic terms:

·         Firms, and therefore market economies as a whole, typically seek to maximize profit; and because the path to profit from new investment is always uncertain, they tend to focus on minimizing costs within a chosen, relatively conservative profit-maximization strategy.

·         To minimize costs, they not only adopt new technologies (productivity enhancement) but also offload costs as far as possible onto other firms, consumers, workers, societies, and governments.  Of these, the hardest target is other firms (e.g., via supply-chain management), since those firms are also competing to minimize their own costs and therefore offload right back.  So, especially for the large global firms that dominate today’s markets, the name of the game is not only to push costs onto workers, consumers (consider help desks), and societies/governments, but also to extract “subsidies” from them:  time flexibility or unpaid overtime from workers; consumers performing more of the work of the sales transaction (and bearing more of its risk); and governments providing not only direct subsidies but also infrastructure support, education and training of the workforce, and disaster response, which now includes climate change.

Often, especially in regard to climate change, economists refer to the process of the invisible hand never picking up the check as the “tragedy of the commons.”  The flaw in that analysis is that it implicitly limits one’s gaze to tangible property.  If one instead uses money equivalents as a broader metric, it becomes clear that it is not just “common goods” that are being raided, but personal non-goods as well:  above all, people’s time, whether as workers, as consumers, or as neither.  That lost time translates into poorer health and less ability to cope with life’s demands, sapping productivity both directly and through its effects on each person’s support system, while the government’s ability to compensate shrinks as it is starved of money.  And all of this still does not capture the market’s ability to “game the system” by monopolizing government and the law.
Another point also struck me when I read this phrase:  macroeconomics does not even begin to measure the amount of that “cost raiding”, instead referring to it as “externalities”.  And therefore:
Economics cannot say whether market capitalism is better than other approaches, or worse, or the same.   It cannot say anything at all on the subject.

Further Thoughts About Economics and Alternatives to Market Capitalism


A further major flaw, imho, in economics’ approach to the whole subject is the idea that cost minimization should be not merely a desirable end but the major goal of an enterprise.  I am thinking specifically of the agile company.  As I have mentioned before, agile software development de-emphasizes cost, quality, revenue, time to market, and profit in favor of constantly building in the flexibility to adjust to, and anticipate, the changing needs of the consumer.  And yet agile development outperforms approaches that do concentrate on those metrics, by at least 25% and sometimes by 100%.
If the entire economy were based on truly agile firms, I would suggest that we would see a comparable improvement in the economy – permanently.  Moreover, the focus on the consumer should lead to a diminution in “cost raiding”.  Being truly in tune with the consumer’s needs, for example, should reduce the raiding of the consumer’s time in the sales transaction and the forcing of consumers through the help-desk bottleneck.  And I still live in hope that agile development, with fewer time constraints, will empower developers to seek out and implement their own tools to improve processes, thereby allowing better retraining.

Implications of Climate Change for Economics and Market Capitalism


Robinson includes a critique of market capitalism in his work, and concludes that it has to change fundamentally.  I find the critique itself problematic; but that doesn’t mean he isn’t right in his conclusion.
The fundamental question to me is, what happens when externalities go in reverse, and suddenly the things that have led to ongoing profits lead to ongoing losses?  Robinson paints a frightening picture of a world in which brownouts, blackouts, killing cold, and killing heat are common, and insurance, whether private or governmental, cannot adequately compensate, leading additional costs to settle, inexorably, on their last resort, business.  Then, implicitly, firms must cannibalize each other, with the largest being best equipped to do so.
I tend to put things in less apocalyptic terms.  According to Prof. Brad DeLong, GDP growth can be thought of as partly improvement in productivity and partly expansion of the workforce.  The climate-change scenario necessarily implies a shrinkage of that workforce (in labor-hours) faster than productivity can climb, and therefore a constantly shrinking market.  In that case, the market’s rising need for “cost raiding” as it shrinks simply speeds up the shrinkage of the market, not to mention of the underlying societies.  And that, to me, is the fundamental flaw that needs correcting.
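To make that arithmetic concrete, here is a minimal sketch in Python.  The growth rates are invented purely for illustration, not estimates of anything; the point is only that when labor-hours shrink faster than productivity rises, the market shrinks year after year.

    # Illustrative only: GDP growth approximated as productivity growth
    # plus labor-hours growth (the DeLong-style decomposition cited above).
    # The rates below are invented for illustration, not forecasts.

    def gdp_growth(productivity_growth: float, labor_hours_growth: float) -> float:
        return productivity_growth + labor_hours_growth

    gdp_index = 100.0
    for year in range(1, 11):
        # Suppose productivity rises 1% a year while climate damage shrinks
        # available labor-hours by 2% a year.
        gdp_index *= 1.0 + gdp_growth(0.01, -0.02)
        print(f"Year {year}: GDP index = {gdp_index:.1f}")
    # The index falls roughly 1% per year: a perpetually shrinking market.
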
Theoretically, one option is to capture things like “the social cost of carbon” in company accounting, an idea I wrote about five years ago.  Practically speaking, the uneven effects of doing so would fall heavily on the employees of coal and oil companies; we have already seen a small foretaste of that, and it has also revealed the ability of oil and coal companies to snarl the political process entirely and so prevent adequate steps toward limiting “cost raiding”.  As a result, real-world carbon-pricing efforts are more likely than not to be inadequate to reverse the “cost raiding” trend.
The obvious alternative, which I and others have argued for, and which I in fact picked up on eight years ago when I first understood the dire implications of climate change, is “World War II in America”:  governmental intervention in the economy comparable to that of WWII, in order to “win the war on climate change”.  Only, of course, the aim now is to lose the war with as little damage as possible.  So suppose we do that; what then?
The obvious answer is, “sustainability” – meaning practices that will ensure that having “won the war”, we don’t lose it again in the future by slipping back into the old carbon-guzzling, ecology-devastating, arable-land-destroying habits.  Is that enough?  Robinson says no, that despite sustainability, cost raiding will continue to increase in other areas.  And here I tend to agree with him, although I am not sure.
Reverting to Prof. DeLong’s point above, it appears possible, with sustainability, to continue improving both human welfare and corporate profitability by improving productivity with a more or less stable (almost certainly shrunken) population and workforce.  However, productivity improvement may well be slower than in the Industrial Revolution; it has already slowed for an unduly long time.  And if that is the case, then there is no market-capitalism path forward that delivers today’s increases in corporate profitability while avoiding increases in cost raiding.
I don’t know the answer to this.  I feel, however, that the beginnings of an answer lie not in perpetually increasing the size of the workforce by improving human welfare while somehow not increasing population, but rather in perpetually increasing “consumer productivity”:  the value that people get out of their lives, which they can then invest in others.  More specifically, I think markets can be divided into those for carrying out daily tasks (“Do”), those for socializing and participating in society (“Socialize”), and those for learning and creating (“Learn”).  A balance must be kept among these in any individual’s life, so the perpetual increases must be achieved inside each of the three sets of markets.
I would argue that today’s market economies use “Do” to crowd out much of the other two sets of markets, and are less good at perpetually increasing the value of “Socialize” and “Learn”, although the crowding-out may mean that “Do”’s superiority is illusory.  I have no clear idea as to what to do about my conclusions, except to examine each set of markets more closely to gain clues as to how to achieve this perpetual value increase.
Just some thoughts.  And oh, by the way, Robinson is indeed worth reading.

Tuesday, October 30, 2018

Climate Change Fall 2018: A Personal Addendum


Disclaimer:  I am now retired, and am therefore no longer an expert on anything.  This blog post presents only my opinions, and nothing in it should be relied on.
One thing I did not note in my recent climate-change update:  the CO2 data from Mauna Loa now show that CO2 levels (averaged over the past half-year plus a projection of the next six months) reached 409 ppm in September.  That is only about three years after the same measure reached 400 ppm, and on the current trend it will pass 410 ppm in less than six months.
I am told that I have, on average, 8½ more years to live.  By the time I am dead, CO2 will in all likelihood have reached 430 ppm, and may well be approaching 440 ppm.  By 2050, if things simply continue linearly instead of accelerating the way they have for the past 60 years, we will be at roughly 500 ppm, approaching double the roughly 280 ppm level at the start of the Industrial Revolution.  According to James Hansen and others, that bakes in a long-run global temperature rise of 4 degrees Celsius (about 7 degrees Fahrenheit) since then, with at least 2 degrees C in the short run, or roughly another 2 degrees F beyond where we are right now.
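For what it’s worth, a simple back-of-the-envelope linear extrapolation from the figures above (409 ppm in 2018, rising at roughly 2.8 ppm per year, a rate I am inferring from the 400-to-409 climb over about three years) reproduces those numbers.  It is a sketch, not a forecast.

    # Rough linear extrapolation of atmospheric CO2 from the figures cited
    # above.  Illustrative only; real concentrations have been accelerating.
    PPM_2018 = 409.0       # approximate 2018 level cited in the text
    PPM_PER_YEAR = 2.8     # roughly (409 - 400) / 3 years

    def ppm_in(year: int) -> float:
        return PPM_2018 + PPM_PER_YEAR * (year - 2018)

    for year in (2027, 2050):
        print(f"{year}: ~{ppm_in(year):.0f} ppm")
    # 2027 (about 8.5 years out): ~434 ppm; 2050: ~499 ppm -- in line with
    # the 430-440 ppm and ~500 ppm figures above.
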
Heckuva job, humanity.

Local Markings of Climate Change These Days


I have lived in the Northeast US for all of my 68-year life, the last 40 years of it near Boston.  This year has brought weather changes I cannot remember ever seeing before.
It is now a day before Halloween.  For the first time ever, most of the leaves are still on the trees.  Leaf coloration only began happening in early October, the latest ever; it used to happen in mid-September. 
In late October, shortly before a playoff game was to be played in Fenway Park, there was a thunderstorm.  In my memory, that has never happened here in late October.  As a matter of fact, thunderstorms used to happen around here only once or twice in mid-summer – if that.
This last summer was hot (as usual) and humid (something that has been happening only in the last 10 years).  It started in late June and went full tilt until mid-September, which has also never happened before, with typical “feels like” temperatures in the upper 80s to low 90s F.  Many days, I stayed indoors all day and night.
All year, the wind has been strong – typically 10 mph faster than even 15 years ago.  My backyard is well shielded by trees from the wind, and until the last couple of years I could look out and not see the leaves and branches moving.  This year, I typically see them moving even close to the house.
There has been a lot of rain this year.  What’s unprecedented is that most rains are hard rains, with big raindrops hammering on the roof.  Going out for a walk during a rainstorm, with the wind blowing your umbrella wildly, the streets flooded an inch or three deep, and the large raindrops driven horizontally onto your clothing, is contraindicated in most cases.  So even in the spring and fall of this year, there are days I spend indoors all day and night.
And I know that from here on, on average, it all only gets worse.  100-mph nor’easter, anyone?

Thursday, October 18, 2018

Reading New Thoughts: O’Reilly’s What’s The Future, the Agile Entity, and Prediction


Disclaimer:  I am now retired, and am therefore no longer an expert on anything.  This blog post presents only my opinions, and nothing in it should be relied on.

Tim O’Reilly’s “WTF?:  What’s the Future and Why It’s Up to Us” is, by turns, an insightful memoir of many of the computer-industry events that I lived through and to some extent participated in; a look at the technologies now coming out of the computer industry and its spinoffs, with (to my mind) an over-emphasis on Uber and Lyft; and an attempt to draw general conclusions about how we should all anticipate what is going to impact us in the future and how we should “ride the wave”.
First, I want to add a caveat that I think should become a Law:

The future happens faster than we think – and then it happens slower than we think.

By this I mean:  when new technological breakthroughs arrive, not all of them are obvious to the particular part of the economy we pay attention to, even if (today) they are linked by software technology.  Then, even once they seem like the “new new thing” everywhere in our particular area, they typically take 10-30 years to spread to the world at large.  For example, smartphones and their apps (themselves now more than 10 years old) are by no means ubiquitous in the Third World, despite the hype.
I’d like to note here several instances of the future arriving “faster than we think”, some profiled in WTF.  Among the ones that I find amazing (and sometimes frightening):
·         We can now alter and replace targeted DNA and RNA segments roughly 20 bases long, and hence genes in general, not only for the next generation but, in many cases, over the course of a few months, for ourselves.  The underlying work happened less than 10 years ago, practical implementation was achieved less than 5 years ago, and the work (led by Jennifer Doudna and her team, and described in her book) has been recognized with major scientific prizes.

·         Pictures of anyone can be inserted seamlessly into a different scene, making it very hard to tell whether the news pictures we see every day are genuine.

·         Understandable automated language translation (e.g., Google Translate), automated speech recognition, and automated image recognition have all arrived (although truly “good” speech recognition has still not been achieved).

·         Semi-automated bots generate comments on articles and in blogs that are often indistinguishable from the ungrammatical and rambling comments of many humans.  Hence, the attempts at hacking political elections and the increasing difficulty of figuring out the truth of events in the public arena, “crowded out” as they now sometimes are by false “rumors.”

More subtly:
·         Uber and Lyft create “instant marketplaces” matching buyers and sellers of taxi services.

·         Supermarkets now dictate to growers production of specific items with detailed specification of quality and characteristics, based on narrow segments of the consumer market.

Now, let’s talk about what I think are the new thoughts generated by WTF.  In particular, I want to suggest that O’Reilly’s view of how to “predict” which oncoming technologies should be factored into one’s business, government, or personal strategy going forward, how to fit these into an overall picture, and how to use that picture to develop strategy, is really of most use in developing an “agile strategy” for an agile entity.

WTF Key Technologies and Strategies


Perhaps the best place to start is with WTF’s “Business Model of the Next Economy,” i.e. a model of the typical firm/organization in the future.  There are many subtleties in it, but it appears to break down into:
·         Central “Networked Marketplace Platforms”:  distributed infrastructure software that provides the basis for one or more automated “marketplaces” in which buyers and sellers can interact.  In the supply chain, the firm would be the primary buyer; at the retail level, it would be the primary seller.  (A minimal code sketch of the marketplace idea appears after this list.)

·         Feeding into these platforms, an approach that “replaces markets with information” – instead of hoarding information and using that hoarding to drive monopoly sales, the firm releases information openly and uses the commoditization of the product to drive dominance of new products.

·         Also feeding into the platforms, a new approach to user interfacing that seeks to create “magical experiences.”  This particularly enhances the firm’s and platform’s “reputation.”

·          Another “feeder” is “augmented” workers – workers enabled by rather than replaced by AI-type software.

·         A fourth “feeder” is “On-Demand” talent and resources:  workers whose talents give them added value, applied flexibly, as needed.  This includes an emphasis on actively helping workers to succeed, including over the long run.

·         A fifth feeder – somewhat complementary to the fourth – is “Alternatives to Full-Time Employment”, where the emphasis is on being flexible for the benefit of the worker, not the employer; the takeaway is that this actually benefits the employer more than WalMart-style “show up when we need you and never mind your personal life” approaches.  The key novelty of this approach is that it is “managed by algorithm”:  the algorithm allows both the employer and the employee to manage their needs in a semi-automated fashion.

·         Returning to the business itself, the final feeder to the marketplace platform is “Services on Demand”, which offers the consumer an interface providing an ongoing service rather than simply selling a product.  This is enhanced by “marketplace liquidity”:  ways to make it easier for the consumer to buy the service.
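To make the “networked marketplace platform” at the center of this model a bit more concrete, here is a deliberately tiny sketch of a platform matching buyers’ requests to sellers’ offers.  The class and field names are my own inventions for illustration, not anything drawn from WTF; a real platform would add pricing dynamics, reputation, geography, and much more.

    # Tiny illustrative "networked marketplace platform": sellers post offers,
    # buyers post requests, and the platform matches them.  Names are invented.
    from dataclasses import dataclass, field

    @dataclass
    class Offer:
        seller: str
        service: str
        price: float

    @dataclass
    class Request:
        buyer: str
        service: str
        max_price: float

    @dataclass
    class Marketplace:
        offers: list = field(default_factory=list)

        def post_offer(self, offer: Offer) -> None:
            self.offers.append(offer)

        def match(self, request: Request):
            # Return the cheapest qualifying offer, or None if there is none.
            candidates = [o for o in self.offers
                          if o.service == request.service and o.price <= request.max_price]
            return min(candidates, key=lambda o: o.price, default=None)

    market = Marketplace()
    market.post_offer(Offer("driver_a", "ride", 14.0))
    market.post_offer(Offer("driver_b", "ride", 11.5))
    print(market.match(Request("rider_1", "ride", 15.0)))  # driver_b's cheaper offer wins
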

At this point I return to my caveat/Law from the beginning.  This “next economy” model is already operating in parts of the computer industry and related fields (e.g., Amazon, Google, Lyft):  the future has already happened faster than we think.  At the same time, it will take longer than we think for the model to diffuse across the majority of organizations, if it ever does.  Government and law are two obvious places, considered in WTF, where this model holds great potential but will take a long, long time to apply effectively.
If the object of the game is to “ride the technology wave” by predicting which oncoming technologies should be factored into one’s business, then the technologies in this model are relatively safe bets.  They are already past the stage of “timing”, where the technology is attractive but it may not yet be time for the market to implement it.  As WTF points out, the trick is not to simply latch on to a strategy like this, but to constantly update the model and its details as new technologies arrive at their “timing” stage. 
Enter the agile strategy.

Prediction and the Agile Entity


The agile process is, on its face, reactive.  It does not attempt to get out ahead of the combined wisdom of developers/process-users and consumers/end-users.  Rather, it seeks to harvest that wisdom rapidly in order to get out in front of the market as a whole, and only for the purposes of each development/rollout process.
An agile strategy (which, up to this point, I haven’t examined closely) should be a different animal.  Precisely because any strategy spans a firm or organization’s entire set of new-product-development efforts, and also aligns the rest of the organization with them, an agile strategy should be (a) long-term and (b) to a significant degree ahead of current markets.
In the case of the strategy outlined in the previous section (implement the “new business economy” model), one very straightforward way of adding agility to the strategy would be to add agility to the software and analytics used to implement it.  One tried-and-true method is refactoring toward abstraction:  adding a layer of abstraction to the software so that it is relatively easy to change.
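As a minimal illustration of that kind of abstraction layer (the class and function names here are hypothetical, not drawn from any particular product), strategy code can be written against an interface so that the underlying implementation can be swapped without touching the callers.

    # Minimal sketch of an abstraction layer: callers depend only on the
    # interface, so the underlying engine can be swapped as the strategy
    # changes.  All names here are hypothetical.
    from abc import ABC, abstractmethod

    class DemandForecaster(ABC):
        @abstractmethod
        def forecast(self, history: list) -> float: ...

    class MovingAverageForecaster(DemandForecaster):
        def forecast(self, history: list) -> float:
            window = history[-3:]
            return sum(window) / len(window)

    class LastValueForecaster(DemandForecaster):
        def forecast(self, history: list) -> float:
            return history[-1]

    def plan_next_quarter(forecaster: DemandForecaster, history: list) -> float:
        # Strategy code sees only the abstract interface.
        return forecaster.forecast(history)

    history = [100.0, 110.0, 105.0, 120.0]
    print(plan_next_quarter(MovingAverageForecaster(), history))  # 111.66...
    print(plan_next_quarter(LastValueForecaster(), history))      # 120.0

Swapping in some future forecasting engine then means changing only the object passed in, not the strategy code that uses it, which is the kind of flexibility an agile strategy needs.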
Another method is simply to plan to revisit the strategy every 3-12 months.  The agile CEO I interviewed and reported on in a blog post five years ago did exactly that:  a 5-year plan, revisited and informed both by outside feedback and by the information he gathered by attending scrum meetings.
WTF adds a third dimension:  attempt to discern upcoming technologies and approaches that are “important”, and then “time” the shift to a new strategy incorporating those technologies/approaches.  “Prediction,” in these terms, means anticipating which oncoming technologies/approaches are important and also the pace of their evolution into “timely” products and services.
I would argue, however, that this is precisely where an agile strategy adds value.  It does not assume that what seems important now stays important, or that an important technology or approach will arrive in the market within the next 5 years; rather, it assumes that whatever steps we take toward preparing for a new technology or approach must be flexible enough to let us switch to another one even midway through the process.  For example, we may move toward augmenting our workers with AI, but in such a way that we can instead fully automate one set of workers in order to augment a new type of worker whose responsibilities include those of the old.  We would be, in a sense, “refactoring” the worker-task definition.
So here’s my take from reading WTF:  It should be possible, using WTF’s method of anticipating change, to implement an agile strategy as described.  Moreover, an agile strategy should be clearly better than usual ones.  Usual strategies and agile processes do not anticipate the future; agile strategies such as this do.  WTF-type strategies anticipate the future but are not flexible enough to handle changes between identification of the future and the time for its implementation; an agile strategy should be able to do so.