I see that Anthony Tommasini of the NY Times has joined the crowd who attempt to rank the top classical composers of all time. Since, as Josephine Tey once wrote, far too many books have been written on such subjects as it is, I was strongly tempted not to respond to his article. However, it does give me another reason to remember some of the cherished experiences of my life – and I would like to try, yet again, to lift a lone voice against the “loud orchestra bigots” who dominate today’s classical music scene.
Aside from our differences over whether loudness should be valued for its own sake, and whether works involving large numbers of instruments or voices should be valued over others, I differ with Mr. Tommasini in one more subtle but important way: he appears to value a composer for stretching the compositional palette through innovation; I value a composer for doing whatever he does extremely well. Yes, Stravinsky and Bartok did radical things, and did them pretty well; but, to my mind, only pretty well. By contrast, when Mozart and Sibelius stretched the boundaries, they did so with exceptional integration into an overall “sound personality.”
Enough of that. Here’s my list.
(1) Brahms. I really wrestled with this, between Brahms and Mozart. I don’t think that Brahms’ symphonies are the best, nor what I have heard of his vocal works. But, over and over again, his chamber works, his solo works, his harmonies are at the very highest level, and all in the service of a feeling that I see as so profound as to be indescribable. I confess that I see the Brahms Horn Trio as among the two or three greatest pieces of music of all time.
(2) Mozart. Yes, it was a little tough between him and Beethoven. Beethoven is more of our era; his melodies appeal to all, and his harmonies connect to ours. But the real problem is that we don’t know how to perform Mozart. There is wonderful feeling there, and – if you understand the conventions – a constant play with the ear’s expectations of how a phrase will finish, which the best performers bring out to give an air of endless, blossoming astonishment. And, yes, his operas show his knack for the single, piercing phrase that makes the whole piece memorable. The main problem is that his later works never use the solo strings to their full potential. Other than that, every inner instrument has its song, and every harmony is fresh and new. We just have to learn how to play him – strings, especially.
(3) Beethoven. I got there eventually. Mostly, I have been turned off for years by his brute-force repetition of octaves and the way he forces you to use the maximum number of instruments as loudly as possible. But in pieces such as the Violin Concerto or Romanzen, not to mention some parts of the chamber music, the performer who really knows how to bring out the soft voice will be extraordinarily rewarded. And, of course, his melodies range from memorable to great.
(4) Bach. In some ways, I feel that Bach should be rated lower than this. He is, above all, obsessed with the beat, and too much of an organ/small-orchestra composer. His vocal compositions are great but don’t truly wed the words to the music in the way Wagner and Mozart do. However, if you spend extraordinary effort as a performer, there are so many superb melodies with such superb harmonies – the Double Violin Concerto, the Concerto for Violin and Oboe, some of the interludes from the cantatas, the Passacaglia and Fugue. Now there was a composer who truly valued the bass voice.
(5) Schubert. Yes, the songs are extraordinary; but the chamber music is truly great. There is a story that a musician had one of the melodies from one of Schubert’s trios put on his tombstone; having heard it, I understand why. It’s odd that the Trout Quintet and the “Death and the Maiden” quartet, his best-known chamber works, are not really his best chamber music. Again, his chamber music must be played just right for its greatness to come through.
(6) Sibelius. No, this choice isn’t really about chamber pieces; Sibelius is an exception to that pattern. He achieves his great effects in orchestral works, with the possible exception of the Violin Concerto. But in those works, the ability to find the right harmony, and to challenge the ear, puts him at the top of the second tier.
(7) Ravel. Yes, I know that everyone likes Debussy, and thinks he’s the best of the French. But I consistently find that Debussy is a one-trick pony as far as harmony goes (and it’s a very good trick!). Ravel, on the other hand, uses instruments like the English horn to achieve harmonies that are superb, and works like Gaspard de la Nuit and the Quartet show his other, incredibly varied gifts.
(8) Wagner. Yes, he forces singers to shout endlessly, and yes, he does go on and on. But he uses consonance, assonance, and alliteration superbly in his lyrics, forming an amazingly integrated, powerful whole. Tristan and Isolde’s leitmotif is enough for any composer.
(9) Stravinsky. The Classical Symphony shows both his strengths and limitations: superb orchestration and the ability to sing, set against an inability to plumb the full depths of a phrase. That’s good enough for the third tier.
(10) Barber. I confess that I don’t know enough about Barber to make this more than tentative. But the quartet that everyone plays now plus the first two movements of the Violin Concerto make him stand out for me.
Ones with occasional gifts that make me regret leaving them out: Monteverdi, of course, because he is the best of the best in madrigals, imho. Schumann, a little for his symphonies but more for his occasional chamber genius, and especially for the Quintet. Copland – although I think his “Western” pieces are overrated, Fanfare and the Shaker hymn are not. Gershwin – untutored as he was, both Rhapsody in Blue and Porgy and Bess deserve better ratings. Puccini – I still feel I am learning to plumb the depths of Turandot, whatever the shortcomings in orchestral depth of his other works. Mendelssohn – for his Piano Trio, one of the truly great works.
Overrated: Mahler – sorry, not a good enough ratio of quality to length. Bartok – what his left hand giveth with feeling, his right hand taketh away with atonal lack of point; and yes, I do know his violin concerto. Debussy – I might have put him at 11, but see above. Shostakovitch – actually, Symphony No. 11 sounds best, despite its being forced to be the most tonal of all, which is damning with faint praise. Dvorak – his gifts are superb, and then he forces the strings into unnaturally high octaves and harmonies.
Don’t know enough – Verdi.
Wonder about (maybe there’s something there I don’t know) – Faure, Poulenc, Grieg, Elgar, Vivaldi, Gade.
Never great, but good enough – Haydn, Berlioz, sometimes Bizet, Ward, some Rossini, Handel (all right, sometimes he’s great; but not in long enough bursts).
And so many more, that I will never discover.
Friday, February 11, 2011
IBM's Watson: Less Than It Seems, More Maybe Than It Will Seem
As we move towards yet another revival of Jeopardy-type game popularity, IBM’s Watson – a computer competing on Jeopardy against human champions – has garnered an enormous amount of attention. The excitement seems to center on the following beliefs:
• Watson is an enormous step forward in computer capabilities for understanding and interacting with humans;
• Straightforward evolution of these capabilities via “learning” will lead to human-level “artificial intelligence” (AI) sooner rather than later;
• In the meantime, Watson can be applied to a wide variety of BI/analytics functions in the typical business to deliver amazing new insights;
• These insights will have their primary value in enhancing existing customer-facing business processes, such as the help desk, and in decreasing fraud.
I think that all of these beliefs are wrong. I see Watson as a minor step forward for existing AI, natural-language processing, and computer-learning software. I see a huge gulf remaining between Watson and most useful “intelligence,” and no indication of software on the horizon that will “evolve” or “learn” its way across that gap. I think that Watson’s advances will be applicable only to a small subset of business data, and will yield only incremental “deep insight.” And finally, I see the immediate applications of Watson cited by IBM – help-desk improvement and fraud detection, for example – as far less valuable than, say, improving eDiscovery or restaurant finding through less-dumb responses to text-oriented searches.
And yet, the potential is there for the smart business to use a successor to Watson effectively. The real potential of Watson is cross-domain correlation: the ability to identify connections between two diverse, well-defined areas of knowledge. The same process that draws the relationship between customer and business ever tighter serves to exclude many potential customers ever more strongly, often because the language becomes different: how easy is it for the makeup expert to understand the computer geek, and vice versa? Customizing interactions by balancing the potential customer’s and the company’s domain of expertise makes acquiring new customers more cost-effective. That’s not computer evolution or a New New Thing; but once the excitement dies down and the disappointment sets in, it’s nothing to sneeze at.
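To make “cross-domain correlation” a bit more concrete, here is a minimal sketch – my own illustration, not anything IBM has described – of the simplest possible connection-finder: build a term-frequency vector for each of two knowledge domains and report both their overall similarity and the shared terms that create the connection. The mini-corpora, function names, and scoring are all hypothetical.

```python
from collections import Counter
from math import sqrt

def term_vector(docs):
    """Bag-of-words frequency vector for one domain's documents."""
    counts = Counter()
    for doc in docs:
        counts.update(word.lower().strip(".,!?") for word in doc.split())
    return counts

def cosine_similarity(a, b):
    """Cosine similarity between two sparse term-frequency vectors."""
    dot = sum(a[t] * b[t] for t in a if t in b)
    norm = sqrt(sum(v * v for v in a.values())) * sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

# Hypothetical mini-corpora for two "well-defined areas of knowledge".
makeup_docs = ["matte foundation blends with warm skin tones",
               "this primer smooths texture before foundation"]
geek_docs = ["the graphics driver smooths texture rendering",
             "warm up the GPU before benchmarking texture fills"]

makeup_vec, geek_vec = term_vector(makeup_docs), term_vector(geek_docs)
print("similarity:", round(cosine_similarity(makeup_vec, geek_vec), 3))
print("shared terms:", sorted(set(makeup_vec) & set(geek_vec)))
```

A real system would use far richer representations than word counts, but the goal is the same: find where two vocabularies genuinely touch.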
Deep Analysis of Deep Analysis
First, let’s pierce through the hype to understand what, from my viewpoint, Watson is doing. It appears that Watson builds on a huge amount of “domain knowledge” amassed in the past at such research centers as GTE Labs, plus the enormous amount of text that the Internet has placed in the public domain – that’s its data. On top of these, it layers well-established natural-language processing, AI (rules-based and learning-based), querying, and analytics capabilities, with its own “special sauce” being to fine-tune these for a Jeopardy-type answer-and-question interaction (since Watson is an analytical successor to IBM’s Deep Blue chess-playing machine, should we call it Deep Analysis?). Note that sometimes Watson must combine two or more different knowledge domains in order to provide its question: “We call the first version of this an abacus (history). What is a calculator (electronics)?”
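To show the shape of that kind of pipeline – candidate generation from a text corpus, evidence scoring, and a response phrased Jeopardy-style as a question – here is a deliberately tiny sketch. It is my own simplification for illustration, not IBM’s DeepQA code; the corpus, the scoring rule, and the function names are all hypothetical.

```python
import re
from collections import defaultdict

# Hypothetical "domain knowledge": a few facts harvested from text.
CORPUS = [
    "The abacus was the first version of the calculator.",
    "The slide rule preceded the electronic calculator.",
    "Deep Blue was a chess-playing computer built by IBM.",
]

def generate_candidates(clue):
    """Collect candidate answers from corpus sentences that share words with the clue."""
    clue_words = set(re.findall(r"[a-z]+", clue.lower()))
    candidates = defaultdict(float)
    for sentence in CORPUS:
        words = re.findall(r"[a-z]+", sentence.lower())
        overlap = clue_words & set(words)
        if overlap:
            for w in words:
                if w not in clue_words and len(w) > 4:   # crude candidate filter
                    candidates[w] += len(overlap)        # evidence score = shared words
    return candidates

def respond(clue):
    """Pick the best-scored candidate and phrase it, Jeopardy-style, as a question."""
    candidates = generate_candidates(clue)
    if not candidates:
        return "No confident response."
    best = max(candidates, key=candidates.get)
    return f"What is a {best}?"

print(respond("We call the first version of this an abacus."))  # -> "What is a calculator?"
```

Watson’s special sauce, as described above, lies in fine-tuning and massively parallelizing far more sophisticated versions of such components – not in any new kind of reasoning.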
Nothing in this design suggests that Watson has made a giant leap in AI (or natural-language processing, or analytics). For 40 years and more, researchers have been building up AI rules, domains, natural-language translators, and learning algorithms – but progress towards passing a true Turing test, in which the human side of the interaction can never tell that a computer is on the other side, has been achingly slow. All that the Jeopardy challenge shows is that the computer can now provide short answers to a particular type of tricky question – using beyond-human amounts of data and processing parallelism.
Nor should we expect this situation to change soon. The key and fundamental insight of AI is that, when faced with a shallow layer of knowledge above a vast sea of ignorance, the most effective learning strategy is to make mistakes and adjust your model accordingly. As a result, brute-force computation without good models doesn’t get you to intelligence, models that attempt to approximate human learning fall far short of reality, and models that try to invent a new way of learning have turned out to be very inefficient. To get as far as it does, Watson draws on 40 years of mistake-driven improvements in all three approaches – which suggests that it will take many years of further improvements, not just letting the present approach “learn” more, before we can seriously talk about human and computer intelligence as apples and apples.
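For readers who want a picture of what “make mistakes and adjust your model” means at its simplest, here is the classic error-driven learner – a perceptron-style update, offered purely as an illustration of that learning strategy, not as anything Watson-specific. The toy data and parameters are hypothetical.

```python
# Error-driven learning in miniature: adjust the model only when it makes a mistake.
def train(examples, epochs=10, lr=1.0):
    weights = [0.0] * len(examples[0][0])
    bias = 0.0
    for _ in range(epochs):
        for features, label in examples:              # label is +1 or -1
            score = sum(w * x for w, x in zip(weights, features)) + bias
            prediction = 1 if score >= 0 else -1
            if prediction != label:                   # mistake: nudge the model toward the truth
                weights = [w + lr * label * x for w, x in zip(weights, features)]
                bias += lr * label
    return weights, bias

# Hypothetical, linearly separable toy data with two features.
data = [([2.0, 1.0], 1), ([1.5, 2.0], 1), ([-1.0, -0.5], -1), ([-2.0, -1.5], -1)]
weights, bias = train(data)
print("learned weights:", weights, "bias:", bias)
```

The catch, as argued above, is that this strategy only works as well as the model being adjusted – which is exactly why decades of slow, mistake-driven refinement were needed to get this far.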
The next point is that Jeopardy is all about text data: not numbers, true, but not video, audio, or graphics (so-called “unstructured” data), either. The amount of text on Web sites is enormous, but it is dwarfed by the amount of other data from our senses inside and outside the business, and in our heads. In fact, even in the “semi-structured data” category to which Watson’s Jeopardy data belongs, other types of information such as e-mails, text messages, and perhaps spreadsheets are now comparable in amount – although Watson could extend to these without much effort. In any case, the name of the game in BI/analytics these days is to tap into not only the text on Facebook and Twitter, but also the information inherent in the videos and pictures provided via Facebook, GPS locators, and cell phones. As a result, Watson is still a ways away from providing good unstructured “context” for analytics – rendering it far less useful for BI/analytics. And bear in mind that analysis of visual information in AI, as evidenced in such areas as robotics, is still in its infancy, used primarily in small doses to direct an individual robot.
As noted above, I see the immediate value of Watson’s capabilities to the large enterprise (although I suppose the cloud can make it available to the SMB as well) as lying more in cross-domain correlation across existing text databases, including archived emails. There, Watson could be used in historical and legal querying to do preliminary context analysis, so that eDiscovery does not take every reference to nuking one’s competitors as a terrorist threat. Ex post facto analysis of help-desk interactions (one example that IBM cites) may improve understanding of what the caller wants, but Watson will likely do nothing for user irritation at language or dialect barriers from offshoring – and it may even encourage the kind of “interaction speedup” that the most recent Sloan Management Review suggests actually loses customers.
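As a toy illustration of that “preliminary context analysis” – my own sketch, not a real eDiscovery product – consider flagging a sensitive term only when the words around it suggest a literal rather than figurative sense. The cue-word lists, window size, and scoring here are all hypothetical.

```python
import re

# Hypothetical cue words for deciding whether "nuking" is business hyperbole or something to escalate.
FIGURATIVE_CUES = {"competitors", "market", "pricing", "quarter", "sales", "launch"}
LITERAL_CUES = {"device", "target", "detonate", "attack", "casualties"}

def classify_mentions(text, term="nuking", window=6):
    """Label each occurrence of `term` by the balance of nearby cue words."""
    words = re.findall(r"[a-z]+", text.lower())
    labels = []
    for i, w in enumerate(words):
        if w == term:
            context = set(words[max(0, i - window): i + window + 1])
            score = len(context & LITERAL_CUES) - len(context & FIGURATIVE_CUES)
            labels.append("escalate for review" if score > 0 else "business-figurative")
    return labels

print(classify_mentions("We are nuking our competitors on pricing this quarter."))
# -> ['business-figurative']
```

Crude as it is, even this level of context-awareness goes beyond the keyword matching that produces the false alarms described above.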
The Bottom Line: Try, Get What You Need
Still, I do believe that a straightforward evolution of Watson’s capabilities can be highly valuable to the really smart, really agile enterprise. Or, as the Rolling Stones put it, “You can’t always get what you want, but if you try, somehow, you get what you need.”
In this case, trying Watson-type analytics means applying them primarily to cross-domain correlation, primarily to Internet data outside the enterprise, and primarily in ways that are well integrated with better unstructured-data analytics. Such an engine would sift the sensor-driven Web in real time, detecting trends not just among immediate “customer prospects” but also in the larger global markets that have been “included out” of enterprises’ present market definitions. The enterprise can then use these trends to identify niches whose entry would require the least “translation” between a niche prospect’s knowledge domain and company jargon, and then provide such translation for the initial sale. Gee, maybe even real estate contracts might become more understandable … nah.
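Here is a minimal sketch of that niche-selection step – purely illustrative, with hypothetical vocabularies – ranking candidate niches by how little “translation” they would need, measured as the Jaccard overlap between each niche’s vocabulary and the company’s jargon.

```python
def jaccard(a, b):
    """Vocabulary overlap: 1.0 means no translation needed, 0.0 means a full glossary."""
    return len(a & b) / len(a | b) if a | b else 0.0

# Hypothetical vocabularies: the company's jargon vs. the terms each niche actually uses.
company_jargon = {"throughput", "latency", "provisioning", "failover", "sla"}
niches = {
    "managed-hosting resellers": {"latency", "failover", "sla", "uptime", "billing"},
    "indie game studios": {"latency", "framerate", "shaders", "playtesting"},
    "boutique law firms": {"billing", "discovery", "retainers", "compliance"},
}

# Rank niches from least to most "translation" required.
for name, vocab in sorted(niches.items(), key=lambda kv: jaccard(company_jargon, kv[1]), reverse=True):
    print(f"{name}: overlap = {jaccard(company_jargon, vocab):.2f}")
```

In practice the vocabularies would be mined from the Web data described above rather than typed in by hand, but the ranking idea is the same.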
More importantly, such an application would reinforce a company’s efforts to become more outward-facing and (a complementary concept) more truly agile. It abets a restructuring of company processes and culture to become more focused on change and on reactive and proactive evolution of solutions in interaction with customers and markets.
What I am suggesting to IT buyers is that they not rush to acquire Watson as the latest fad. Rather, they should determine at what point IBM or similar analytics products intersect with a long-term company strategy of Web-focused unstructured-data analytics, and then integrate cross-domain correlation into the overall analytics architecture. Try to implement such a strategy. And find, somehow, that you get from Watson-type technology not what you wanted, but what you need.