The Artificial Intelligence vs. Intelligence Augmentation Debate

Fri, Jan 15, 2016 - 9:52am

A year ago, Tesla [NASDAQ: TSLA] CEO Elon Musk and Cambridge physicist Stephen Hawking created a stir as prominent signatories of an open letter warning about the dangers of artificial intelligence (AI) and the need for concerted efforts in the research community to guide AI towards socially beneficial outcomes. In an interview, Musk went so far as to characterize AI as “summoning the demon.”

AI has often figured in popular culture and science fiction as a dangerous, or even apocalyptic, power. (Think of HAL, the homicidal computer in Stanley Kubrick’s 2001: A Space Odyssey, or “AM,” the sadistic torturer of the hapless band of survivors in Harlan Ellison’s much-anthologized tale “I Have No Mouth, and I Must Scream.”) However, public icons of science and technology such as Musk and Hawking have usually played the role of cheerleaders for artificial intelligence, not Cassandras warning of technological catastrophe.

The open letter—signed not just by Musk and Hawking, but by hundreds of professionals in the field—called for sensible caution and for study of the social, economic, political, and even military dangers that AI might present.

[Image: “I Can Tell You’re Very Upset About This, Elon” (Source: Wikipedia)]

The worry expressed by these tech and science heavyweights was unusual, but the history of the field has been marked by cycles of extreme optimism and relative pessimism since its origins in the mid-1950s. Government and business investment in AI has tended to follow that cycle. When interest in AI goes bust, typically after a period in which early successes fed irrational exuberance about future prospects, funding for research and development tends to flow into another quarter: not artificial intelligence, but intelligence augmentation.

These two fields, which sound so closely related, actually represent an old split in the computer science research community. That split is key to understanding the promise and direction of artificial intelligence today, and, we think, to identifying the most powerful directions of research and development. The distinction is critical, and below we’ll discuss why, and what it means for investors. (Our apologies to readers from computer science disciplines who may find this brief summary oversimplified.)

Artificial Intelligence and Intelligence Augmentation: Very Different Approaches Yield Very Different Results

“Artificial intelligence” is the idea of a computer system that reproduces human cognition, and can therefore function autonomously and effectively in a given domain. An AI system demonstrates a kind of intentionality—it initiates action in its environment and pursues goals.

“Intelligence augmentation,” on the other hand, is the idea of a computer system that supplements and supports human thinking, analysis, and planning, leaving the intentionality of a human actor at the heart of the human-computer interaction. Because intelligence augmentation focuses on the interaction of humans and computers rather than on computers alone, it is closely identified with the field of human-computer interaction, and we’ll refer to this school of thought by that field’s abbreviation, “HCI.”

If an AI system were pictured as an autonomously functioning robot, an HCI-inspired system might be more like the powered exoskeletons that will someday enable paraplegics to walk, or soldiers to have superhuman strength or endurance on the battlefield. AI aims to work alone; HCI aims to support and strengthen human actors.

What Does “Intelligence” Mean?

The research program and goals of AI were originally set out by mathematicians and engineers, so it’s not surprising that they started from a very particular definition of “intelligence”: something that could be mathematically modelled. In their view, the brain is simply a machine for processing information, and the problem of modelling human cognition is just a problem of creating a rich enough symbol system and a good enough set of rules for manipulating those symbols. (They also throw in some random elements and some rules of thumb to try to make the system more flexible.)
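
To make that classical picture concrete, here is a minimal, purely illustrative sketch of a “symbols plus rules” system in Python: a handful of invented facts and production rules, with the machine deriving new symbols by mechanically applying the rules. This is our own toy example, not a reconstruction of any historical AI system.

```python
# A toy "physical symbol system": knowledge is a set of symbols, and
# "thinking" is modeled as mechanically applying explicit rules that
# manipulate them. The facts and rules below are invented for this sketch.

facts = {"socrates_is_human"}

# Each rule: if all premise symbols are present, add the conclusion symbol.
rules = [
    ({"socrates_is_human"}, "socrates_is_mortal"),
    ({"socrates_is_mortal"}, "socrates_will_die"),
]


def forward_chain(initial_facts, rules):
    """Keep applying rules until no new symbols can be derived."""
    derived = set(initial_facts)
    changed = True
    while changed:
        changed = False
        for premises, conclusion in rules:
            if premises <= derived and conclusion not in derived:
                derived.add(conclusion)
                changed = True
    return derived


print(sorted(forward_chain(facts, rules)))
# ['socrates_is_human', 'socrates_is_mortal', 'socrates_will_die']
```

Everything the system “knows” is explicit, and every step it takes is an application of a stated rule—exactly the conception of intelligence that the HCI camp came to question.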

The HCI crowd, however, thought differently. Maybe some aspects of human thinking are machine-like, they thought; but looking carefully, they found many indications that human intelligence doesn’t proceed in the methodical, brute-force way that a computer works through problems. They doubted that human thinking could really be boiled down to explicit algorithms, or that critical powers of human thought such as intuition, interpretation of context, or tolerance of ambiguity, actually functioned in an algorithmic way. They also saw many aspects of human intelligence that derive from our non-rational bodily and perceptual experience. They became increasingly suspicious that AI researchers were not modelling their computers on the human mind, but that they were modelling the human mind on their computers.

Parting Ways at Stanford

From the 1940s, Stanford University was one of the most important research hubs in the development of modern computer science, and of course it remains so today. The campus hosted two institutes: the Stanford Research Institute, which separated from the University in 1970 as an independent nonprofit and was later renamed SRI International; and the Stanford Human-Computer Interaction Group, founded by a professor named Terry Winograd, who joined the faculty in the early 1970s. His name is not well known to the public, but two of his students’ names may be more familiar: Larry Page and Sergey Brin, the co-founders of Alphabet, Inc. [NASDAQ: GOOG]. (Winograd served as Page’s advisor during his PhD studies.) The “Google guys” thus came from a research lineage that was oriented not to the AI goal of modelling human cognition exhaustively and mathematically, but to the more humble HCI goal of using computers to do the brute-force work they’re best at, while giving pride of place to the humans whose intentions and desires direct what the computers do.

Even more than many of the great architects of modern computer science, Winograd is a polymath. He began his career as a partisan of old-school AI. In the late 1960s, though, he read the criticisms of AI that had been leveled by Hubert Dreyfus, a Berkeley professor who wrote a pessimistic assessment of the field for the RAND Corporation in 1965. Dreyfus’ criticisms were largely inspired by 20th-century philosophy: names such as Heidegger, Merleau-Ponty, Wittgenstein, and Gadamer, hardly the typical fare for a computer scientist. (We don’t think it’s just chance that such wide-ranging trans-disciplinary expertise has led to such powerful results.)

Winograd found those criticisms compelling, and he became equally fluent in the languages of computer science and Continental philosophy. AI was in the midst of one of its periodic down-cycles, since the glowing, extravagant predictions of the early 1960s had met with disappointment. Winograd changed his research direction, with his HCI Group leaving the grand goal of modelling human cognition to others on campus. It turned out that the research trajectory he initiated resulted in perhaps the most powerful “augmented intelligence” advance in the history of technology—GOOG’s premier product, its search service.

The Proof of the Pudding: AI and HCI Results

The approach of Winograd’s HCI Group was perhaps the strongest influence on Page and Brin, and GOOG’s search epitomizes that approach. AI aimed at reproducing human thinking in algorithmic form, a goal that has proved elusive to this day and has prompted continuing cycles of euphoria and despair in the field. GOOG, by contrast, has in essence turned the World Wide Web into a massive mechanical encyclopedia.
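
To illustrate the division of labor that HCI has in mind, here is a deliberately tiny, hypothetical sketch of keyword search over an inverted index: the machine does the brute-force bookkeeping of indexing and lookup, while the human supplies the intent in the form of a query. The documents and function names are invented for illustration, and this bears no resemblance to GOOG’s actual ranking systems.

```python
# A toy sketch of the HCI division of labor in search: the machine does the
# brute-force bookkeeping (indexing and lookup), the human supplies the
# intent (the query). The documents below are invented; this is nothing
# like GOOG's actual ranking pipeline.
from collections import defaultdict

documents = {
    "doc1": "artificial intelligence aims at autonomous systems",
    "doc2": "intelligence augmentation keeps the human at the center",
    "doc3": "search engines index the web exhaustively",
}


def build_index(docs):
    """Map each word to the set of documents containing it (an inverted index)."""
    index = defaultdict(set)
    for doc_id, text in docs.items():
        for word in text.lower().split():
            index[word].add(doc_id)
    return index


def search(index, query):
    """Return the documents that contain every word in the query."""
    words = query.lower().split()
    if not words:
        return set()
    results = set(index.get(words[0], set()))
    for word in words[1:]:
        results &= index.get(word, set())
    return results


index = build_index(documents)
print(search(index, "intelligence"))    # {'doc1', 'doc2'}
print(search(index, "human center"))    # {'doc2'}
```

The index-building and lookup are pure mechanical grunt-work; deciding what to ask, and what the results mean, remains entirely with the human user.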

Winograd believes that since human intelligence is not purely algorithmic, no algorithm will ever be able to reproduce it. Thus, he thinks that we will never have “thinking machines”—what we will have, will be more and more powerful language machines.

According to Winograd, computers will never be able to do the kind of human thinking that is non-algorithmic—the thinking that enables us to interpret vague statements, to size up a situation intuitively on the basis of input that never becomes explicit, or to select relevant sensory phenomena immediately, without exhaustively analyzing all the inputs. But they are exceptionally good at mental “grunt-work”: brute-force, exhaustive examination of a given set of possibilities.

Dreyfus identifies two basic kinds of human problem-solving thought. One is “zeroing in,” in which a human identifies an area of interest on an intuitive basis—without going through all the possibilities systematically, or even explicitly noting them all. The other is “counting out,” where all the possibilities are worked through systematically and evaluated thoroughly. It’s not clear, even now, how computers could ever “zero in”—but they can be much better than humans at “counting out.”

It was that “counting out” processing that enabled IBM’s Deep Blue to defeat world chess champion Garry Kasparov in 1997. While a chess master thinks like a human, in a vague, intuitive, non-algorithmic way, with only 100 or so possibilities explicitly in mind at a given time, Deep Blue, evaluating some 200 million chess positions per second, simply charts out vast trees of possible moves and assesses them according to an inflexible algorithm. It turned out that when processors reached a certain speed, that power was enough, within the formal, algorithmic system of chess, to defeat the best human player. But it did not, Winograd notes, show that the computer was in any way reproducing what was happening inside Kasparov’s mind.
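
As a rough illustration of “counting out,” here is a bare-bones minimax search over an invented toy game tree. Deep Blue’s real search (alpha-beta pruning on custom chess hardware, with an elaborate hand-tuned evaluation function) was vastly more sophisticated; this sketch shows only the brute-force principle of walking every branch and backing up the best achievable score.

```python
# A bare-bones illustration of "counting out": exhaustive minimax search over
# an invented toy game tree. Deep Blue's real search (alpha-beta pruning on
# custom hardware, with a hand-tuned evaluation function) was vastly more
# elaborate; this only shows the brute-force principle.

def minimax(node, maximizing):
    """Walk every branch of the tree and back up the best achievable score."""
    if isinstance(node, (int, float)):  # a leaf: the evaluation of a position
        return node
    scores = [minimax(child, not maximizing) for child in node]
    return max(scores) if maximizing else min(scores)


# Each inner list is the opponent's set of replies to one of our candidate
# moves; the numbers are evaluations of the resulting positions.
toy_game_tree = [
    [3, 12, 8],
    [2, 4, 6],
    [14, 5, 2],
]

# The machine examines every possibility instead of "zeroing in" on a few.
print(minimax(toy_game_tree, maximizing=True))  # 3
```

Nothing in this procedure resembles a grandmaster’s intuitive sense of which handful of moves is worth considering; it simply enumerates and scores everything in its formal universe.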

Lessons to Be Learned… and Implications

The first lesson we take away from all the preceding is that excitement about AI comes and goes in cycles, and that hype has characterized the field from its inception. Early successes in a given endeavor are typically followed by extravagant predictions, which then founder on disappointing results a few years later.

Thus, when we are presented with glowing or alarmist predictions about the future effects of AI, we begin from a position of mild skepticism. The cycle has repeated itself many times over the past 60 years, and we think that the promised arrival of some kind of “technological singularity,” in which humans are rendered obsolete, workers are displaced en masse, or demonic battle-robots rebel against their human creators, is much more the stuff of dream (or nightmare) than of reality.

However, the effects of HCI-inspired design and development have been much more concrete and notable, giving us the internet, the search algorithms that render the internet a vast encyclopedia, the graphical user interface, touchscreens, and soon, as we have often discussed in these pages, the next big leap in interfaces: virtual and augmented reality devices. All of these developments came from the discipline of HCI rather than from AI.

In investing, then, we would look for the most interesting emerging opportunities in information technology not by focusing on AI, but by focusing on HCI. One way to do this is to evaluate projects on the basis of the kinds of problems they are trying to solve. If the AI cycle is “hot,” with a lot of media hype surrounding it, we would be cautious about investing in companies that are aiming at the longstanding and intractable problems of AI.

Another way to find interesting developments in information technology is to follow the academic threads: identifying thought-leaders (such as Terry Winograd) whose work has borne the most fruit in entrepreneurial students (such as Page and Brin), and then following the projects in which those students engage. This involves monitoring the work of research organizations such as the Stanford Human-Computer Interaction Group, and following the careers of the researchers associated with them.

Good News for Us Humans

We’ll conclude with some thoughts about one of the most common AI-related fears—radical disruption of the labor market and widespread unemployment.

The early industrial automation of manufacturing did produce disruption in labor markets, but ultimately, it created far more jobs than it destroyed. Humans were not rendered obsolete—in fact, there turned out to be more need for humans than ever. Many AI doomsayers suggest that this time will be different; we disagree. The work of Terry Winograd and other figures in the field of HCI suggests to us that there are critical aspects of human perception and thought that computers will never be able to model, because they are simply not amenable to algorithmic formulation.

Therefore there will always be a need for humans; computers will never render us obsolete. Within any discipline, profession, or entrepreneurial activity, there will be aspects of the work where brute-force “counting out” processes are most significant, and where the assistance of mechanical thinking will enable humans to work more quickly and effectively. But there will also be aspects of the work where human flexibility, intuition, and non-rational insight are critical—and these capabilities are likely, in the end, to be decisive.

Algorithmic and high-frequency trading, and the rise of robo-advisors, provide appropriate examples. These are in essence AI systems—systems that purport to be able to act autonomously in the world, in this case in financial markets. Those who use them hope that they will be faster and more reliable than humans, whose emotions and limited knowledge supposedly make them inferior to their robotic kin.

However, while such systems may not fail in the ways that human agents fail, when they do fail, the failure can be catastrophic, because their analysis misses things that only a human intelligence would notice. Information scientist Anatol Holt put it like this: “A brilliant chess move while the room is filling with smoke because the house is burning down does not show intelligence.”

As Winograd notes, the fact that such systems lack real human intelligence—the kind that notices that the house is burning down—means that they do not eliminate risk; they just make it systemic and more difficult to see. Eventually, that systemic risk will make itself known—and then the advocates of algorithmic investing might change their minds about the “inferiority” of human managers and the intuitive dimensions of their approach to the markets.

We’re not worried about the displacement of deep human intelligence by machines, and we see that human skill and intuition will continue to rule the roost—even in an era where the tools of intelligence augmentation become more and more powerful.

Investment implications: While artificial intelligence aims at autonomous computer systems, the related discipline of “intelligence augmentation” has a humbler goal: harnessing computers’ best strengths while keeping human agents at center stage. Intelligence augmentation advocates, including one of the academic mentors of Alphabet’s founders, are skeptical that some critical aspects of human intelligence will ever be modeled with computers. Intelligence augmentation has produced some of the most powerful developments in information technology, while AI has tended to follow boom-bust cycles of extravagant optimism followed by disappointment. From an investor’s perspective, we prefer to follow intelligence augmentation researchers and entrepreneurs rather than their AI counterparts. And as thoughtful citizens and investment advisors, we are skeptical that AI will ever replace the deeply human aspects of intelligence that are critical in almost every creative discipline and entrepreneurial activity—including our own.

For more commentary or information on Guild Investment Management, please go to guildinvestment.com.

About the Authors

Chief Investment Officer
guild [at] guildinvestment [dot] com

President
tdanaher [at] guildinvestment [dot] com