Slaying the Centaur

In every technological revolution—the first and second Industrial Revolutions and the dawn of the computer age, to name a few—some of those swept up in the ensuing disruption have dug in their heels to resist changes coming faster than people can adapt to them. The Luddites fought back by destroying the new machinery that they believed was robbing people of jobs and condemning them to lives of poverty. Life went on, however, and the economy didn’t grind to a halt; nor did it stop or even slow when “thinking machines” entered the world in the mid-20th century. Standard economic theory predicts that the introduction of new technology produces some structural unemployment, but otherwise humans simply pick themselves back up and move forward in different fields.

The advent of AI has led to fears that while previous kinds of technology may not have destroyed humans’ capacity to provide meaningful work, this time is different. AI could be the everything machine—better, faster, smarter than humans, with infinite adaptability. The adaptations humans previously made simply won’t apply, and as Nick Bostrom of Oxford puts it, “With a sufficient reduction in the demand for human labor, wages would fall below the human subsistence level. The potential downside for human workers is therefore extreme: not merely wage cuts, demotions, or the need for retraining, but starvation and death.”[1] This view has been dismissed as needlessly pessimistic about humanity’s adaptability and uniqueness. Peter Thiel, co-founder of PayPal and Palantir, has written that “computers are complements for humans, not substitutes.”[2] Other figures who have also had unique experiences working with AI, such as Bridgewater founder Ray Dalio and chess grandmaster Garry Kasparov, agree that humans have little to fear from an economy increasingly reliant on AI.

Dalio, for one, acknowledges in his 2017 book Principles that AI “could lead to our demise,”[3] but only after spending several pages arguing that its capabilities are so narrow that it will never truly replace humans and that human-computer teams will prove superior to computers alone. Kasparov has been hard at work promoting this view through so-called “centaur chess,” in which human-computer teams compete against one another. In a May 2017 interview with economist Tyler Cowen, Kasparov said there was “no doubt” that “a human paired with a set of programs is better than playing against just the single strongest computer program in chess.”

Dalio’s lack of concern stems primarily from his view that AI lacks two key elements: evolution and the ability to determine cause and effect. “It’ll be decades—and maybe never—before the computer can replicate many of the things that the brain can do in terms of imagination, synthesis, and creativity. That’s because the brain comes genetically programmed with millions of years of abilities honed through evolution,”[4] says Dalio regarding the first element. But evolution is not a purposeful or intelligent process—in fact, it’s not even a single process, as Eliezer Yudkowsky points out in his essay “Evolutions Are Stupid (But Work Anyway)”:

Complex adaptations take a very long time to evolve.  First comes allele A, which is advantageous of itself, and requires a thousand generations to fixate in the gene pool.  Only then can another allele B, which depends on A, begin rising to fixation.  A fur coat is not a strong advantage unless the environment has a statistically reliable tendency to throw cold weather at you.  Well, genes form part of the environment of other genes, and if B depends on A, B will not have a strong advantage unless A is reliably present in the genetic environment. […]

[…] Contrast all this to a human programmer, who can design a new complex mechanism with a hundred interdependent parts over the course of a single afternoon… Humans can foresightfully design new parts in anticipation of later designing other new parts; produce coordinated simultaneous changes in interdependent machinery; learn by observing single test cases; zero in on problem spots and think abstractly about how to solve them; and prioritize which tweaks are worth trying, rather than waiting for a cosmic ray strike to produce a good one. By the standards of natural selection, this is simply magic. […]

[…] In some ways, biology still excels over the best human technology: we can’t build a self-replicating system the size of a butterfly. In other ways, human technology leaves biology in the dust.  We got wheels, we got steel, we got guns, we got knives, we got pointy sticks; we got rockets, we got transistors, we got nuclear power plants. With every passing decade, that balance tips further.

Not only is Dalio incorrect about the nature and benefits of evolution; he also neglects the fact that we can replicate evolutionary processes with machine-learning algorithms—and do it better and faster, too. Random variation, natural selection, recombination—all are replicable in a virtual environment, and thanks to the foresight Yudkowsky mentions, humans can intelligently prune dead ends, add complex improvements in a single step, and push the timescale of reproduction down from years and decades to minutes and seconds. So even if Dalio were correct about evolution providing human brains with unique capabilities, the fact that we can use evolution as a tool in our development of technology means there shouldn’t be anything stopping those capabilities from arising in our creations as well.
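
To make this concrete, here is a minimal sketch of such an evolutionary algorithm in Python. It is a toy example that evolves a string toward a target; the target, population size, and mutation rate are all illustrative choices rather than anything from a production system. Each generation applies the same three ingredients: random variation, recombination, and selection.

```python
import random

# Evolving a string toward a target: the target, population size, and
# mutation rate are toy, illustrative choices.
TARGET = "slaying the centaur"
ALPHABET = "abcdefghijklmnopqrstuvwxyz "
POP_SIZE = 200

def fitness(candidate):
    # Count positions that already match the target.
    return sum(a == b for a, b in zip(candidate, TARGET))

def mutate(candidate, rate=0.05):
    # Random variation: each character has a small chance of being replaced.
    return "".join(random.choice(ALPHABET) if random.random() < rate else c
                   for c in candidate)

def crossover(a, b):
    # Recombination: splice two parents at a random point.
    point = random.randrange(len(TARGET))
    return a[:point] + b[point:]

population = ["".join(random.choice(ALPHABET) for _ in range(len(TARGET)))
              for _ in range(POP_SIZE)]

for generation in range(1000):
    # Selection: the fittest half become parents of the next generation.
    population.sort(key=fitness, reverse=True)
    if population[0] == TARGET:
        break
    parents = population[:POP_SIZE // 2]
    population = [mutate(crossover(random.choice(parents), random.choice(parents)))
                  for _ in range(POP_SIZE)]

print(f"reached {population[0]!r} after {generation} generations")
```

The point is not the toy task but the turnaround: variation, recombination, and selection that take biology millennia take the simulation seconds, and the programmer can intervene intelligently at any step.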

Those capabilities could, for instance, include the ability to assess cause and effect, of which Dalio claims machines are less capable. But again Dalio overestimates humans and underestimates machines. Humans are notoriously bad at determining cause and effect, particularly when it comes to false positives. Daniel Kahneman, who popularized the now-dominant two-system model of cognition, found that System 1, our dominant method of making decisions, “automatically and effortlessly identifies causal connections between events, sometimes even when the connection is spurious…it suppresses ambiguity and spontaneously constructs stories that are as coherent as possible. Unless the message is immediately negated, the associations that it evokes will spread as if the message were true.”[5] Furthermore, there is simply no reason to say that computers can’t be programmed to take cause and effect into account—or that they can’t learn it themselves.

It’s common to throw out the platitude “correlation does not equal causation” and, by extension, to insinuate that since computers only measure correlation, they’re not really determining causation. But causation cannot be established without correlation, and the way to weed out false hypotheses (and assign greater validity to the true one) is to find evidence that is negatively correlated with those hypotheses. Bayesian inference, one of the foundational tools in modern machine learning, is a method of assigning a probability to a hypothesis (call it H) based on how likely we are to see some piece of evidence (call it E) given that the hypothesis is true. If you only include that one type of evidence, it’s correct that you will have only a crude, one-dimensional understanding of the relationship between E and H. But just as human scientists can sharpen their theories by introducing other variables to test, so can machines—and in practice they almost always do. Virtually any major machine-learning algorithm in use today works with large datasets containing countless variables, allowing it to test and rule out many different causal relationships. In many cases, this has made machines superior to humans at judging the likelihood that some claim is true, as they are able to digest much larger sets of possibilities.
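
As a deliberately simplified sketch, the update rule at the heart of Bayesian inference is just arithmetic. The probabilities below are made up purely for illustration: confirming evidence raises the probability of H, and evidence negatively correlated with H drives it back down.

```python
# Bayes' rule as code: P(H|E) = P(E|H) * P(H) / P(E). All numbers below
# are made up for illustration.

def bayes_update(prior, p_e_given_h, p_e_given_not_h):
    # Total probability of seeing the evidence at all:
    # P(E) = P(E|H) * P(H) + P(E|~H) * P(~H)
    p_e = p_e_given_h * prior + p_e_given_not_h * (1 - prior)
    return p_e_given_h * prior / p_e

# Evidence that is more likely if H is true raises our credence in H...
p = bayes_update(prior=0.10, p_e_given_h=0.80, p_e_given_not_h=0.20)
print(round(p, 3))  # 0.308

# ...and evidence negatively correlated with H pushes it back down.
p = bayes_update(prior=p, p_e_given_h=0.10, p_e_given_not_h=0.60)
print(round(p, 3))  # 0.069
```

Stack enough pieces of evidence across enough variables, and this simple loop becomes the multi-variable hypothesis testing described above.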

Kasparov has echoed Dalio’s claims about how intelligent computers can really be. In his interview with Cowen, Kasparov also said that the Deep Blue machine that beat him in 1997 was “anything but intelligent” and simply brute-forced its way to victory. But this view of artificial intelligence is astonishingly outdated. The cutting edge of AI research today is deep learning, in which humans rarely have direct input into the decision-making process the program uses. Rather, the program passes information through a network of nodes that work loosely like neurons, and the weights connecting those nodes are adjusted based on how accurate previous iterations of the network have been. The human, aside from setting up the initial instructions, has very little say in how exactly the network operates, and it is becoming increasingly common for algorithms to operate as black boxes.
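
Here is a minimal sketch of that weight-adjustment loop in Python with NumPy. The architecture, learning rate, and toy XOR task are illustrative choices, not any particular production system; the point is that the human writes the training procedure, but the weights that end up doing the deciding are shaped entirely by the network’s own errors.

```python
import numpy as np

rng = np.random.default_rng(0)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

# Weights and biases start random; everything below reshapes them from error alone.
W1, b1 = rng.normal(size=(2, 4)), np.zeros((1, 4))
W2, b2 = rng.normal(size=(4, 1)), np.zeros((1, 1))

def sigmoid(z):
    return 1 / (1 + np.exp(-z))

for _ in range(20000):
    # Forward pass: the network's current "decision" for every input.
    h = sigmoid(X @ W1 + b1)
    out = sigmoid(h @ W2 + b2)
    # Backward pass: each weight shifts in proportion to its share of the error.
    d_out = (out - y) * out * (1 - out)
    d_h = (d_out @ W2.T) * h * (1 - h)
    W2 -= 0.5 * h.T @ d_out
    b2 -= 0.5 * d_out.sum(axis=0, keepdims=True)
    W1 -= 0.5 * X.T @ d_h
    b1 -= 0.5 * d_h.sum(axis=0, keepdims=True)

print(out.round(2))  # converges toward [[0], [1], [1], [0]] with no hand-coded rules
```

Scaled up to millions of weights arranged in dozens of layers, this same procedure produces the opacity described next: no individual weight means anything a human can read.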

“The workings of any machine-learning technology are inherently more opaque, even to computer scientists, than a hand-coded system,” Will Knight wrote in the MIT Technology Review. “This is not to say that all future AI techniques will be equally unknowable. But by its nature, deep learning is a particularly dark black box. You can’t just look inside a deep neural network to see how it works. A network’s reasoning is embedded in the behavior of thousands of simulated neurons, arranged into dozens or even hundreds of intricately interconnected layers.” Knight describes a medical program called Deep Patient that has proven incredibly successful at diagnosing patients, but “offers no clue as to how it does this,” and Deep Patient is far from the only example. It and other modern neural networks are nothing like what Kasparov describes; they can perform tasks like medical diagnosis that are far more open-ended than chess, adjust their internal structure without human intervention, and reach conclusions humans can’t reach, based on reasoning humans can’t understand.

Peter Thiel, whose company Palantir works on digesting massive amounts of data for business and national security applications (and who certainly can’t be dismissed as ignorant), handles this objection by making a distinction between planning (and other sorts of allegedly human-specific cognition) and “mere” data processing:

People have intentionality—we form plans and make decisions in complicated situations. We’re less good at making sense of enormous amounts of data. Computers are exactly the opposite: they excel at efficient data processing, but they struggle to make basic judgments that would be simple for any human.[6]

Notice, however, Thiel’s use of the present tense to describe AI, and think about how quickly Kasparov’s dismissals of AI became outdated. While Thiel’s view has not yet been left in the dust like Kasparov’s, it relies on stagnation nonetheless, and with the development of AlphaZero at DeepMind, it lies on the razor’s edge of obsolescence.

Consider the 2017 paper “Mastering the game of Go without human knowledge,” published by DeepMind, the lab responsible for the programs AlphaGo and AlphaGo Zero, as well as AlphaZero:

Recently, AlphaGo became the first program to defeat a world champion in the game of Go. The tree search in AlphaGo evaluated positions and selected moves using deep neural networks. These neural networks were trained by supervised learning from human expert moves, and by reinforcement learning from self-play. Here we introduce an algorithm based solely on reinforcement learning, without human data, guidance or domain knowledge beyond game rules. AlphaGo becomes its own teacher: [emphasis mine] a neural network is trained to predict AlphaGo’s own move selections and also the winner of AlphaGo’s games…Starting tabula rasa, our new program AlphaGo Zero achieved superhuman performance, winning 100–0 against the previously published, champion-defeating AlphaGo.

Read: AlphaGo, with minimal human input aside from the rules, defeated the human world Go champion. AlphaGo Zero, with no human input aside from the rules, defeated AlphaGo. The gap between these two accomplishments was only two years, from October 2015 to October 2017. Humans beat horses. Then centaurs beat horses. Now horses beat both humans and centaurs. In the realm of games, at least, centaurs reached obsolescence in a timeframe that is short by the scale of a human life and vanishingly small by the scale of civilizational progress.

Two months after the defeat of AlphaGo, the DeepMind team posted a paper on the preprint site arXiv titled “Mastering Chess and Shogi by Self-Play with a General Reinforcement Learning Algorithm.” Their announcement? A new algorithm derived from AlphaGo Zero and simply named AlphaZero had improved on its predecessor in two astonishing ways: first, it was able to beat AlphaGo Zero after just 24 hours of training (i.e., playing against itself to refine its network’s weights), and second, its algorithm generalized to chess and shogi, not just Go, beating the state-of-the-art Stockfish (chess) and Elmo (shogi) programs in even less time than it took to beat AlphaGo Zero.

In chess, AlphaZero outperformed Stockfish after just 4 hours (300k steps); in shogi, AlphaZero outperformed Elmo after less than 2 hours (110k steps); and in Go, AlphaZero outperformed AlphaGo Lee after 8 hours (165k steps).[7]
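
The self-play loop behind these results can be illustrated with a toy version. This is nothing like AlphaZero’s actual architecture, which pairs a deep network with Monte Carlo tree search; it is just a tabular value function for tic-tac-toe whose only teacher is the outcome of its own games, with the board encoding, learning rate, and exploration rate all being illustrative choices.

```python
import random

# values[state] estimates the final outcome for the player who just moved
# into that state; it is learned purely from self-play results.
values = {}

LINES = [(0, 1, 2), (3, 4, 5), (6, 7, 8),
         (0, 3, 6), (1, 4, 7), (2, 5, 8),
         (0, 4, 8), (2, 4, 6)]

def winner(board):
    for a, b, c in LINES:
        if board[a] != " " and board[a] == board[b] == board[c]:
            return board[a]
    return None

def self_play_game(epsilon=0.1):
    board, player, history = [" "] * 9, "X", []
    while " " in board and winner(board) is None:
        options = [i for i, c in enumerate(board) if c == " "]

        def score(m):
            # Rate a move by the learned value of the position it produces.
            nxt = board[:]
            nxt[m] = player
            return values.get(tuple(nxt), 0.0)

        # Mostly greedy, with occasional random exploration.
        move = random.choice(options) if random.random() < epsilon \
            else max(options, key=score)
        board[move] = player
        history.append((tuple(board), player))
        player = "O" if player == "X" else "X"

    # The game's outcome is the only feedback: nudge every visited state
    # toward +1 (the mover went on to win), -1 (they lost), or 0 (a draw).
    w = winner(board)
    for state, mover in history:
        target = 0.0 if w is None else (1.0 if mover == w else -1.0)
        v = values.get(state, 0.0)
        values[state] = v + 0.1 * (target - v)

for _ in range(50_000):
    self_play_game()
print(f"learned value estimates for {len(values)} positions")
```

The table ends up playing competent tic-tac-toe without ever seeing a human game; AlphaZero’s leap was doing the analogous thing in games whose state spaces make a lookup table impossible, substituting a deep network and tree search for the table.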

AlphaZero deals a devastating blow to the idea that machines must be confined to the role of narrow, rigid subordinate to broad, flexible humans. Even if the algorithm had not generalized beyond Go, AlphaZero’s success at that game alone would have gone a long way toward establishing that computers can handle more ambiguity than previously given credit for. In contrast to chess, where which player is in the lead can be estimated by simple heuristics like pawn structure and piece count, it can be “maddeningly difficult to determine who is ahead” in Go, as George Johnson put it. Cornell University’s Fellows, Malitsky, and Wojtaszczyk explain in more technical terms that “The large branching factor in the game makes traditional adversarial search intractable while the complex interaction of stones makes it difficult to assign a reliable evaluation function.” The difficulty of performing even such a basic task as determining who is winning a particular game of Go would be a significant roadblock for a machine trying to win the game itself—if the machine were dumb in the ways Thiel describes. But now a machine can beat another machine that beat another machine that beat the greatest human player in the world, demonstrating that it can, in fact, make sense of a complex and ambiguous board with only minimal instruction and a few hours of self-training.
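
To give a rough sense of the scale behind that branching factor, here is a quick back-of-the-envelope comparison using commonly cited average branching factors and game lengths; the figures are illustrative rather than exact.

```python
import math

# Approximate game-tree sizes: (average legal moves per position) raised to
# the (typical game length in plies). Rough, commonly cited averages,
# used here only for scale.
chess_log10 = 80 * math.log10(35)    # chess: ~35 moves, ~80 plies
go_log10 = 150 * math.log10(250)     # Go: ~250 moves, ~150 plies

print(f"chess game tree: ~10^{chess_log10:.0f}")  # ~10^124
print(f"go game tree:    ~10^{go_log10:.0f}")     # ~10^360
```

Chess is already far beyond brute force; Go sits hundreds of orders of magnitude past that.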

Prediction, undirected learning, adaptability—all domains swallowed by this adolescent creation. If this is mere data processing, data processing can do a lot more than we’ve given it credit for.

Not that this even matters when it comes to Thiel’s ultimate defense—that computers will be “complements for humans, not substitutes.” Thiel asserts: “the stark differences between man and machine mean that gains from working with computers are much higher than gains from trade with other people. We don’t trade with computers any more than we trade with livestock or lamps. And that’s the point: computers are tools, not rivals.” Thiel and his camp believe that the human economy is transitioning to a centaur economy, to put it in Kasparov’s terms.

But people are tools, too, in a contextual sense: they trade their skill, time, and effort for money in a way that is mutually productive. The entire fear is that those in a position to control automation and reap its rewards—quite possibly through no hard work or ingenuity of their own, just the luck of birth—will no longer need anything from the billions who previously had something of value to provide, and Thiel swings and misses on this softball. Naturally, from the perspective of those in a position to control machines, the machines don’t look like replacements, but they sure do to those who aren’t in such a position. AI, if not managed properly, could leave a small group of individuals with a stranglehold on the entire world.

However much humans are crowded out of the equation now, they will be crowded out further and further as time passes. Barring major biomedical breakthroughs, human intelligence is more or less static, while machine intelligence improves by leaps and bounds. And the assumption that humans will always be able to shift to new activities is borne out neither by the historical evidence nor by simple reasoning: it amounts to a blind search for what we know must be a finite resource.

The belief that computers can’t possibly be a meaningful substitute for humans ultimately relies on a hodgepodge of unstated assumptions about humans themselves: that computers made of meat are somehow special, that the next hundred years of our existence will look like the last ten, that there will always be a place set aside for us in the universe. Facing these assumptions and overturning them is not pleasant, but it must be done to navigate an increasingly opaque future.

It may very well be the case that the optimists are correct; no one can tell you with certainty what the future holds. But it’s precisely that uncertainty, especially mixed with vulnerability, that makes caution necessary, for while creative destruction is real, so is uncreative destruction. The answers may be unclear, but to come up with a solution, one must first face the problem as it is, not as one wants it to be. Tyler Cowen, in his essay reacting to the triumph of AlphaZero over man-machine hybrids, put the problem in the bluntest terms possible: “The age of the centaur is over.”

Long live the horse.

 

[1] Bostrom, N. (2016). Superintelligence: Paths, dangers, strategies. Oxford, United Kingdom: Oxford University Press.

[2] Thiel, P., & Masters, B. (2014). Zero to one: Notes on startups, or how to build the future. New York: Crown Business.

[3] Dalio, R. (2017). Principles. New York: Simon and Schuster.

[4] Dalio, R. (2017). Principles. New York: Simon and Schuster.

[5] Kahneman, D. (2013). Thinking, fast and slow. New York: Farrar, Straus and Giroux.

[6] Thiel, P., & Masters, B. (2014). Zero to one: Notes on startups, or how to build the future. New York: Crown Business.

[7] From the paper’s footnotes: “AlphaGo Master and AlphaGo Zero were ultimately trained for 100 times this length of time: we do not reproduce that effort here.”
