
Diary
David Runciman
It’s three weeks before Christmas and Los Angeles is in flames, though you wouldn’t know it
from inside the bowels of the Long Beach Convention and Entertainment Centre, where all is
cool and grey. I am here with eight thousand other attendees of the Neural Information
Processing Systems (Nips) conference – the great annual get-together of people who work in
machine learning. These are the men and women (mainly men, but we’ll come to that) who
are building the artificial systems that may one day, perhaps quite soon, be able to perform
many tasks that have traditionally been thought to require human intelligence. The prevailing
mood of the conference is one of remorselessly practical problem-solving, mixed with
occasional bursts of euphoria at how far machine learning has come in recent years.
Meanwhile, about thirty miles to the north, wildfires have reached the Bel Air district, lapping
at the edges of the UCLA campus and the Getty Museum. Smoke is drifting across the 405
highway, which carries 400,000 vehicles a day, the busiest stretch of road in the United
States. If the people at Nips get their way, it won’t be long before most of these cars drive
themselves through the haze, human cargo safely stowed. For now, though, it’s humans doing
the driving and strung-out firefighters doing battle with the elements. Plenty could still go
wrong, though it will take a while for the news to reach Long Beach.

I’m here to take part in a symposium on different kinds of intelligence, natural and artificial.
Yet this is not simply an AI conference. In fact, one of the reasons for the remarkably rapid
recent progress in machine learning has been the deliberate detachment of algorithmic
problem-solving from hoary questions about what counts as true intelligence. Trying to get
machines to mimic how the human mind works – which has long been the holy grail of the AI
community – turns out to be a distraction. Machines that are capable of learning do not have
to be capable of thinking the way we do. What they require are vast amounts of data, which
they can filter at incredible speed, searching for patterns on the basis of which they make
inferences, and then for further patterns in their own failures to draw the optimal inferences.
In other words they learn from their mistakes, without stopping to wonder what it really
means to learn or to make a mistake. It is amazing how much can be accomplished by
machines that operate like this. It’s as though they were just waiting to be told that they
should try doing it their way, not ours.

The prince of this brave new world – and the closest Nips has to a resident rock star – is
Demis Hassabis, co-founder and CEO of Google’s DeepMind, best known for having taught
an algorithm (AlphaGo) to play the fearsomely demanding game of Go better than any human
being has ever managed.[*] Hassabis is here to talk about his latest breakthrough. AlphaZero
is a machine learning system designed to teach itself games from scratch, without any human
input outside the basic rules. It does this by playing against itself over and over again,
working out its own lessons from its own mistakes, and then correcting for them. When
AlphaZero was let loose on chess, it took just four hours before it could play well enough to
defeat the best rival program, Stockfish, which could already see off any human grandmaster.
Algorithms have been better than all human players at chess since 1997, when IBM’s Deep
Blue beat Garry Kasparov, then world champion, in a six-game match. But twenty years ago
computers had to be trained with a large number of historic games, so as to learn which
human tactics tended to work and which didn't: they were taught openings and endgames and all
the rest. AlphaZero, though, takes an entirely different approach: it isn't given any information
about what humans consider an advantageous position, and simply calculates whether any given
move makes a win more or less likely. When applied to Go – a far more difficult game
– the AlphaZero algorithm, as Hassabis put it, acquired in the space of 72 hours the level of
knowledge that human civilisation had painstakingly accumulated over three thousand years.
Then they left it running. Within weeks it had taken Go to a level that was beyond anything
that had been previously imagined. It did the same for chess, effectively turning it into a
different kind of game. It made moves that made no sense, until much later they emerged as
part of a winning strategy that only AlphaZero could have envisaged. After a month, the
DeepMind team switched it off, not because it had stopped making progress, but because it
wasn’t clear what else there was to gain by letting it gobble up so much energy. It hadn’t just
proved its point. It was now making a point that it alone was able to reckon with.
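
By way of illustration, here is a minimal sketch of the self-play idea described above, applied to the toy game Nim rather than to chess or Go. Everything in it (the game, the value table, the learning rate) is an assumption made for the example; it is not DeepMind's code, and it omits the deep neural networks and tree search that AlphaZero couples to self-play. The principle is the one Hassabis described: the program is given only the rules, plays itself over and over, and corrects its own estimates from its own mistakes.

```python
# A minimal self-play sketch under illustrative assumptions: the game is Nim
# (take 1-3 stones from a pile; whoever takes the last stone wins), not chess,
# and the learner is a simple value table, not AlphaZero's networks and search.
import random

STONES = 21            # starting pile size (arbitrary choice for the example)
MOVES = (1, 2, 3)      # legal moves given by the rules, the only human input

# Estimated probability that the player *to move* wins from a given pile size.
value = {n: 0.5 for n in range(STONES + 1)}
value[0] = 0.0         # empty pile: the player to move has already lost

def pick_move(pile, epsilon=0.1):
    """Mostly greedy: leave the opponent the position rated worst for them."""
    legal = [m for m in MOVES if m <= pile]
    if random.random() < epsilon:        # occasional exploration: deliberate 'mistakes'
        return random.choice(legal)
    return min(legal, key=lambda m: value[pile - m])

for _ in range(20_000):                  # the machine plays itself repeatedly
    pile, visited = STONES, []
    while pile > 0:
        visited.append(pile)             # record each position a mover faced
        pile -= pick_move(pile)
    # The player who made the last move won. Walk back through the game,
    # nudging each visited position toward the outcome for its mover.
    outcome = 1.0                        # result for the final (winning) mover
    for state in reversed(visited):
        value[state] += 0.05 * (outcome - value[state])
        outcome = 1.0 - outcome          # the two identical 'players' alternate

print({n: round(value[n], 2) for n in range(1, 9)})
```

After a few thousand games the table converges on the classical analysis of Nim (piles divisible by four are lost for the player to move) without ever having been shown a human game, a small-scale echo of learning only from the rules and one's own errors.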

Hassabis presented these results hot off the press to a packed auditorium at Nips. His sense
of wonder at what he had accomplished was infectious. Hassabis is himself an exceptionally
strong chess player – at 13 he was the second highest-rated player of his age in the world. He
is also a five-time champion at the Mind Sports Olympiad. So when he showed some of the
bewildering but ultimately successful moves made by AlphaZero playing chess – withdrawing
its queen to a corner of the board during an intense middle game, sacrificing a rook for no
apparent gain as though it were a worthless encumbrance – and described them as ‘chess
from another dimension’, it was hard to disagree. Because he had built a machine that was
free from human input, it didn’t know what it wasn’t allowed to do. No one had ever told it
that a rook is worth five pawns and therefore inherently valuable. We have to assume it didn’t
even know the meaning of sacrifice, which is why it found making sacrifices so easy. All it saw
were threats and opportunities. As Hassabis put it, for AlphaZero ‘everything is contextual.’
And anything else is just prejudice.

The notion of a machine that rapidly surpasses human capabilities, and then surpasses the
capabilities of other machines, and then keeps going, is vertiginous. Hearing its maker
express his own astonishment at its prowess – Hassabis said he had no idea that it would turn
out to be so good, so fast – was a little otherworldly, like a fleeting echo of the moment of
divine creation. But the feeling soon passed. Really, is AlphaZero anything more than a toy?
Hassabis’s critics, of whom there are plenty, even at Nips, point out that chess is a perfect
information game, where everything you need to know is laid out on the board in front of you.
Nothing is hidden and nothing is ambiguous. The ability of machines to learn in these
circumstances is contingent on the fact that they can’t misread what has already happened;
they can only misjudge what may happen next and then try not to make that mistake again. A
computer won’t confuse a rook for a knight. But outside the world of perfect information
games even the smartest machines regularly misidentify objects, misinterpret language and
misunderstand nuance. They struggle with information whose source is unclear and whose
character is open to question. As one of the speakers who followed Hassabis pointed out, deep
learning machines are still capable of mistaking turtles for rifles (don’t ask me how). So no
one should feel confident that this technology is ready to be weaponised. You might want
AlphaZero fighting your chess battles for you. You wouldn’t want it fighting your wars.

That said, DeepMind’s summary mission statement is not lacking in ambition:

1. Solve intelligence.

2. Use that to solve everything else.

What solving intelligence means here is instrumentalising it: turning it into a tool that can be
used by machines. Hassabis put this slide up at the start of his presentation and then he
talked about chess and Go for forty minutes. Only at the end did he come back to the question
of the wider social implications of his work, and express his hope that this technology
would soon be delivering equally impressive results in healthcare, science and energy. He did
not provide details. Is AlphaZero really a step on the road to solving intelligence? It all
depends on what we mean by intelligence, which is the question that machine learning
cannot defer for ever. When he showed us some of the astonishing chess moves made by his
machine, Hassabis laughed with pleasure at the hilarity of it, as though it were a kind of joke.
His audience laughed too because it was funny to see something so outlandish. But the
computer wasn’t joking. We were only laughing at, and with, ourselves. Alpha-Zero may have
overcome thousands of years of human civilisation in a few days, but those same thousands of
years of civilisation have taught us to register in an instant forms of communication that no
machine is close to being able to comprehend. Chess is a problem to be solved; language,
and the open-ended intelligence it expresses, is not. Language is not simply a problem-solving
mechanism. It is what enables us to model the world around us; it allows us to decide
which problems are the ones worth solving. These are forms of intelligence that machines
have yet to master.

Before Hassabis spoke, the child psychologist Alison Gopnik took to the stage to remind the
audience that many human beings aren’t so hot at this kind of intelligence either. That
included all of the humans in the room, notwithstanding their unusually high IQs. The most
creative modellers of new information environments are children. The older we get, the less
flexible our intelligence becomes. When we are very young, we are much more likely to be
open to new ways of thinking and to countenance unlikely hypotheses. That doesn’t make
children any good at getting things done, especially if it involves a lot of planning. Frankly,
kids are terrible at concerted action. But they are excellent at imaginative reasoning. In
Gopnik’s memorable phrase, children ‘think like theorists’, whereas adults think like
decision-makers. Human development turns out to be a trade-off between plasticity and
efficiency: as we become more efficient at making decisions, we gradually lose our capacity to
absorb the things that don’t fit with the world we know. We become more intelligent with age,
and we become less intelligent as well.

In one sense, children are the AlphaZeros of the natural world: they start a lot of their
thinking from scratch and try to work things out for themselves. Context trumps prejudice,
until we learn better. But in another sense, children are nothing like AlphaZero, because they
are doing their learning in an open information environment, packed with human input and
ambiguity. They are also learning how to play, which is something else kids are exceptionally
good at. Despite being a game, chess is not play. It is a practical business. Some of
AlphaZero’s moves may look playful, but only humans would be capable of seeing them in
those terms. To the machine, everything is about getting the job done. The danger of
conceiving intelligence in these terms is that it makes everything else about getting the job
done too. What we gain in efficiency we lose in plasticity.

Undoubtedly, there are plenty of jobs that could be done better than humans manage at
present. That is the great promise of machine learning. Healthcare, energy and transportation
systems are all riddled with inefficiencies that these machines could help to eradicate. Until I
arrived in Long Beach, I had never used Uber before. After a few days I couldn’t imagine
living without the service it provides, in a city where I had to walk half a mile just to cross the
freeway outside my motel. There was something almost magical about the time, money and
peace of mind it saved me. I know Uber has a very mixed reputation and some people told me
I should be using Lyft, which treats its drivers better. Maybe I should have thought harder
about that. But my Uber experience was so spectacularly efficient that I didn’t think about it
at all.

Is Uber intelligent? Ironically, that is what I was at Nips to talk about. Not Uber specifically,
but corporations in general were my remit at the symposium. Just as adult human beings are
not the only model for natural intelligence – along with children, we heard about the
intelligence of plants and animals – computers are not the only model for intelligence of the
artificial kind. Corporations are another form of artificial thinking machine, in that they are
designed to be capable of taking decisions for themselves. Information goes in and decisions
come out that cannot be reduced to the input of individual human beings. The corporation
speaks and acts for itself. Many of the fears that people now have about the coming age of
intelligent robots are the same ones they have had about corporations for hundreds of years.
If these artificial creatures are taking decisions for us, how can we hold them to account for
what they do? In the words of the 18th-century jurist Edward Thurlow, ‘corporations have
neither bodies to be punished nor souls to be condemned; they may therefore do as they like.’
We have always been fearful of mechanisms that ape the mechanical side of human
intelligence without the natural side. We fear that they lack a conscience. They can think for
themselves, but they don’t really understand what it is that they are doing.

When corporations misbehave, we look for human beings to blame. If we don’t like Uber, we
try to pin it on Travis Kalanick and punish him where we can, which turns out to be harder
than it looks (as I write he is poised to sell $1.4 billion of shares in the company). The
problem is that Kalanick is not Uber – he goes, but the business goes on. Of course,
corporations are not inherently bad, any more than any other kind of machine. But they are
not neutral either. They provide many benefits for their human creators, which is why we
can’t live without them. Corporations have shown themselves to be remarkable devices for
pooling resources, finding efficiencies, driving innovation and increasing value. But there is
still something ghostly about corporate identity. We inhabit a world dominated by these
entities and yet we have little understanding of how they think. We don’t really know whether
they think at all. Three hundred years of worrying about corporate responsibility has
produced an enormous patchwork of laws and regulations trying to define it – and a vast
army of lawyers and accountants trying to redefine it – but there has been little progress in
understanding what goes on inside the black box of the corporate mind. What does Uber
want? We could still be asking that question long after the company has taken over the
world’s transportation systems.

As well as being a showcase for the latest research, Nips is also a job fair, and all the big tech
companies are represented at glossy booths staffed by fresh-faced young recruiters. The most
famous names are present, of course, including Google, Facebook, Amazon and Microsoft. So
too, at even bigger and glossier booths, are the giant Chinese companies, such as Alibaba,
Baidu and Didi (the Chinese Uber). The Chinese have increasingly been driving research in
machine learning, particularly as a tool for targeting consumers. There is also a host of newer
entrants to the market, with wonderfully indeterminate names: Recursion, Quantum Black,
XPRIZE, CrowdFlower, Criteo Research, Voleon Group. Some of these are the offshoots of far
more established players: Quantum Black is the machine learning wing of McKinsey, the
ubiquitous management consulting firm. All seem to be engaged in an unspoken contest to
see who can come up with the most aspirational mission statement. CrowdFlower: ‘The
essential human-in-the-loop AI platform.’ Criteo: ‘Let’s connect more shoppers to the things
they need and love.’ XPRIZE: ‘An innovation engine. A facilitator of exponential change. A
catalyst for the benefit of humanity.’ That one probably wins.

Because many of these enterprises have only been around for a decade or less, they are the
children of the corporate world. It’s tempting to think they might have some of that childlike
curiosity and imaginative capacity that characterises the early stages of human development.
Certainly that’s how they would like to present themselves: all of their iconography is
intended to signal that these are places where out-of-the-box thinking is encouraged and
human intelligence is given free rein. But it’s a false prospectus, because corporations aren’t
really children: they are still just money-making machines. Too often, the story we hear about
the coming of AI focuses exclusively on the interaction between super-smart individuals, like
Hassabis, and super-smart algorithms, like AlphaZero. Yet behind both lie super-powerful
corporate actors, which are neither. One day, perhaps, humans will build the intelligent
robots capable of supplanting us. But for now, the biggest threat to our collective survival
comes from the corporate machines we first started building hundreds of years ago to make
our lives easier and have never really learned how to control. If we do end up manufacturing
killer robots, it won’t be because individual humans made it happen. It will be because the
corporations for which they worked didn’t know how to stop it.

Something else that made the sponsors’ hall feel a little strange was the gender distribution
on display. More than 85 per cent of the attendees at Nips are male. In a crowd of this size,
that still leaves plenty of room for diversity, but the overall impression is one of
uniformity: lots of men, dressed in jeans and T-shirts, talking about cool new stuff in a
language it was hard for an outsider like me to understand. By contrast, a disproportionate
number of the recruiters staffing the corporate booths were women. These companies were
clearly keen to put their best face forward, to indicate to would-be employees that they were
diverse and progressive places to work. On the basis of what was going on elsewhere at the
conference, this was mainly just for show.

Looking for a different perspective, I went in search of an organisation called Women in
Machine Learning (WiML) to ask if I could sit in on their event. I assured them I was an
academic writing for a respectable publication, but they suspected I was just another greedy
hack hoping for lurid tales of the horrors of being a woman in a male-dominated industry.
(After I left Nips I saw this notice posted on the conference website, under the heading
‘Statement on Inappropriate Behaviour’: ‘Nips has a responsibility to provide an inclusive and
welcoming environment for everyone involved in the fields of AI and machine learning.
Unfortunately several events held at – or in conjunction with – the 2017 conference fell short
of these standards. We are determined to do better in 2018 and beyond.’ I didn’t notice any of
this while I was there. Perhaps I should have been looking harder.) The nice people at WiML
welcomed me in anyway. The atmosphere wasn’t so different from the rest of the conference:
practical, serious-minded, problem-oriented. The president of WiML reminded me that her
organisation did not exist to provide a haven for women within machine learning but a
platform for more women to do machine learning. It was a support group and a professional
network. Despite the overall balance of numbers at Nips, it seemed to be doing its job well.
Most of the presentations at WiML were as technically oriented, and for me as daunting, as
the presentations happening outside.

But not all. The opening talk was given by Raia Hadsell, a research scientist on the Deep
Learning team at DeepMind. She didn’t discuss her own work, but rather her route into the
machine learning business, which had been a circuitous one. Her undergraduate degree was
in philosophy and religious studies. At the same time, she had always been a lover of games
and puzzles, and after graduating she took a further degree in maths, before retraining as a
computer scientist. The title of her PhD thesis was ‘Learning Long-Range Vision for Off-Road
Robots’. Now here she was, still open to new ideas. She also talked about her mother, who
had worked as an artist for thirty years before learning in her sixties how to code and
becoming a successful computer game designer. Hadsell’s message was to stay curious, but
also to stay committed. Her talk didn’t have the evangelical fervour of the presentation given
by Hassabis, who is now her boss. In some ways it was a dose of solid good sense that could
have been delivered at any academic or professional conference. But that made it particularly
resonant at this one. It was very human and it was still with me later as I got an Uber back to
my motel through the clear Los Angeles night, trusting that someone, somewhere had put out
the fires.

[*] Paul Taylor wrote about DeepMind and AlphaGo in the LRB of 11 August 2016.

Vol. 40 No. 2 · 25 January 2018 · pages 38-39
