2019 marks the 70th anniversary of the Soviet Union successfully testing its own version of an atomic bomb. It’s also the 70th anniversary of something rather less frightening, but highly disruptive in its own way, and, like nuclear weapons, one we ought still to be engaged with. The ‘300-year-old sum’, still outstanding in 1949, was the question of which numbers of the form 2^p − 1 were prime. This problem had first been tackled, with some inaccuracy, by the mathematician Marin Mersenne in 1644.
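The Manchester machine’s actual routine is not described here, but the modern standard way to settle Mersenne’s question for a given exponent is the Lucas–Lehmer test. A minimal sketch (the exponent list is Mersenne’s 1644 claim, which famously included the mistaken entries 67 and 257):

```python
def lucas_lehmer(p: int) -> bool:
    """Lucas-Lehmer test: is the Mersenne number 2**p - 1 prime?

    Valid for prime exponents p. Start s = 4, iterate
    s -> (s*s - 2) mod M exactly p - 2 times; M is prime iff s ends at 0.
    """
    if p == 2:
        return True  # 2**2 - 1 = 3, the one even-exponent case
    m = 2**p - 1
    s = 4
    for _ in range(p - 2):
        s = (s * s - 2) % m
    return s == 0

# Mersenne's 1644 list of exponents he claimed gave primes:
for p in [2, 3, 5, 7, 13, 17, 19, 31, 67, 127, 257]:
    print(p, lucas_lehmer(p))
```

Running this confirms Mersenne was right about most of his list but wrong about 67 and 257, composites that took centuries of hand calculation to expose.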
The 300-year-old sum itself was of no consequence – what was at stake in 1949 was who, or rather what, had been doing the thinking. Back then, the Manchester University computer team had been building a computing machine. During the summer, the world-changing implications of what they were doing broke out of the lab. Here’s an extract from The Times of 11 June 1949:
THE MECHANICAL BRAIN
ANSWER FOUND TO 300 YEAR-OLD SUM
Experiments which have been in progress in this country and the United States since the end of the war to produce an efficient mechanical “brain” have been successfully completed at Manchester University, where a workable “brain” has been evolved. …The Manchester “mechanical mind” was built by Professor F.C. Williams, of the Department of Electro-Technics, and is now in the hands of two university mathematicians, Professor M.H.A. Newman and Mr. A. [M.] Turing. It has just completed, in a matter of weeks, a problem … which was started in the seventeenth century and is only just being completed by human calculation….
So what was so alarming about this boring arithmetic? Read on…
Mr. Turing said yesterday:
“This is only a foretaste of what is to come, and only the shadow of what is going to be. We have to have some experience with the machine before we really know its capabilities. It may take years before we settle down to the new possibilities, but I do not see why it should not enter any one of the fields normally covered by the human intellect, and eventually compete on equal terms.”
An artificial intelligence scare was out in the open. Over the next few weeks, the pages of The Times were graced with correspondence, on the one hand, explaining how dull the 300-year-old sum actually was, and on the other declaiming with post-Victorian outrage the sanctity of the spirit and the impossibility of inanimate objects having human characteristics. Thus, the seed for the famous Alan Turing ‘imitation game’ paper on thinking machines was sown, and a series of radio broadcasts on the same topic soon followed.
Seventy years on, the discussion has partly disappeared beneath the dust on the capacitors of the old Manchester machine. Sure, there are still contests for bots trying out the ‘Turing Test’. But more importantly, the upsurge in technologies using AI has rekindled debate on the proper boundaries for AI research. Whether machines can think or not is a stale question, and the Turing Test is for hobbyists – but in a revised form, the 1949 question is still live to engage with. Ask your bot to ponder the following:
- We have rather feeble governance rules (ethics, regulation, whatever you want to call it) about how AIs learn. In the process a judgment is being made (how to decide whether ‘that’s a cat’ or ‘that’s a terrorist’), a judgment ultimately based on flawed, selective, human ideas about how to form judgments. How do we judge good judging? Are market forces the right approach? Is that unsafe?
- We are creating new forms of trust system. In past centuries we relied on the state, or state-supervised bodies, to look after our money, our title to land, our communications infrastructure. Now we expect the same level of trust from distributed and dematerialised systems which may not be plugged into traditional accountability structures. Is that okay? What new-tech failsafe arrangements do we need?
- We delegate control of our digital lives and expose ourselves in cyberspace. To combat this addiction, we rely on complex ‘security’ structures in the form of multiple passwords, forgot-your-password options, captchas, liability rules, operating-system obsolescence and so forth, all of which put grit into the lubrication of business. Who are we trying to protect against what? What actually works?
- Use of technology in decision-making is now routine. AIs are making choices for us. When is it right, and when is it wrong, to delegate decisions to non-humans? How do we decide?
In 2019, we can pick up work on the 70-year-old problem. It’s not about whether machines can think, but about ‘the fields normally covered by the human intellect’, and whether (and if so how) we should assert control. The Turing Test contestants should have their fun, but the rest of us should stop pretending that the debate closed in the 1950s with the knowledge that robots will not take over the planet. It’s not about robots, it’s about our lives, and how we want to be governed.