
"Gains from AI could mean humans live for leisure some day"

What should we be asking about AI? A chat with Miles Brundage, researcher at the Future of Humanity Institute and specialist in artificial intelligence policy.

Miles Brundage and Matthew Linares
26 September 2017

ML: Interest and activity in artificial intelligence (AI) have boomed in the last few years, with common questions including "To what extent will robots replace human labour?" and "Will AI supersede humans entirely?"

I note the recent launch of the Electronic Frontier Foundation (EFF) AI Progress Measurement Experiment to which you have contributed. As the subject reaches a new level of debate, what would you say are some key developments for people to keep track of? Perhaps it's wrong to focus on a few specifics given the scope of artificial intelligence as an issue, and we should treat it more like a classical discipline such as economics, which deals with countless concerns. Is there a framework for people trying to get a grip on the most important issues?

MB: The lack of a good framework for thinking about AI progress is perhaps the main remaining roadblock to a consensus on how fast things are progressing. Thanks to the EFF AI Progress Measurement Experiment and other ongoing initiatives like Stanford's AI Index, we now have, or will soon have, plenty of data.

But how important is, say, progress in playing Atari computer games versus speech recognition versus parsing of the role of words in sentences? It's hard to say.


The Atari computer game Q*bert is often played by artificial intelligence systems. The results are used as a benchmark to measure their progress. Image: TFC / CC 2.0

Another important, related question is: assuming we know what the right metric is, how predictable is it? How steady is AI progress over time? Theories abound, and there is more empirical and statistical work to be done in disentangling the contributions of algorithms, hardware, data, efficient software frameworks, etc. in pushing AI forward.

Personally, I've tried to make short-term forecasts in specific areas, like how well AI systems can play Atari games, and then see how they turn out after a year. If we can't forecast reliably in that timeframe, even with some error bars, we'll probably have even more trouble making longer-term forecasts.
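(To make this concrete, here is a minimal sketch, in Python, of the kind of one-year-ahead forecast with error bars Brundage describes: fit a simple trend to past benchmark scores and attach a rough uncertainty band. The benchmark figures below are invented for illustration and are not his actual forecasts; real numbers would come from a tracker like the EFF project mentioned above.)

    import numpy as np

    # Hypothetical benchmark scores by year (illustrative only).
    years = np.array([2013, 2014, 2015, 2016])
    scores = np.array([35.0, 48.0, 61.0, 70.0])

    # Fit a straight-line trend (score = slope * year + intercept) by least squares.
    slope, intercept = np.polyfit(years, scores, deg=1)

    # One-year-ahead point forecast for 2017.
    forecast = slope * 2017 + intercept

    # Crude error bar: residual spread around the fitted line
    # (a normal approximation, very rough with only four data points).
    residuals = scores - (slope * years + intercept)
    sigma = residuals.std(ddof=2)  # ddof=2 accounts for the two fitted parameters

    print(f"2017 forecast: {forecast:.1f} +/- {1.96 * sigma:.1f}")

A year later, one checks whether the realised score landed inside the interval; repeated misses even at this one-year range would suggest, as Brundage argues, that longer-term forecasts deserve still more scepticism.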

ML: Where can we look at your Atari game playing and related predictions? Have your predictions been successful?

MB: You can find a blog post I wrote on this here, which I will revisit at the end of the year to see how it went. My Atari forecasts in 2016 turned out pretty well, and it's too soon to say for sure about 2017. The early evidence suggests that they're pretty much on track, but we'll have to wait and see. I am starting to think that I might have been overly optimistic about progress in speech recognition, but again, I'll wait until the end of 2017 to revisit these.

ML: You say that some experts in the field are cautious about engaging in certain public debates. This is understandable given alarmist attitudes to AI and the gravity of the issues involved. It'd be interesting to hear whether there are particular issues experts might steer clear of, e.g. to avoid public alarm, and what the implications might be for public debate and policy.

MB: I think it's natural that AI researchers, having either experienced or read about the AI winters of the past, when hopes for AI turned to disappointment, don't want to over-hype the progress that's happening or look too far into the future. And many impacts of AI, like job displacement, are likely to be controversial if they turn out to be substantial, and not everyone wants to talk about this (even if they think big impacts are likely).

But there are plenty of exceptions, and many AI researchers are now speaking about issues they're concerned about, including job displacement, lethal autonomous weapons, and long-term safety. Thousands of AI researchers signed the open letters on some of these topics organised by the Future of Life Institute in 2015 and 2017, for example.

...it may make sense to not invest one's own identity too much in one's ability to do a job better or more cheaply than machines indefinitely.

ML: I'm steeling myself for a future where we may need to become cyborgs in order to keep up. I'm thinking about implants, merging my mind with networks (Neuralink), etc. I must admit, it's not going very well. Perhaps I'm somewhat conservative in my preferred human state. Are you making any personal preparations for a world where AI, in its various forms, is much more prevalent than today? Would you make any recommendations to others on how to get ready for the coming changes?

MB: I think people should probably plan for a world in which there is a fair amount of churn in the job market, so building up or keeping "fresh" one's skills in areas that are hard to automate is probably wise – for example, creativity, dealing with complexity, and social interaction.

Over the very long term, it's hard to say how big the impacts of AI will be or how quickly they will arise, and, especially for young people, it may make sense not to invest one's identity too much in the ability to do a job better or more cheaply than machines indefinitely.

Eventually, it's plausible that a large fraction of people will be able to have their basic needs taken care of via redistribution of the large productivity gains from AI and robotics, and that's potentially exciting for many people: one might get an early retirement and focus on what one wants to do with one's life, e.g. arts, continuing education, and leisure, rather than what one thinks is the best way to get paid.

ML: This idea of not investing your identity too much in a job role is fascinating. It'll be a real shift from prevalent approaches to self-worth and understanding. However, some folk contend that AI will also dominate the arts, literature, politics and almost every other domain of human life. Is there reason to think it won't?

MB: I think it's plausible that AI will eventually be capable of exceeding humans at performing any specific task in principle, though that could be very far from now. That doesn't mean that:

1) AI will actually do all of those tasks in the marketplace (we may be willing to pay a premium for humans performing the task, or pass laws requiring certain tasks to be done by humans), or

2) that humans can't also do those tasks voluntarily in leisure time, or

3) that there will be nothing left for humans to do or create. The number of possible art forms, leisure activities, etc. is probably infinite, even if, for any given instance of creative work, a machine could be directed to do it too.

ML: Will average policy-makers, politicians and citizens need to learn a whole new conceptual toolset to take part in debates? Will we be able to keep up, or will policy-making, in the age of AI, become an ever more specialised, elite field? Will folks need to upgrade their physical hardware to do so?

MB: It's getting increasingly easy to stay up to date with advances in AI, as there are lots of popular books being written on the topic, lectures to watch on YouTube, newsletters, etc. There do need to be clear, accessible explanations of some of these advances for the public and policy-makers to grasp without going into the technical details, but I think this is a solvable problem.

Just like many people don't know the finer points of quantum mechanics but learn a bit about physics in school or in popular science books, I think the necessary level of knowledge to understand the basics of AI is within the grasp of most people without much or any technical training, if it's clearly explained.

ML: Your own focus is around policy in AI. How do you feel about the way that fiction, art, and other domains are treating the issue of AI? Are current approaches helpful?

MB: Science fiction is, of course, a major way that AI is introduced to the public – in fact, surveys by the Royal Society show that it is mainstream media and science fiction that teach most people about AI, rather than direct exposure to experts or personal experience in developing AI.

A lot of AI-related science fiction sidesteps the real issues a bit in order to be entertaining, so one can be misled. For example, science fiction often dwells on the humanness and human appearance of AI systems, when in fact some of the more interesting advances happening today are in systems that don't look much (or at all) like humans and think in quite different ways.

ML: Would you recommend any recent AI-related fiction, artwork, or entertainment, for its accuracy, or for asking the right questions, or otherwise?

MB: I have some sort of beef with each piece of AI-related fiction, but there are two that I think are reasonably good on the accuracy front. Ex Machina demonstrates the perils of anthropomorphising AI systems too much.

Person of Interest demonstrates what a world with highly effective AI-based surveillance might look like, even if it arguably overestimates how good an AI could be at predicting the future; it also notably doesn't dwell on the physicality of AI, which I liked. 
