Will true AI be the best thing to ever happen to us, or the last?

AiPT! Science was out in full force for New York City’s recent World Science Festival — keep checking back all week for more coverage, with the tag WSF18!

From 2001: A Space Odyssey to Ex Machina, Artificial Intelligence (AI) is ubiquitous in pop culture. But how do we teach machines to think? Will we ever see the kind of sentience or sapience depicted in movies? And if so, will it be closer to the benign operating system Samantha from the film Her, or Skynet’s existential threat to humanity depicted in The Terminator franchise?

This past Friday, the World Science Festival in New York City assembled a panel of experts to discuss these questions, an event titled Teach Your Robots Well: Will Self-Taught Robots Be the End of Us?

Yann LeCun, Peter Ulric Tse, and Max Tegmark Photo: World Science Festival/Greg Kessler

Yann LeCun, Chief AI Scientist at Facebook, was a key figure in the paradigm shift from rules-based systems to machine learning. He outlined various methods of teaching machines, including a way to identify objects like, say, a car by training on millions of labeled pictures of various types.
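The idea of learning from labeled pictures can be sketched in a few lines of code. This is a deliberately tiny illustration, not LeCun’s actual methods: the “pictures” below are stand-in feature vectors invented for the example, and the classifier is a simple nearest-centroid rule rather than a real neural network.

```python
# Toy supervised learning: a machine "learns" what a car is only from
# human-labeled examples, never from an explicit rule.

def train(examples):
    """Average the feature vectors for each label into a centroid."""
    sums, counts = {}, {}
    for features, label in examples:
        counts[label] = counts.get(label, 0) + 1
        acc = sums.setdefault(label, [0.0] * len(features))
        for i, f in enumerate(features):
            acc[i] += f
    return {label: [v / counts[label] for v in acc]
            for label, acc in sums.items()}

def classify(centroids, features):
    """Pick the label whose centroid is closest to the new example."""
    def dist(c):
        return sum((a - b) ** 2 for a, b in zip(c, features))
    return min(centroids, key=lambda label: dist(centroids[label]))

# Invented "pictures": [wheels, wings] counts, each labeled by a human.
training_data = [([4, 0], "car"), ([4, 0], "car"), ([6, 0], "car"),
                 ([3, 2], "plane"), ([3, 2], "plane"), ([3, 4], "plane")]

model = train(training_data)
print(classify(model, [4, 0]))   # a wheeled, wingless object -> car
```

Real systems replace the hand-made feature vectors with raw pixels and the centroid rule with a deep network, but the principle, generalizing from millions of labeled examples, is the same.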

In another process called reinforcement learning, the machine trains itself through trial and error, being told whether its choice was good or bad. The machines may, for instance, play millions of games of chess or Go against themselves in order to learn, in a way that may remind one of how Doctor Strange used his mastery of time to determine the one outcome that would lead to Thanos’ defeat in the recent Avengers: Infinity War.
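Trial-and-error learning from a good/bad signal can be shown with a toy far simpler than self-play Go. In this invented example, an agent in a five-cell corridor is rewarded only for reaching the goal, and a standard Q-learning update gradually teaches it to walk right.

```python
# Minimal Q-learning sketch (illustrative only, not AlphaGo-style self-play):
# the agent is never told the rules, only whether an outcome was rewarded.
import random

N_STATES = 5            # cells 0..4; reaching cell 4 pays a reward
ACTIONS = (-1, +1)      # step left or step right
q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}

random.seed(0)
for episode in range(500):
    state = 0
    while state != N_STATES - 1:
        # Explore sometimes; otherwise exploit what has been learned so far.
        if random.random() < 0.3:
            action = random.choice(ACTIONS)
        else:
            action = max(ACTIONS, key=lambda a: q[(state, a)])
        nxt = min(max(state + action, 0), N_STATES - 1)
        reward = 1.0 if nxt == N_STATES - 1 else 0.0   # "good" or "bad" signal
        best_next = max(q[(nxt, a)] for a in ACTIONS)
        # Q-learning update: nudge the estimate toward reward + discounted future.
        q[(state, action)] += 0.5 * (reward + 0.9 * best_next - q[(state, action)])
        state = nxt

# After training, the greedy choice in every cell is "move right" (+1).
policy = [max(ACTIONS, key=lambda a: q[(s, a)]) for s in range(N_STATES - 1)]
print(policy)
```

Game-playing systems scale this same loop up enormously, with board positions as states and millions of self-play games providing the trial and error.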

One evolutionary advantage we take for granted, LeCun explained, is that both humans and animals possess a natural tendency toward object permanence, which machines lack. From early in life, we develop a basic understanding of physics, a model of the world which, when that model is violated, can be perceived as either funny or scary. To demonstrate his point, LeCun showed a video of an orangutan responding to a magic trick.

We’ve yet to figure out how to build machines capable of this basic observational learning, and we’ll never have truly intelligent machines, LeCun believes, until we solve that problem. That’s the direction he sees AI research going in over the next decades.

Peter Ulric Tse, Professor of Cognitive Neuroscience at Dartmouth, agrees we have an evolutionary advantage — one he calls a “representation of the invisible” — that arises from having a mental model. To explain this, he alludes to the Sherlock Holmes story The Adventure of Silver Blaze, in which a clue came from a dog not barking. Since the dog didn’t bark, the famous detective concluded the dog knew the crime’s culprit. It’s this sort of ability to form a working theory based on an absence of information that would be tricky to develop in AI — it lacks the kind of natural mental models LeCun described.

The panel also weighed in on AI “creativity.” Can machines compose music, write a screenplay, or make an original painting? Susan Schneider, Director of the AI, Mind and Society (AIMS) Group at the University of Connecticut, was skeptical such a thing would qualify as true creativity. LeCun too was unconvinced AI-generated art had merit, suggesting that, even if the machine determined the optimal means of creating art that moved us emotionally, something would be missing simply because the audience knows the artist lacked genuine emotion. Tse suspected AI won’t invent new artistic forms but merely learn to better imitate. But Max Tegmark, President of the Future of Life Institute, called this view that creativity and intelligence are mysterious and out of reach of machines “carbon chauvinism.”

Moderator Tim Urban, co-founder of the website Wait But Why, concluded the topic of AI creativity with a clip from the AI-written short film Sunspring, citing the almost incoherent film as proof AI at least has a long way to go before it can replace the Hollywood screenwriter.

Tim Urban and Susan Schneider Photo: World Science Festival/Greg Kessler

The panel suggested the kind of narrow AI skilled in specific tasks we’re already seeing in our mobile devices and self-driving cars will greatly expand in the next decade, perhaps even anticipating our needs. For instance, if you show signs of fatigue, it will suggest coffee unprompted and then proceed to order it for you once you’ve given the go-ahead. However, Tegmark urged the audience not to conflate this narrow AI with the kind of fully conscious machines seen in the movies that, “could, in principle, learn to do anything that we do.”

But would such a conscious AI choose to take over the world? The panel was a bit divided on that. LeCun argued it was a mistake to assume an intelligent machine would have all the characteristics of human intelligence and that, even in humans, the desire to conquer is not correlated with intelligence. Orangutans, which are almost as smart as we are, show no such drive, because they’re not social animals. Tegmark agreed the Terminator scenario was silly, but did stress that artificial general intelligence is a big deal, because intelligence is power.

LeCun says movies are more interesting when bad things happen, but most get the emergence of AI completely wrong. The one he thought “didn’t get it too wrong” was Her. Conversely, he called Ex Machina’s premise of a single person developing the conscious AI, Ava, as well as Ava’s desire for freedom, preposterous.

“The design of an intelligent machine is one of the biggest scientific and technological problems of our times, and it’s not going to be solved by one person at any one time,” he said. “It’s going to be progressive; it’s going to take thousands of people doing research.” He added that the breakthrough won’t be realized for at least another 5-10 years, and that no lab is ahead of any other by more than six months.

Schneider remarked that, after hearing this panel, she was even more worried about super-intelligence. Putting super-intelligent AI together with autonomous weapons, she said, “that’s not a happy marriage.” Tegmark stressed that super-intelligence should only be developed to serve all humankind, that its goals be aligned with our own, and that it retain those goals long-term.

Schneider took the final minute of the panel to echo Elon Musk on the possibility of people merging with AI to keep up with technological unemployment, urging society to start thinking about bringing “AI into the brain.” She concluded the future of Artificial Intelligence wouldn’t be like The Jetsons, with humans merely “surrounded by fancy, robotic equipment — the AI will change us as well.”

Check out the whole panel for yourself!