Fortune: Elon Musk, these experts will bet you $500,000 that you’re wrong about the future of A.I.
Co-authored with Gary Marcus
Elon Musk has a habit of using Twitter and interviews to make big statements. This week, for instance, Musk told Jack Dorsey via tweet that AGI—artificial general intelligence, or A.I. with the power and flexibility of human intelligence—would most likely be here by 2029.
And when Elon talks, people listen. But should they?
He has a history of making bold predictions that don't always pan out; the self-driving taxis he promised, for example, still aren't here. In this particular instance, the idea that some quantum jump in A.I. is imminent might actually cause some people to panic, especially given that Musk himself once famously told a crowd at MIT that "with artificial intelligence, we are summoning the demon." At the same time, suggesting that humanlike intelligence is not far away might distract from all the current flaws in A.I. that so desperately need fixing.
The truth is, there is a giant gap between today's A.I., which is largely pattern recognition, and the kind of Star Trek–computer-level A.I. that Musk is dreaming about. Yes, A.I. can already do some amazing things, like recognizing speech and holding surrealistic but entertaining conversations about virtually any topic. But when it comes to reliability, dependability, and coherence, current A.I. is nowhere near what it needs to be. Despite years of promises, A.I. continues to regularly make bizarre and unexpected errors of "discomprehension." It also perpetuates stereotypes, spreads misinformation, and still fails even at everyday tasks like human-level driving, especially in unexpected circumstances. Just a few weeks ago, a "summoned" Tesla crashed into a $3 million jet that was parked at a mostly empty airport. Inside the field, these kinds of challenges are well known, but no firm fixes are at hand.
Remedying A.I.'s current flaws, and wisely using the A.I. we actually have now, must start with realism. Building an A.I. that is genuinely trustworthy is one of the most important, and most challenging, engineering missions of our time, and being glib about it isn't helping. In painting a rosy and likely unrealistic picture, Musk has, in our view, misled the public about how far we still have to go.
With so much at stake, we decided to call BS.
It began Tuesday, when one of us, Gary Marcus, drafted a $100,000 bet. In essence, the bet highlights the disconnect between Musk's latest claims about the future of A.I. and current reality. In the spirit of serious betting, Marcus laid out five very specific conditions.
To really say that AGI had been achieved, an A.I. would have to clear at least three of the following five benchmarks of intelligence, compiled in collaboration with NYU computer scientist Ernest Davis:
Watch movies and tell us accurately what is going on. Who are the characters? What are their conflicts and motivations? Et cetera.
Read novels and reliably answer questions about plot, character, conflicts, motivations, etc. The key is to go beyond the literal text and show a real understanding of the material.
Work as a competent cook in an arbitrary kitchen (a tip of the hat to Steve Wozniak's cup-of-coffee benchmark).
Reliably construct bug-free code of more than 10,000 lines from a natural-language specification, or by interacting with a nonexpert user. (Gluing together code from existing libraries doesn't count.)
Take arbitrary proofs from the mathematical literature, written in natural language, and convert them into a symbolic form suitable for symbolic verification.
The other of us, Vivek Wadhwa, thought the bet was terrific: fair, provocative, and something that could move the field of A.I. forward. So Wadhwa decided to match Marcus's wager. Within a couple of hours, there was a flurry on Twitter, Marcus's Substack had close to 10,000 views, and other experts in the field had offered their support for the wager, increasing the pool to $500,000. But not a word from Musk.
Then writer and futurist Kevin Kelly, who cofounded the Long Now Foundation, offered to host the bet on his website, side by side with an earlier, related bet that Ray Kurzweil made with Mitch Kapor. Ben Goertzel, for decades one of the leaders in trying to make AGI into something real rather than just a fantasy, tweeted that he thought the tests would signify real progress. World Summit AI, the world's leading A.I. conference, offered to host a debate. Others wondered aloud which benchmarks might fall first, and in what order.
Despite all that excitement in the A.I. community, there has still been no word from Musk.
Half a million bucks is chump change, of course, for someone who is perhaps the richest person in the world. But it is real money to us, and it symbolizes something important: the value of getting public voices who hype A.I.’s near-term prospects to stand by their claims.
Spreading misinformation about the potential of A.I. and its likely progress may serve Tesla by diverting attention from the many problems it has with its self-driving software, but it doesn't serve the public. If Musk believes what he says, he should stand up and take the bet; if not, he should own up to the reality that his pronouncements are little more than off-the-cuff hunches that even he realizes aren't worth the virtual paper they're printed on.
Gary Marcus is a scientist, bestselling author, and entrepreneur. He was founder and CEO of Geometric Intelligence, a machine-learning company acquired by Uber in 2016. His most recent book, coauthored with Ernest Davis, Rebooting AI, is one of Forbes's 7 Must Read Books in A.I.