Fortune Magazine: The AI wars have not even begun


To judge by the news headlines, it would be easy to believe that artificial intelligence (AI) is about to take over the world. Kai-Fu Lee, a Chinese venture capitalist, says that AI will soon create tens of trillions of dollars of wealth and claims China and the U.S. are the two AI superpowers.

There is no doubt that AI has incredible potential. But the technology is still in its infancy; there are no AI superpowers. The race to implement AI has hardly begun, particularly in business. What's more, the most advanced AI tools are open source, which means that everyone has access to them.

Tech companies are generating hype with cool demonstrations of AI, such as Google’s AlphaGo Zero, which learned one of the world’s most difficult board games in three days and could easily defeat its top-ranked players. Several companies are claiming breakthroughs with self-driving vehicles. But don’t be fooled: The games are just special cases, and the self-driving cars are still on their training wheels.

AlphaGo, the predecessor of AlphaGo Zero, developed its intelligence through self-play reinforcement learning, a technique that pits two copies of an AI system against each other so that each learns from the other's play. The trick was that before the copies battled each other, the system received a lot of coaching from a library of human expert games. And, more importantly, its problem and outcomes were well defined.
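To make the self-play idea concrete, here is a minimal sketch (a toy of my own construction, not DeepMind's code): two copies of a simple learner play rock-paper-scissors against each other, and each reinforces the moves that win. AlphaGo's training is vastly more sophisticated, but the learn-by-playing-yourself loop is the same in spirit.

    import random

    ACTIONS = ["rock", "paper", "scissors"]
    BEATS = {"rock": "scissors", "paper": "rock", "scissors": "paper"}

    class Agent:
        """A toy self-play learner: it comes to prefer moves that have won."""
        def __init__(self):
            self.weights = {a: 1.0 for a in ACTIONS}

        def act(self):
            # Sample a move in proportion to its learned weight.
            return random.choices(ACTIONS,
                                  weights=[self.weights[a] for a in ACTIONS])[0]

        def learn(self, action, won):
            # Reinforce winning moves, dampen losing ones.
            self.weights[action] *= 1.1 if won else 0.95

    a, b = Agent(), Agent()
    for _ in range(10_000):
        move_a, move_b = a.act(), b.act()
        if move_a != move_b:                      # ignore draws
            a_won = BEATS[move_a] == move_b
            a.learn(move_a, a_won)
            b.learn(move_b, not a_won)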

Unlike board games and arcade games, business systems don't have defined rules and outcomes. They work with very limited datasets, which are often disjointed and messy. Nor do the computers perform critical business analysis; it is the job of humans to comprehend the information the systems gather and to decide what to do with it. Humans can deal with uncertainty and doubt; AI cannot. Google's Waymo self-driving cars have collectively driven more than 9 million miles, yet they are nowhere near ready for release. Tesla's Autopilot, after gathering 1.5 billion miles' worth of data, still won't even stop at traffic lights.

Today’s AI systems do their best to reproduce the functioning of the human brain’s neural networks, but their emulations are very limited. They use a technique called deep learning: after you tell an AI exactly what you want it to learn and provide it with clearly labeled examples, it analyzes the patterns in those data and stores them for future application. The accuracy of its patterns depends on the completeness of the data, so the more examples you give it, the more useful it becomes.
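A minimal sketch of that workflow, using the widely available scikit-learn library on a synthetic dataset (both choices are mine, purely for illustration): the network is handed labeled examples, fits the patterns in them, and, as the paragraph above suggests, gets more accurate as it sees more examples.

    import numpy as np
    from sklearn.neural_network import MLPClassifier

    rng = np.random.default_rng(0)

    def labeled_examples(n):
        # Synthetic data: points in the plane, labeled by which side of a
        # boundary they fall on. In practice the labels come from humans.
        X = rng.normal(size=(n, 2))
        y = (X[:, 0] + X[:, 1] > 0).astype(int)
        return X, y

    X_test, y_test = labeled_examples(2_000)
    for n in (20, 200, 2_000):                   # more examples, better patterns
        X, y = labeled_examples(n)
        net = MLPClassifier(hidden_layer_sizes=(16,), max_iter=2_000).fit(X, y)
        print(f"{n} examples -> test accuracy {net.score(X_test, y_test):.2f}")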

Herein lies a problem, though: An AI is only as good as the data it receives, and is able to interpret them only within the narrow confines of the supplied context. It doesn’t “understand” what it has analyzed, so it is unable to apply its analysis to scenarios in other contexts. And it can’t distinguish causation from correlation.
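The correlation-versus-causation failure is easy to reproduce in a toy model (my own construction, not from the article): give a classifier a feature that merely happens to track the label during training, and it leans on that crutch; when the coincidence breaks in a new context, accuracy collapses.

    import numpy as np
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(0)
    n = 5_000
    label = rng.integers(0, 2, n)
    real_signal = label ^ (rng.random(n) < 0.25)   # genuinely causal, but noisy
    spurious = label.copy()                        # a perfect coincidence in training
    model = LogisticRegression().fit(
        np.column_stack([real_signal, spurious]), label)

    # New context: the coincidence no longer holds.
    label2 = rng.integers(0, 2, n)
    real2 = label2 ^ (rng.random(n) < 0.25)
    broken = rng.integers(0, 2, n)                 # spurious feature is now noise
    print("accuracy once the correlation breaks:",
          model.score(np.column_stack([real2, broken]), label2))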

The larger issue with this form of AI is that what it has learned remains a mystery: a set of indefinable responses to data. Once a neural network has been trained, not even its designer knows exactly how it does what it does. This has come to be called the black box of AI.
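You can see the black box for yourself in a few lines (a sketch assuming scikit-learn; the task here is arbitrary): train a small network, then print what it has "learned". The result is matrices of floating-point numbers, nothing resembling rules or logic.

    import numpy as np
    from sklearn.neural_network import MLPClassifier

    rng = np.random.default_rng(0)
    X = rng.normal(size=(500, 4))
    y = (X[:, 0] * X[:, 1] > 0).astype(int)      # some rule the net must discover

    net = MLPClassifier(hidden_layer_sizes=(8, 8), max_iter=3_000).fit(X, y)

    for i, layer in enumerate(net.coefs_):
        print(f"layer {i} weights, shape {layer.shape}:")
        print(np.round(layer, 2))                # what it learned: inscrutable floats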

Businesses can’t afford to have their systems make unexplained decisions: they face regulatory requirements and reputational concerns, and they must be able to understand, explain, and prove the logic behind every decision their systems make.

Then there is the issue of reliability. Airlines are installing AI-based facial-recognition systems, and China is building its national surveillance network on the same technology. AI is being used for marketing and credit analysis and to control cars, drones, and robots. It is being trained to perform medical data analysis and to assist or replace human doctors. The problem is that, in all such uses, AI can be fooled.

Google published a paper last December showing that its researchers could trick AI systems into recognizing a banana as a toaster. Researchers at the Indian Institute of Science have just demonstrated that they could confuse almost any AI system without even needing, as the Google team did, knowledge of what the system had used as a basis for learning. With AI, security and privacy are an afterthought, just as they were early in the development of computers and the Internet.
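The trick behind such attacks can be sketched in a few lines (a fast-gradient-sign-style toy of my own, not the method from either paper): nudge every input feature slightly in the direction that most increases the model's error, and the prediction flips even though the input barely changes.

    import numpy as np

    rng = np.random.default_rng(0)
    w = rng.normal(size=50)                  # stand-in for a trained linear classifier
    predict = lambda x: 1 / (1 + np.exp(-(w @ x)))   # probability of class 1

    x = rng.normal(size=50)
    x = x * np.sign(w @ x)                   # make sure the clean input is class 1
    print("clean input ->", round(predict(x), 3))

    # For a linear model, the gradient of the score w.r.t. the input is just w,
    # so stepping against sign(w) is the most damaging small perturbation.
    eps = 1.5 * (w @ x) / np.abs(w).sum()    # just big enough to cross the boundary
    x_adv = x - eps * np.sign(w)
    print("perturbed   ->", round(predict(x_adv), 3), "(tiny change, new answer)")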

Leading AI companies have handed over the keys to their kingdoms by making their tools open source. Software used to be considered a trade secret, but developers realized that letting others examine and build on their code could lead to great improvements in it. Microsoft, Google, and Facebook have released their AI code to the public, free to explore, adapt, and improve. China’s Baidu has also made its self-driving software, Apollo, available as open source.

Software’s real value lies in its implementation: what you do with it. Just as China built its tech companies and India created a $160 billion IT services industry on top of tools created by Silicon Valley, anyone can use openly available AI tools to build sophisticated applications. Innovation has now globalized, creating a level playing field—especially in AI.
