Ray Kurzweil made a startling prediction in 1999 that appears to be coming true: that by 2023 a $1,000 laptop would have the computing power and storage capacity of a human brain.  He also predicted that Moore's Law, which postulates that the processing capability of a computer doubles every 18 months, would apply for 60 years, until 2025, and would then give way to new paradigms of technological change.
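The arithmetic behind that 18-month doubling claim is easy to check. A minimal sketch, assuming the article's 18-month period (Moore's own formulations used other intervals) and the 1999–2025 span Kurzweil cited:

```python
# Compound growth under Moore's Law as stated in this article:
# processing capability doubles every 18 months (1.5 years).
def doublings(years, period_years=1.5):
    """Number of doubling periods that fit in `years`."""
    return years / period_years

def growth_factor(years, period_years=1.5):
    """Total multiplicative growth after `years` of doubling."""
    return 2 ** doublings(years, period_years)

# From 1999 to 2025, the span over which Kurzweil predicted the law would hold:
span = 2025 - 1999            # 26 years
print(doublings(span))        # ~17.3 doublings
print(growth_factor(span))    # ~165,000x increase in capability
```

Roughly 17 doublings over that span implies a hundred-thousand-fold increase, which is the scale of change the rest of the article takes for granted.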

Kurzweil now says that the hardware needed to emulate the human brain may be ready even sooner than he predicted—in around 2020—using technologies such as graphics processing units (GPUs), which are ideal for brain-software algorithms. He predicts that the complete brain software will take a little longer: until about 2029.

The implications of all this are mind-boggling.  Within seven years—about when the iPhone 11 is likely to be released—the smartphones in our pockets will be as computationally intelligent as we are.  It doesn’t stop there, though.  These devices will continue to advance, exponentially, until they exceed the combined intelligence of the human race.  Already, our computers have a big advantage over us: they are connected via the Internet and share information with each other billions of times faster than we can.  It is hard to even imagine what becomes possible with these advances and what the implications are.

Doubts about the longevity of Moore's Law and the practicability of these advances are understandable. There are limits, after all, to how far transistors can be shrunk: nothing can be smaller than an atom.  Even short of that physical limit, there will be many other technological hurdles.  Intel acknowledges these limits but suggests that Moore's Law can keep going for another five to ten years.  So the silicon-based chips in our laptops will likely sputter their way to matching the power of a human brain.

Kurzweil says Moore's Law isn't the be-all and end-all of computing and that the advances will continue regardless of what Intel can do with silicon.  Moore's Law itself was just the latest of five paradigms in computing: electromechanical, relay, vacuum tube, discrete transistor, and integrated circuit. In his 1999 "Law of Accelerating Returns," Kurzweil explains that technology has been advancing exponentially since the advent of evolution on Earth and that computing power has risen accordingly: from the mechanical calculating devices used in the 1890 U.S. Census, through the machines that cracked the Nazi Enigma code, the CBS vacuum-tube computer, and the transistor-based machines used in the first space launches, to the integrated-circuit-based personal computer.

With exponentially advancing technologies, things move very slowly at first and then advance dramatically.  Each new technology follows an S-curve: slow initial progress, a steep, near-exponential middle, and a flattening out as the technology reaches its limits.  As one technology tops out, the next paradigm takes over.  That is what has been happening, and it is why there will be new computing paradigms after Moore's Law.
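The S-curve described above is commonly modeled with a logistic function. A minimal sketch, with illustrative parameter values that are not from the article:

```python
import math

def s_curve(t, limit=1.0, midpoint=0.0, rate=1.0):
    """Logistic S-curve: near-exponential growth early on,
    flattening out as the value approaches `limit`."""
    return limit / (1 + math.exp(-rate * (t - midpoint)))

# Early on, growth looks exponential; later it saturates near the limit.
early = s_curve(-4)   # ~0.018: slow start
mid   = s_curve(0)    # 0.5: steepest growth at the midpoint
late  = s_curve(4)    # ~0.982: flattening toward the limit
print(early, mid, late)
```

Stacking successive S-curves, each taking over as the previous one flattens, is one way to picture Kurzweil's claim that overall computing progress stays exponential across paradigm shifts.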

Already, there are significant advances on the horizon, such as the GPU, which uses parallel computing to deliver massive increases in performance, not only for graphics but also for neural networks, which mimic the architecture of the human brain. There are 3D chips in development that can pack circuits in layers. IBM and the Defense Advanced Research Projects Agency are developing cognitive-computing chips. New materials, such as gallium arsenide, carbon nanotubes, and graphene, are showing huge promise as replacements for silicon. And then there is the most interesting—and scary—technology of all: quantum computing.

Instead of encoding information as either a zero or a one, as today's computers do, quantum computers will use quantum bits, or qubits, whose states encode an entire range of possibilities by capitalizing on the quantum phenomena of superposition and entanglement.  Computations that would take today's computers thousands of years could be completed in minutes on these machines.
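One way to make superposition concrete is to count states: n classical bits hold exactly one of their 2^n possible values at a time, while n qubits in superposition carry an amplitude for all 2^n basis states at once. A minimal counting sketch (illustrative only; it says nothing about which problems actually get a quantum speedup):

```python
# n classical bits store exactly one of 2**n possible states at any moment;
# n qubits in superposition carry an amplitude for all 2**n basis states
# simultaneously, which is where the potential speedups come from.
def classical_states_held(n_bits):
    return 1                      # one concrete state at a time

def quantum_amplitudes(n_qubits):
    return 2 ** n_qubits          # amplitudes tracked at once

for n in (1, 10, 50):
    print(n, quantum_amplitudes(n))
# 50 qubits already require 2**50 (~10**15) amplitudes to simulate classically.
```

That exponential blow-up in the classical cost of simulation is the intuition behind the "thousands of years versus minutes" comparison above.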

Add artificial intelligence to the advances in hardware, and you begin to realize why luminaries such as Elon Musk, Stephen Hawking, and Bill Gates are worried about the creation of a “super intelligence”.  Musk fears that “we are summoning the demon”.  Hawking says it “could spell the end of the human race”.  And Gates wrote: “I don’t understand why some people are not concerned”.

Kurzweil tells me he is not worried.  He believes we will create a benevolent intelligence and use it to enhance ourselves.  He sees technology as a double-edged sword, just like fire, which has kept us warm but has also burned down our villages.  He believes that technology will enable us to address the problems that have long plagued human civilization—such as disease, hunger, and shortages of energy, education, and clean water—and that we can use it for good.

These advances in technology are a near certainty.  The question is whether humanity will rise to the occasion and use them in a beneficial way.  We can either build a Star Trek future, in which our civilization rises to new heights, or descend into a Mad Max world.  It is up to us.




  • GMail’s category-label system has, as far as I can tell, been forced onto its users without any testing. Sure, a classifier is a classifier, and if spam filtering works, why not general categories? But the performance of the general-category classifiers is no better than a coin toss. Google still believed in it. Don’t even mention the arrogance of changing our search queries or “autocorrecting” our typing without asking for our consent.

    AI is already ruling the world, just not in the way you imagined. People are trusting AI output over human judgement. We will probably be doomed sooner than seven years.

  • Manish Mehta

    You know, you should be careful with these general statements about AI ruling the world and the suppositions in such articles. There are many strong counter-arguments in the current technical literature that debunk this myth of AI and this concept of a machine-created super intelligence. So this discussion of destructive AI versus benevolent AI is, frankly, superficial and poorly researched.
    And this bit about iPhone/smartphone power being bigger than a supercomputer, yadadada…remember, it took a 4-bit computer and smart science to put man on the moon. By that yardstick we should have travelled to many more galaxies by now, riding on our AI and super-duper-powered phones and whatever else is out there…
    Remember, parallel processing, neural networks, deep-learning systems, and the like don’t make the grade on their own. These concepts and technologies have been around for a long while now. The best computing systems are actually those that are purpose-built for a specific objective, their task written into code that is a zillion times more reliable than your conventional buggy smartphone software, and that are tested to work failsafe in their actual physical environment.
    If faster computing chips could solve all of mankind’s problems…I just wish it were true.

  • Engineeer

    I’m one of those unemployed middle-aged engineers for whom you exhibited so much disdain in a recent column.

    How can I get a job writing this kind of stuff?