Foreign Policy: The World Must Regulate Tech Before It’s Too Late
Co-authored with Tarun Wadhwa
A century’s worth of change is about to be squeezed into a single decade. By 2030, entire industries are likely to be replaced with software code. Whole professions could wake up to find their livelihoods superfluous. Robots may be doing our chores, patrolling our streets, and fighting our wars.
Besides lives and jobs, entire nations could be upended: Digital currencies may destabilize global finance, robotics will likely accelerate the relocation of manufacturing, and the plunging cost of renewable energy will shift power away from petrostates. Nations will compete more fiercely than they have in generations. What’s more, all these changes will occur simultaneously and in ways that promise to be disorderly all around.
It’s therefore more urgent than ever that the nations of the world get together to hammer out a shared consensus on a broad range of technologies and their future use. The earthquake will not stop at borders or respect national policies—what’s urgently needed is a common understanding on the ethics of what’s permitted, what isn’t, and how to cooperate globally to make sure that countries, companies, research institutions, and individuals respect these bounds. Yes, we know all the arguments against governments intervening in scientific advances and free market innovation: that they will stifle them or use them only to their own ends. But not to act would be reckless. The giant bulldozer of challenges coming straight at us makes important collective decisions unavoidable.
So far, governments have tried haphazardly to stay in control. Countries have, for example, effectively broken the global internet into a series of national or regional networks under their control—including social media, payments, shopping, news, and data storage. But as technology rapidly advances, this quilt of different approaches will no longer work. Each further advance will raise new and fundamental questions of ethics and equity that transcend borders and affect the interests of everyone involved.
Facing different pressures, nations will come to very different conclusions about appropriate uses of technology. Until recently, societies could adapt to new technologies in slow motion—they could study their effects and determine how to regulate them over a span of decades. But the growing speed and breadth of change, powered by the widening availability of powerful yet low-cost new technologies, make regulatory change at such a slow pace untenable.
The battles that Facebook fought with Australia over who should pay for linking to news articles—pitting a corporation against a major country and its media—will come to seem quaint as we argue over the deadly and destabilizing effects of battlefield killing machines controlled by artificial intelligence. Whether COVID-19 was the result of an accident of nature or a failed lab experiment will be irrelevant as biohackers and governments readily engineer viruses to create pandemics.
We urgently need a consensus between governments that limits the use of a broad range of technologies and institutes a mechanism for reparations by countries liable for their misuse. But before governments can do that, societies need to decide what is acceptable. Laws are codified ethics, after all—and ethics are defined by social consensus. Every society approaches each advance with its own cultural, historical, and moral perspective.
These cultural differences were front and center in a series of Exponential Innovation workshops we ran with business executives in more than 30 countries. We put before them a hypothetical dilemma involving the use of CRISPR gene-editing technology, a fast and low-cost method of highly targeted genetic engineering. If their unborn child had a debilitating genetic disorder resulting in a lifetime of suffering, and a doctor had the technology to manipulate the fetus’s genes by giving the mother a single injection, what would they decide? As many as one-fifth of participants said they would refuse the novel treatment—but their reasons differed widely across cultures. In Mexico, Catholic participants worried about God’s will; in Malaysia, the executives discussed the technology’s consistency with the teachings of Islam; in Switzerland, many raised the social inequities the technology would create.
The questions and moral dilemmas raised by new technologies are frequently unexpected and difficult to grapple with. There are also no easy answers for how to integrate these technologies into our world safely and responsibly. Our collective ethical governance needs to keep pace with technology’s creep. We can achieve that only by building layers of joint understanding and consequent agreement on acceptable limits. Once we find those limits locally, we have to set them globally. Technology continually expands the boundaries of the possible, but policy and culture are what ultimately determine what we permit.
Finding common cause in such complex areas clearly won’t be easy, but the world has risen to the occasion before. Chemical weapons, ozone-depleting chemicals, climate change, marine protection, human rights, and the protection of sites of cultural and natural value—these are some of the issues on which nations have been able to find broad agreement. International treaties and agreements have set boundaries on what’s permissible, created oversight bodies, established pools of capital, and laid down the consequences for failing to abide by the rules.
Unanimous agreement is not necessary for progress. Genetic engineering is a case in point: Although we have been able to clone cattle, sheep, cats, dogs, deer, horses, mules, rabbits, and rats for decades, nobody has cloned full human beings—at least as far as we know. Even though there is no formal treaty banning the practice, instruments of global governance such as the United Nations Declaration on Human Cloning of 2005 have created powerful norms and guidelines that have kept the technology in check. Even partial experimentation on humans has been strongly discouraged across different social and political cultures. When a Chinese researcher, He Jiankui, announced that he had created the first gene-edited babies, the consequent global uproar led Chinese authorities to arrest him and later sentence him to three years in prison for unethical conduct, drawing a clear line between what is and what is not acceptable—even if the rebuke only came after the damage had occurred.
In the 1970s, a wave of environmentalism swept much of the globe and gave rise to two decades of global conferences with high ambitions. And it worked: An understanding of resource limits and the ecological fragility of the only planet available to us led to a slew of effective conventions, recommendations, and strategies. That should be our model—except we must move faster.
The time has come for us to reach a common understanding of the advancing technologies that stand to remake our world. International institutions and old-fashioned diplomacy may seem like a naive hope and an outdated approach. But in the face of the tremendous and truly unprecedented challenges before us, it’s the only chance we have. The alternative isn’t just technological disruption on a scale the world has never seen but social, economic, and political mayhem.