The Coming Wave: Technology, Power, and the Twenty-first Century's Greatest Dilemma by Mustafa Suleyman, Michael Bhaskar

Summary and takeaways from the book.



The book is about developments in AI and their benefits and dangers.

The central theme of the book is the containment of AI, so that rogue AI cannot cause catastrophic harm.

There is no magic bullet. Government regulation is neither desirable nor workable.

The authors suggest ten independent but mutually reinforcing steps the industry needs to take to minimize the risks from AI.


ISBN: 9780593593950
Published: September 5, 2023
Pages: 352
Available on: Amazon



"the potential benefits of these technologies are vast and profound. With AI, we could unlock the secrets of the universe, cure diseases that have long eluded us..."

"But on the other hand, the potential dangers of these technologies are equally vast and profound. With AI, we could create systems that are beyond our control and find ourselves at the mercy of algorithms that we don’t understand. With biotechnology, we could manipulate the very building blocks of life, potentially creating unintended consequences for both individuals and entire ecosystems."

The authors rightly call it the "Twenty-first Century's Greatest Dilemma".

Dangers of AI

The relatively benign scenario is of AI replacing "intellectual manual labor" e.g. engineers, accountants, financial advisors. Technologies have been doing this for centuries.

This time is different: instead of replacing particular categories of jobs, AI will replace not just the existing jobs but the new jobs that emerge as well.

This goes beyond the "software is eating the world" scenario. It is not that engineers will develop AI software to replace financial advisors; it is AI eating the world. AI will replace the engineers, and all "intellectual manual labor", as well.

A more dangerous scenario involves synthetic biology: "a single person today likely “has the capacity to kill a billion people.” All it takes is motivation."

If we do nothing, "openness-induced catastrophe" is possible.

If we try to stop it, regulation and containment will require "techno-authoritarian dystopia" to monitor people at a level that will leave it open for abuse.

Our choice: "techno-authoritarian dystopia on the one hand, openness-induced catastrophe on the other".

"This is the core dilemma: that, sooner or later, a powerful generation of technology leads humanity toward either catastrophic or dystopian outcomes. I believe this is the great meta-problem of the twenty-first century."

Absolute containment is not possible

The benefits of AI are immense, and its dangers are evident and equally horrifying. Do we want to contain it? How much can we contain it without severely restricting research? How much containment is too much, and how much is too little?

"technologically stagnant societies are historically unstable and prone to collapse. Eventually, they lose the capacity to solve problems, to progress."

"Is it even possible to step away from developing new technologies and introduce a series of moratoriums? Unlikely". "controlling, curbing, or even stopping it—is not possible"

"With their enormous geostrategic and commercial value, it’s difficult to see how nation-states or corporations will be persuaded to unilaterally give up the transformative powers unleashed by these breakthroughs."
If one nation constrains AI, other nations will accelerate their AI development for commercial and military advantage.

Powerful nations could agree to benign and controlled use only, but amid mutual mistrust and deglobalization, that is not the direction they are heading.

The authors give an example of "techno-nationalism, a conscious, backward-looking rejection of modernity": the Ottoman Empire banned the printing press for centuries. "As the printing press roared across Europe in the fifteenth century, the Ottoman Empire had a rather different response. It tried to ban it. Unhappy at the prospect of unregulated mass production of knowledge and culture,"

"Make no mistake: standstill in itself spells disaster."

"Once established, waves are almost impossible to stop."

"Investment in AI technologies alone has hit $100 billion a year." "PwC forecasts AI will add $15.7 trillion to the global economy by 2030." It will be hard to contain with so much money invested, and so much potential benefit.

Government control no assurance or protection

"Here is a parable for technology in the twenty-first century. Software created by the security services of the world’s most technologically sophisticated state is leaked or stolen. From there it finds its way into the hands of digital terrorists working for one of the world’s most failed states and capricious nuclear powers. It is then weaponized, turned against the core fabric of the contemporary state: health services, transport and power infrastructures, essential businesses in global communications and logistics. In other words, thanks to a basic failure of containment, a global superpower became a victim of its own powerful and supposedly secure technology."

Government control, or even monopoly, of a technology is no assurance of protection from cyberattack.

Software for cyber warfare and lab-created pathogens leak or are stolen, and end up in the hands of criminals and nations.

Government regulation not workable

"Saying “Regulation!” in the face of awesome technological change is the easy part." "It’s a simple way to shrug off the problem."

"regulation alone is not enough. Convening a White House roundtable and delivering earnest speeches are easy; enacting effective legislation is a different proposition. As we’ve seen, governments face multiple crises independent of the coming wave—declining trust, entrenched inequality, polarized politics, to name a few. They’re overstretched, their workforces under-skilled and unprepared for the kinds of complex and fast-moving challenges that lie ahead."

"While garage amateurs gain access to more powerful tools and tech companies spend billions on R&D, most politicians are trapped in a twenty- four-hour news cycle of sound bites and photo ops. When a government has devolved to the point of simply lurching from crisis to crisis, it has little breathing room for tackling tectonic forces requiring deep domain expertise and careful judgment on uncertain timescales. It’s easier to ignore these issues in favor of low-hanging fruit more likely to win votes in the next election."
"Even technologists and researchers in areas like AI struggle with the pace of change. What chance, then, do regulators have, with fewer resources? "

Who will bell the cat?

"People need to trust that government officials, militaries, and other elites will not abuse their dominant positions."

"Trust in government, particularly in America, has collapsed. Postwar presidential administrations like those of Eisenhower and Johnson were trusted to do “what is right” by more than 70 percent of Americans,... For recent presidents such as Obama, Trump, and Biden, this measure of confidence has cratered, all falling below 20 percent."

"No less than 85 percent of Americans feel the country is “heading in the wrong direction.”"
Assuming we want to regulate and contain AI, do we trust the government to do it, knowing well that governments abuse this trust and their dominant positions?

George Monbiot also shares similar fears when he asks "Do new technologies make autocrats impossible to overthrow?"

"Democracy depends on an equality of arms. If governments acquire political weapons unavailable to their opponents, they become harder to dislodge. They now possess so many that I begin to wonder how an efficient autocracy, once established, might ever again be overthrown." - George Monbiot.

Scattered insights

"The price of scattered insights is failure".

"Discussions of technology sprawl across social media, blogs and newsletters, academic journals, countless conferences and seminars and workshops, their threads distant and increasingly lost in the noise. Everyone has a view, but it doesn’t add up to a coherent program. Talking about the ethics of machine learning systems is a world away from, say, the technical safety of synthetic bio. These discussions happen in isolated, echoey silos. They rarely break out."

"scattered insights are all we’ve got: hundreds of distinct programs across distant parts of the technosphere, chipping away at well- meaning but ad hoc efforts without an overarching plan or direction. At the highest level we need a clear and simple goal, a banner imperative integrating all the different efforts around technology into a coherent package."

"the goal has to be unified: containment."

"the odds are stacked against us in making this a reality. But, it doesn’t mean we shouldn’t try."

"Most organizations, however, not just governments, are ill-suited to the complex challenges on the way. As we’ve seen, even wealthy nations can struggle in the face of an unfolding crisis."

No magic bullet


"There will be no single, magic fix from a roomful of smart people in a bunker somewhere. Quite the opposite. Current elites are so invested in their pessimism aversion that they are afraid to be honest about the dangers we face. "

"There are no guarantees here, no rabbits pulled out of hats. Anyone hoping for a quick fix, a smart answer, is going to be disappointed. Approaching the dilemma, we are left in the same all-too-human position as always: giving it everything and hoping it works out."

Steps to containment

The authors suggest ten steps that, taken together, will minimize the risks from AI. Some are listed here; the reader is encouraged to read the book for the rest and for detailed explanations.

Engineer technical safety: drawing on experience from the space and nuclear programs.

Audit: allow audit of decisions and external scrutiny, including a confidential and public AI incident-reporting tool to share incidents so others can learn from them.

Build choke points and kill switches: critical points at which an AI system can be halted.

Make businesses aware of risks and of their purpose: not just profit, but serving humanity. They should be wary of building tools and products, just for profit, that have the potential to do incalculable harm.

Transparency: report failures and risks without repercussions. It can save lives.

Education, advocacy, outreach: making the public aware of the benefits and dangers.

Harmony and collaboration: not to work in isolation.
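The "choke points and kill switches" step above can be sketched in code. This is a purely illustrative toy, not anything the book specifies: a hypothetical agent loop that checks a shared stop flag before each step, so that triggering the switch halts the loop at its next checkpoint. All names here are invented for illustration.

```python
import threading


class KillSwitch:
    """Illustrative containment primitive: a shared flag that, once set,
    halts any loop that consults it. Hypothetical, not from the book."""

    def __init__(self):
        self._stop = threading.Event()  # thread-safe flag

    def trigger(self):
        # Flip the switch; irreversible in this sketch.
        self._stop.set()

    def triggered(self):
        return self._stop.is_set()


def run_agent_loop(kill_switch, max_steps=100):
    """Hypothetical autonomous loop: every iteration is a choke point
    where the system refuses to act if the switch has been triggered."""
    steps = 0
    for _ in range(max_steps):
        if kill_switch.triggered():
            break  # choke point: do not take the next action
        steps += 1  # stand-in for one unit of autonomous work
    return steps


ks = KillSwitch()
print(run_agent_loop(ks, max_steps=3))  # switch untouched: prints 3
ks.trigger()
print(run_agent_loop(ks, max_steps=3))  # switch set: prints 0
```

The design point, in the spirit of the book's argument, is that the halt check must be built into the system's critical path from the start, rather than bolted on after deployment.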







Related articles

The Age of AI and Our Human Future
Why Government Is the Problem by Milton Friedman
