The Coming Wave

Technology, Power, and the Twenty-first Century’s Greatest Dilemma

Reviewed by Mr. William Hubick

Article published on: May 1, 2024 in the Chaplain Corps Journal

Read Time: < 5 mins

By Mustafa Suleyman and Michael Bhaskar. New York: Crown, 2023. 352 pp.

It was timely that I was asked to review The Coming Wave: Technology, Power, and the Twenty-first Century’s Greatest Dilemma by Mustafa Suleyman. I had recently connected with collaborators at the United States Army Institute for Religious Leadership to discuss rapidly advancing capabilities such as Extended Reality (XR) and Artificial Intelligence (AI). I have been attempting to raise awareness and generate a sense of urgency around the implications of recent leaps forward in AI. This book is the most comprehensive summary I have read to date of how AI and adjacent technologies are certain to disrupt and transform civilization. The speed of these advances is dizzying, yet few of us are tuned in to even the near-term implications.

What makes this book so powerful is that it systematically puts “the coming wave” of technology in historical, political, economic, and ethical context. Suleyman demonstrates the general applicability of AI to nearly every domain and the implications that follow, including why this wave is now inevitable. He connects AI to related advances in adjacent spaces such as biotechnology and robotics, noting how the complementary nature of each space accelerates and reinforces the others. He contrasts this technological wave with historical revolutions from the printing press to the internet (as well as a surprisingly poignant and relevant example about the impact of the stirrup). He explores how those sudden changes parallel today’s and why the wave now underway is dramatically different. He discusses the age-old challenge of balancing freedom versus security and the narrow, winding path between the potential dystopian outcomes of total openness or total closure. From every angle he presents complexity and contradictions. The same technology that makes an institution more powerful also leaves it more vulnerable to disruption by casual and entry-level participants. Suleyman discusses the challenges and risks associated with action, inaction, and the many variants between them. It is a worthwhile challenge for us to grapple with such complexity and contradictions and to seek the wisdom required to lead decisively and ethically.

The dangerously low barrier to entry is an important variable in “the coming wave.” We can and should celebrate the potential for untrained enthusiasts to participate on the frontiers of science and medicine. But those new superpowers will become equally available to criminals, cults, and terrorist organizations. Ukraine’s clever use of inexpensive drones to defeat Russian tanks can be framed as a classic underdog story. However, both nation-states and individual bad actors are watching and learning these emerging techniques, even as the cost of advanced drones drops and their range, intelligence, and autonomy increase. Imagine great swarms of drones, each autonomous and ready to deliver lethal payloads (explosive, chemical, or biological) when specified conditions are met. Domestically, it’s not a matter of if but when such technologies will deliver a September 11th-equivalent disaster of the AI age. Internationally, few if any powerful states will be outmaneuvered by simple drone tactics for long. Adversaries will not only improve their defenses but also invest heavily in creating powerful swarms of autonomous, weaponized drones. While the public expects ever-advancing technology in international conflicts, the domestic application is more worrisome. Bad actors inclined to maximize harm will have powerful, lower-risk, automated, and affordable means to deliver dangerous payloads. Powerful AI will be available to help acquire and prepare the dangerous payload, to plan the most impactful delivery strategy, to fly the drone, to avoid surprises, and to apply complex decision-making logic without a person in the loop. Of course, that same technology will save countless U.S. lives in our military operations, and AI-powered autonomous operations are a key technology we must embrace. Leaders need to understand these sweeping implications and begin preparing their teams and communities. The U.S. response needs to be multidimensional, spanning technology, processes, culture, and ethics.

Advances in AI are helping to accelerate the biotech industry far more quickly than most Americans realize, with near-term possibilities that belonged to the domain of science fiction just a few years or even months ago. The realities of altering the human genome, creating novel forms of life (including viruses), extending life indefinitely, brain scanning, and cognitive attacks need real consideration. Advances in synthetic biology will allow humanity to develop new, living materials that could revolutionize construction methods, including the concept of bio-engineered structures or “living” buildings. Should the U.S. be investing in creating genetically altered super-soldiers? Who, when, where, and under what conditions should the U.S. experiment with genetic modifications of viruses? The same work that improves our chances of identifying effective treatments may also permit a catastrophic leak. Grappling with the ethics, the policies, and the oversight required to both maximize benefits and avoid catastrophe is an important challenge of our times. It may be difficult to predict the impacts on any individual mission space, but fluency in these concepts is likely to be beneficial.

When catastrophe strikes, the natural political response will be to ramp up security and attempt to contain these suddenly obvious risks. Suleyman notes the potential for a dramatic worldwide rise in autocracy. Opportunistic leaders are likely to make the case for total government control to ensure safety, leveraging powerful AI systems for mass surveillance and sweeping control. While this may seem distant and hyperbolic, many of those components are already visible in China. Even in the U.S., Amazon’s Ring doorbells created an ad hoc neighborhood surveillance network of millions of cameras before policy discussions could weigh in. Not having enough time for a perfect response will be a repeated theme. Suleyman contends that leaders and engaged stakeholders will be challenged to accelerate the right processes while slowing others and creating intentional chokepoints and failsafes. Leaders and others will need to maintain focus in the face of overwhelming complexity, to develop new ethical guidelines, and to create the right channels for awareness and engagement.

While much of the book calls out for closing Pandora’s box with duct tape and a nail gun, there’s a twist ending. Not only will it be impossible to shut Pandora’s box, but we humans should not do so even if we could. Suleyman examines the fundamental nature of the global economy (i.e., the “Grand Bargain”) and the criticality of technology not only for growth but even for maintaining the status quo. Only with technology can we continue to feed the world and offer better lives to each generation. American scientist and historian Jared Diamond notes that archaeological evidence suggests that civilizations tend to collapse after about 400 years.1 Suleyman believes constant advances in technology explain how we humans have continued to stave off collapse, and that these advances remain necessary even to maintain modern society. He summarizes, as many of us have intuitively known, that technology will remain both our curse and our salvation for the rest of our civilization. Will we as a people be smart enough to survive long enough to break out of our indefinite-growth model on a finite planet? Will our human population stabilize and develop new and sustainable models? This book helps readers take inventory of the variables and consider them through relevant lenses. My guess is that our species is more likely to retain our model and simply point it toward the colonization of space. That great frontier of science and ethics awaits if we are successful in surfing “the coming wave.” Suleyman wraps up with an outline of ten ways in which society might be able to walk that narrow path: ten concentric circles starting close to AI with safety and audits and moving outward into government policy and world culture.

The Coming Wave provides a basis for imagining the amazing possibilities of AI and related technologies while confronting the need to mitigate their tremendous risks. Pessimism aversion is a powerful force, often leading us to dismiss or downplay potential negative outcomes in favor of a more comfortable outlook. This impulse, while understandable, can hinder our ability as a species and a Nation to prepare for and address the complex implications of AI. Acknowledging and overcoming this bias will be crucial, especially for leaders and decision-makers who are tasked with navigating these uncharted waters.

Endnotes

1 Jared Diamond, Collapse: How Societies Choose to Fail or Succeed (New York: Viking Press, 2005).

Author

Mr. William (Bill) Hubick is a technologist who facilitates novel innovation and tailored solutions for DOD customers. He holds a B.S. in Applied Communications Technology from Wayland Baptist University and maintains a Project Management Professional (PMP) certification. His background includes diverse roles in Mandarin Chinese linguistics, cybersecurity program management, software engineering, XR and AI discovery, and training, as well as co-founding the non-profit Maryland Biodiversity Project.