AI is gaining prominence across industry, research, and investment. In 2020, American AI start-ups raised almost $38 billion in funding, their Asian counterparts raised $25 billion, and their European counterparts raised $8 billion.
AI is not an industry, let alone a single product. It is an enabler of many industries and facets of human life: scientific research, education, manufacturing, logistics, transportation, defense, law enforcement, politics, advertising, art, culture, and more.
Where We Are
In late 2017, a quiet revolution occurred. AlphaZero, an artificial intelligence (AI) program developed by Google DeepMind, defeated Stockfish — until then, the most powerful chess program in the world. After training for just four hours by playing against itself, AlphaZero emerged as the world’s most effective chess program.
In early 2020, researchers at the Massachusetts Institute of Technology (MIT) announced the discovery of a novel antibiotic that was able to kill strains of bacteria that had, until then, been resistant to all known antibiotics.
The MIT researchers did something else: they invited AI to participate in the discovery process.
The researchers first trained an AI on a set of molecules whose antibacterial properties were already known. When it was done training, they instructed the AI to survey a library of 61,000 molecules (FDA-approved drugs and natural products) for molecules that (1) the AI predicted would be effective as antibiotics, (2) did not look like any existing antibiotics, and (3) the AI predicted would be nontoxic. Of the 61,000, one molecule fit the criteria. The researchers named it halicin — a nod to the AI HAL in the film 2001: A Space Odyssey.
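The three-part filter lends itself to a sketch. The following toy version is invented for illustration; the models, thresholds, and function names are assumptions, not the MIT team’s actual pipeline:

```python
# Hypothetical sketch of a halicin-style screen; models and thresholds invented.
def screen(library, activity_model, toxicity_model, known_antibiotics, similarity):
    """Return molecules meeting all three criteria described above."""
    hits = []
    for mol in library:
        predicted_effective = activity_model(mol) > 0.9        # (1) predicted antibiotic
        looks_novel = all(similarity(mol, known) < 0.3         # (2) unlike known antibiotics
                          for known in known_antibiotics)
        predicted_nontoxic = toxicity_model(mol) < 0.1         # (3) predicted nontoxic
        if predicted_effective and looks_novel and predicted_nontoxic:
            hits.append(mol)
    return hits
```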
Halicin was a triumph. Compared to chess, the pharmaceutical field is radically more complex.
The AI did not just process data more quickly than humanly possible; it also detected aspects of reality humans have not detected, or perhaps cannot detect.
In contrast to AI that does a particular task, such as playing chess or discovering antibiotics, models like GPT‑3 generate possible responses to various inputs (and thus are called generative models).
AlphaZero’s victory, halicin’s discovery, and the humanlike text produced by GPT‑3 are mere first steps — not just in devising new strategies, discovering new drugs, or generating new text (dramatic as these achievements are) but also in unveiling previously imperceptible but potentially vital aspects of reality.
As with all technologies, AI is not only about its capabilities and promise but also about how it is used.
For millennia, humanity has occupied itself with the exploration of reality and the quest for knowledge. The process has been based on the conviction that, with diligence and focus, applying human reason to problems can yield measurable results.
Humanity has traditionally assigned what it does not comprehend to one of two categories: either a challenge for the future application of reason or an aspect of the divine, not subject to processes and explanations vouchsafed to our direct understanding. The advent of AI obliges us to confront whether there is a form of logic that humans have not achieved or cannot achieve, exploring aspects of reality we have never known and may never directly know.
Only very rarely have we encountered a technology that challenged our prevailing modes of explaining and ordering the world. But AI promises to transform all realms of human experience. And the core of its transformations will ultimately occur at the philosophical level, transforming how humans understand reality and our role within it.
AI‑powered technology will become a permanent companion in perceiving and processing information, albeit one that occupies a different “mental” plane from humans. Whether we consider it a tool, a partner, or a rival, it will alter our experience as reasoning beings and permanently change our relationship with reality.
Four centuries after Descartes promulgated his maxim “I think, therefore I am,” a question looms: If AI “thinks,” or approximates thinking, who are we?
AI will usher in a world in which decisions are made in three primary ways: by humans (which is familiar), by machines (which is becoming familiar), and by collaboration between humans and machines (which is not only unfamiliar but also unprecedented).
As AI’s role in defining and shaping the “information space” grows, its role becomes more difficult to anticipate.
When various groups or nations adopt differing concepts or applications of AI, their experiences of reality may diverge in ways that are difficult to predict or bridge.
When AI is applied to achieve comparable breakthroughs in diverse fields of endeavor, the world will inevitably change. The results will not simply be more efficient ways of performing human tasks: in many cases, AI will suggest new solutions or directions that will bear the stamp of another, nonhuman, form of learning and logical evaluation.
A novel human-machine partnership is emerging: First, humans define a problem or a goal for a machine. Then a machine, operating in a realm just beyond human reach, determines the optimal process to pursue. Once a machine has brought a process into the human realm, we can try to study it, understand it, and, ideally, incorporate it into existing practice.
Humanity has centuries of experience using machines to augment, automate, and in many cases replace manual labor.
Although AI can draw conclusions, make predictions, and make decisions, it does not possess self-awareness — in other words, the ability to reflect on its role in the world.
But while the number of individuals capable of creating AI is growing, the ranks of those contemplating this technology’s implications for humanity — social, legal, philosophical, spiritual, moral — remain dangerously thin.
This is a revolution for which existing philosophical concepts and societal institutions leave us largely unprepared.
How We Got Here
Every society has, in its own way, inquired into the nature of reality: How can it be understood? Predicted? Shaped? Moderated?
The classical world, Middle Ages, Renaissance, and modern world all cultivated their concepts of the individual and society, theorizing about where and how each fits into the enduring order of things.
The emerging AI age is increasingly posing epochal challenges to today’s concept of reality.
In the West, the central esteem of reason originated in ancient Greece and Rome.
The conviction that what we see reflects reality — and that we can fully comprehend at least aspects of this reality using discipline and reason — inspired the Greek philosophers and their heirs to great achievements.
Still, the classical world perceived seemingly inexplicable phenomena for which no adequate explanations could be found in reason alone. These mysterious experiences were ascribed to an array of gods.
The rise of monotheistic religions shifted the balance in the mixture of reason and faith that had long dominated the classical quest to know the world.
In these Middle (or medieval) Ages — the period from the fall of Rome, in the fifth century, to the Turkish Ottoman Empire’s conquest of Constantinople, in the fifteenth — humanity, at least in the West, sought to know God first and the world second. During the medieval epoch, scholasticism became the primary guide for the enduring quest to comprehend perceived reality, venerating the relationship between faith, reason, and the church — the latter remaining the arbiter of legitimacy when it came to beliefs and (at least in theory) the legitimacy of political leaders.
In the fifteenth and sixteenth centuries, the Western world underwent twin revolutions that introduced a new epoch — and, with it, a new concept of the role of the individual human mind and conscience in navigating reality.
The vision of doctrinal, philosophical, and political unity gave way to diversity and fragmentation — in many cases attended by the overthrow of established social classes and violent conflict between contending factions.
The rediscovery of Greek science and philosophy inspired new inquiries into the underlying mechanisms of the natural world and the means by which they could be measured and cataloged. This exploration of historical knowledge brought with it an increasing sense of agency over the mechanisms of society, and it demanded a philosophy to guide it. The philosophers of the Enlightenment answered the call, declaring reason — the power to understand, think, and judge — both the method of and purpose for interacting with the environment.
In this atmosphere of intellectual challenges, once axiomatic concepts — the existence of physical reality, the eternal nature of moral truths — were suddenly open to question. As a result of these pioneering philosophical explorations, the relationship between reason, faith, and reality grew increasingly uncertain. Into this breach stepped Immanuel Kant.
According to Kant’s account, human reason had the capacity to know reality deeply, albeit through an inevitably imperfect lens. Objective reality in the strictest sense — what Kant called the thing‑in‑itself — is ever-present but inherently beyond our direct knowledge. Kant argued that because the human mind relies on conceptual thinking and lived experience, it could never achieve the degree of pure thought required to know this inner essence of things.
Whether or not human perception and reason ought to be the definitive measure of things, for a time, lacking an alternative, they became so. But AI is beginning to provide an alternative means of accessing — and thus understanding — reality.
For generations after Kant, the quest to know the thing‑in‑itself took two forms: ever more precise observation of reality and ever more extensive cataloging of knowledge.
By separating reason from tradition, the Enlightenment produced a new phenomenon: armed reason, melded to popular passions, was reordering and razing social structures in the name of “scientific” conclusions about history’s direction.
In the late nineteenth and early twentieth centuries, progress at the frontiers of physics began to reveal unexpected aspects of reality.
The brilliant and iconoclastic theoretical physicist Albert Einstein revealed a picture of physical reality that appeared newly mysterious. Space and time were united as a single phenomenon in which measurements of distance and duration depended on the observer, no longer bound by the absolutes of classical physics.
Developing quantum mechanics to describe this substratum of physical reality, Werner Heisenberg and Niels Bohr challenged long-standing assumptions about the nature of knowledge. Heisenberg demonstrated that a particle’s position and momentum could not both be measured with complete precision at the same moment. This “uncertainty principle” (as it came to be known) implied that a completely accurate picture of reality might not be available at any given time. Bohr, in his own pioneering work, stressed that observation affected and ordered reality: the human mind was forced to choose, among multiple complementary aspects of reality, which one it wanted to know accurately at a given moment. In philosophy, Ludwig Wittgenstein counseled that knowledge was to be found in generalizations about similarities across phenomena, which he termed “family resemblances.”
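For reference, the uncertainty principle is commonly stated in the textbook form (supplied here for context, not drawn from this text):

$$\Delta x \,\Delta p \;\ge\; \frac{\hbar}{2}$$

where $\Delta x$ and $\Delta p$ are the uncertainties in position and momentum and $\hbar$ is the reduced Planck constant: sharpening knowledge of the one necessarily blurs the other.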
Later, in the late twentieth century and the early twenty-first, this thinking informed theories of AI and machine learning.
Even if AI would never know something in the way a human mind could, an accumulation of matches with the patterns of reality could approximate and sometimes exceed the performance of human perception and reason.
As humans began to approach the limits of their cognitive capacity, they became willing to enlist machines — computers — to augment their thinking in order to transcend those limitations. As we grow increasingly dependent on digital augmentation, we are entering a new epoch in which the reasoning human mind is yielding its pride of place as the sole discoverer, knower, and cataloger of the world’s phenomena.
We have reached a tipping point: we can no longer conceive of some of our innovations as extensions of that which we already know.
Digital natives do not feel the need, at least not urgently, to develop concepts that, for most of history, have compensated for the limitations of collective memory. They can (and do) ask search engines whatever they want to know, whether trivial, conceptual, or somewhere in between.
When information is contextualized, it becomes knowledge. When knowledge compels convictions, it becomes wisdom. The digital world has little patience for wisdom; its values are shaped by approbation, not introspection.
From Turing to Today — and Beyond
In 1950, the British mathematician Alan Turing proposed a practical test of machine intelligence: whether a machine’s conduct could be made indistinguishable from a human’s. What mattered, Turing posited, was not the mechanism but the manifestation of intelligence. Because the inner lives of other beings remain unknowable, he explained, our sole means of measuring intelligence should be external behavior. With this insight, Turing sidestepped centuries of philosophical debate on the nature of intelligence.
In 1956, computer scientist John McCarthy further defined artificial intelligence as “machines that can perform tasks that are characteristic of human intelligence.”
AIs are imprecise, dynamic, emergent, and capable of “learning.”
The building blocks of these “learning” techniques are algorithms, sets of steps for translating inputs into repeatable outputs.
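A minimal invented example of an algorithm in this sense, in Python: fixed steps, and the same input always yields the same output.

```python
def normalize(scores):
    """Translate raw scores into fractions of their total; repeatable for a given input."""
    total = sum(scores)
    return [score / total for score in scores]

normalize([2, 3, 5])   # always returns [0.2, 0.3, 0.5] for this input
```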
Humanity has always dreamed of a helper — a machine capable of performing tasks with the same competence as a human. In Greek mythology, the divine blacksmith Hephaestus forged robots capable of performing human tasks, such as the bronze giant Talos, who patrolled the shores of Crete and protected it from invasion. France’s Louis XIV in the seventeenth century and Prussia’s Frederick the Great in the eighteenth century harbored a fascination for mechanical automata and oversaw the construction of prototypes.
Early attempts to create practically useful AIs explicitly encoded human expertise — via collections of rules or facts — into computer systems.
In the 1990s, a breakthrough occurred: a set of renegade researchers set aside many of the earlier era’s assumptions and shifted their focus to machine learning. In short, a conceptual shift took place: we went from attempting to encode human-distilled insights into machines to delegating the learning process itself to the machines.
To enable machine learning, what mattered was the overlap between various representations of a thing, not its ideal — in philosophical terms, Wittgenstein, not Plato.
Using machine learning to create and adjust models based on real-world feedback, modern AI can approximate outcomes and analyze ambiguities that would have stymied classical algorithms.
In 1958, Cornell Aeronautical Laboratory researcher Frank Rosenblatt had an idea: Could scientists develop a method for encoding information similar to the method of the human brain, which encodes information by connecting approximately one hundred billion neurons with quadrillions (10¹⁵) of synapses? He decided to try.
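Rosenblatt’s construct became known as the perceptron. A minimal modern restatement of its learning rule follows (an illustrative sketch in Python with numpy, not Rosenblatt’s original formulation or hardware):

```python
import numpy as np

def train_perceptron(X, y, lr=0.1, epochs=20):
    """X: examples as rows; y: labels in {0, 1}. Returns learned weights and bias."""
    w = np.zeros(X.shape[1])
    b = 0.0
    for _ in range(epochs):
        for xi, yi in zip(X, y):
            pred = 1.0 if xi @ w + b > 0 else 0.0
            error = yi - pred            # 0 when correct; +1 or -1 when wrong
            w += lr * error * xi         # strengthen or weaken the "synapses"
            b += lr * error
    return w, b

# Toy usage: learn the logical AND of two inputs (hypothetical data).
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([0, 0, 0, 1], dtype=float)
w, b = train_perceptron(X, y)
```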
Today’s deep networks typically contain around ten layers.
As of this writing, three forms of machine learning are noteworthy: supervised learning, unsupervised learning, and reinforcement learning.
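A compact, invented illustration of the three paradigms, using plain numpy (all data hypothetical):

```python
import numpy as np

# Supervised learning: fit to labeled examples (least-squares line through the origin).
x = np.array([0.0, 1.0, 2.0, 3.0])
y = np.array([0.1, 0.9, 2.1, 2.9])        # labels provided by a "supervisor"
w = (x @ y) / (x @ x)                     # learned slope

# Unsupervised learning: find structure without labels (one k-means assignment step).
points = np.array([0.1, 0.2, 3.9, 4.1])
centers = np.array([0.0, 4.0])
clusters = np.argmin(np.abs(points[:, None] - centers[None, :]), axis=1)

# Reinforcement learning: learn from reward (one tabular Q-learning update).
Q = np.zeros((2, 2))                      # states x actions
s, a, r, s_next = 0, 1, 1.0, 1            # one observed transition and its reward
alpha, gamma = 0.5, 0.9                   # step size and discount factor
Q[s, a] += alpha * (r + gamma * Q[s_next].max() - Q[s, a])
```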
Generative neural networks can create. First, they are trained using text or images. Then they produce novel text or images — synthetic but realistic. The applications of these so‑called generators are staggering. A common training technique for generative AI pits two networks with complementary learning objectives against each other. Such networks are referred to as generative adversarial networks, or GANs. The objective of the generator network is to create candidate outputs; the objective of the discriminator network is to distinguish those synthetic outputs from real examples, feedback that in effect screens poor outputs out.
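In the standard formulation introduced by Goodfellow and colleagues (supplied here for reference; not part of this text), generator $G$ and discriminator $D$ play a minimax game over real data $x$ and random noise $z$:

$$\min_G \max_D \; \mathbb{E}_{x \sim p_{\mathrm{data}}}\big[\log D(x)\big] \;+\; \mathbb{E}_{z \sim p_z}\big[\log\big(1 - D(G(z))\big)\big]$$

The discriminator is rewarded for telling real examples from synthetic ones; the generator is rewarded for fooling it.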
When the algorithmic logic that personalizes searching and streaming begins to personalize the consumption of news, books, or other sources of information, it amplifies some subjects and sources and, as a practical necessity, omits others completely. The consequence of de facto omission is twofold: it can create personal echo chambers, and it can foment discordance between them.
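The mechanism reduces to a ranking with a cutoff. A toy sketch follows (all names invented); whatever falls below the cutoff is, for that user, never seen at all.

```python
def personalize(stories, user_history, affinity, k=5):
    """Rank stories by affinity with the user's history; surface only the top k."""
    ranked = sorted(stories, key=lambda s: affinity(s, user_history), reverse=True)
    return ranked[:k], ranked[k:]   # (shown and amplified, omitted entirely)
```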
Sometimes, operating beyond the bounds of human experience and unable to conceptualize or generate explanations, AI may produce insights that are true but beyond the frontiers of (at least current) human understanding.
AI cannot reflect upon what it discovers. The inability of AI to contextualize or reflect like a human makes its challenges particularly important to attend to.
When AI is employed, we should seek to understand its errors — not so we can forgive them but so we can correct them. Bias besets all aspects of human society and, in all of them, merits a serious response. Another source of AI misidentification is rigidity: a model trained for one context may fail, sometimes badly, when conditions shift slightly.
AI does not possess what we call common sense.
AI’s brittleness is a reflection of the shallowness of what it learns. Accordingly, the development of procedures to assess whether an AI will perform as expected is vital. The division between the learning and inference phases in machine learning permits a testing regime like this to function. Auditing datasets provides another quality-control check.
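The separation of learning from inference underwrites such a regime: once learning ends, the model is frozen, so its expected performance can be estimated on examples withheld from training. A minimal sketch with invented data:

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 3))                 # hypothetical examples
y = (X[:, 0] + X[:, 1] > 0).astype(float)     # hypothetical labels

# Learning phase: fit on the first 80 examples only.
w, *_ = np.linalg.lstsq(X[:80], y[:80], rcond=None)

# Inference phase: the frozen model faces the 20 held-out examples.
preds = (X[80:] @ w > 0.5).astype(float)
print("held-out accuracy:", (preds == y[80:]).mean())
```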
As of this writing, AI is constrained by its code in three ways (a toy sketch follows the list below).
- First, the code sets the parameters of the AI’s possible actions.
- Second, AI is constrained by its objective function, which defines and assigns what it is to optimize.
- Finally, and most obviously, AI can only process inputs that it is designed to recognize and analyze.
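A toy sketch of all three constraints, with every name invented for illustration:

```python
class ToyThermostat:
    """Hypothetical thermostat 'AI' illustrating the three code-level constraints."""

    MIN_SETTING, MAX_SETTING = 15.0, 30.0   # (1) parameters bound the possible actions

    def objective(self, setting, target):
        # (2) the objective function defines what is optimized:
        # negative squared distance from the target temperature.
        return -(setting - target) ** 2

    def act(self, target):
        # (3) only inputs the system is designed to recognize are processed.
        if not isinstance(target, (int, float)):
            raise TypeError("input outside the recognized schema")
        # Search the bounded action space for the setting the objective scores best.
        candidates = [self.MIN_SETTING + 0.5 * i for i in range(31)]
        return max(candidates, key=lambda s: self.objective(s, target))
```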
One day, AIs may be able to write their own code.
Predicting the rate of AI’s advance will be difficult.
Forecasting how swiftly AI will be applied to additional fields is equally difficult. But we can continue to expect dramatic increases in the capacities of these systems.
Whether AI stays narrow or becomes general, it will become more prevalent and more potent.
The world we know will become both more automatic and more interactive (between humans and machines), even if it is not populated with the multipurpose robots of science fiction movies.
Global Network Platforms
We rely on AI to assist us in pursuing daily tasks without necessarily understanding precisely how or why it is working at any given moment.
Without significant fanfare — or even visibility — we are integrating nonhuman intelligence into the basic fabric of human activity. This is unfolding rapidly and in connection with a new type of entity we call “network platforms”: digital services that provide value to their users by aggregating those users in large numbers, often at a transnational and global scale.
Thus, although they are operated as commercial entities, some network platforms are becoming geopolitically significant actors by virtue of their scale, function, and influence. Many of the most significant network platforms originated in the United States (Google, Facebook, Uber) or China (Baidu, WeChat, Didi Chuxing).
In countries where they operate, certain network platforms have become integral to individual life, national political discourse, commerce, corporate organization, and even government functions.
Various individuals, corporations, political parties, civic organizations, and governments will inevitably have differing views on the proper operation and regulation of AI‑enabled network platforms.
Even if our perspectives differ, we must aim to understand AI‑enabled network platforms by assessing their implications for individuals, companies, societies, nations, governments, and regions. And we must act urgently on each level.
Positive network effects did not originate with network platforms. Prior to the rise of digital technology, however, the occurrence of such effects was relatively rare.
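One common heuristic for such effects (Metcalfe’s law, supplied here for context rather than drawn from this text) holds that a network’s value grows roughly with the square of its user count, since each of $n$ users can connect with the other $n-1$:

$$V(n) \;\propto\; n(n-1) \;\approx\; n^2$$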
Physical distances and national or linguistic differences are rarely obstacles to expansion: the digital world is accessible from anywhere with internet connectivity, and network platforms’ services can typically be delivered in several languages.
The digital world has transformed our experience of daily life. As an individual navigates throughout the day, he or she now benefits from, and contributes to, vast shoals of data.
The individual comes to rely, often instinctively or subconsciously, on software processes to organize and cull necessary or useful information. AI‑enabled network platforms have accelerated this integration process and deepened the connections between individuals and our digital technology.
As the individual interacts with the AI, and as the AI adapts to the individual’s preferences (internet browsing and search queries, travel history, apparent income level, social connections), a kind of tacit partnership begins to form.
The relationship between an individual, a network platform, and its other users is a novel combination of intimate bond and remote connection.
To a large extent, AI is judged by the utility of its results, not the process used to reach those results.
The prevalence of this type of constant AI companion is likely to increase. Our experience of day‑to‑day reality is being transformed. AI‑enabled network platforms have the capacity to shape human activity in ways that may not be clearly understood — or are even clearly definable or expressible — by the human user.
A network platform operating according to its standard commercial objectives and the demands of its users may, in effect, be crossing into the realm of governance and national strategy.
As AI operates to recommend content and connections, categorize information and concepts, and predict user preferences and goals, it may inadvertently reinforce particular individual, group, or societal choices.
The intersection between network platform and governmental arenas will produce unpredictable and, in some cases, highly contested results. Rather than clear outcomes, however, we are more likely to arrive at a series of dilemmas with imperfect answers.
For societies accustomed to the free exchange of ideas, grappling with AI’s role in assessing and potentially censoring information has introduced difficult fundamental debates.
The dynamics of positive network effects will tend to support only a handful of participants who are leading the technology and the market for their particular product or service.
Many governments will have an incentive to guarantee the continued operation of AI‑driven online services from other countries that have already been incorporated into fundamental aspects of their society. The emerging geopolitics of network platforms comprises a key new aspect of international strategy — and governments are not the only players.
The United States has given rise to a globe-spanning, technologically leading set of privately operated network platforms that rely increasingly on AI. The roots of this achievement lie in academic leadership at universities that attract top global talent, a start‑up ecosystem that enables participants to bring innovations rapidly to scale and profit from their developments, and government support of advanced R&D (through the National Science Foundation, DARPA, and other agencies).
China has similarly supported the development of network platforms that are already global in scale, but, at the same time, are poised to expand even further. While Beijing’s regulatory approach has encouraged fierce competition among domestic technology players (with global markets as the ultimate goal), it has largely excluded (or mandated heavily tailored offerings by) non-Chinese counterparts within China’s borders.
While East and Southeast Asia, the home of companies with global reach, produce key technologies such as semiconductors, servers, and consumer electronics, they are also the home of locally created network platforms.
Europe, unlike China and the United States, has yet to create homegrown global network platforms or cultivate the sort of domestic digital technology industry that has supported the development of major platforms elsewhere.
India, while still an emerging force in this arena, has substantial intellectual capital, a relatively innovation-friendly business and academic environment, and a vast reserve of technology and engineering talent that could support the creation of leading network platforms.
Russia, despite a formidable national tradition in math and science, so far has produced few digital products and services with consumer appeal beyond its own borders.
Shaped primarily by these governments and regions, a multidisciplinary contest for economic advantage, digital security, technological primacy, and ethical and social objectives is unfolding.
One approach has been to treat network platforms and their AI as primarily a matter of domestic regulation. Another approach has been to treat network platforms’ emergence and operations as primarily an issue of international strategy.
For countries and regions that do not produce homegrown network platforms, the choice for their immediate future seems to be among (1) limiting reliance on platforms that could provide leverage to an adversary government; (2) remaining vulnerable — for example, to another government’s potential ability to access data about its citizens; and (3) counterbalancing potential threats against each other.
That AI‑enabled network platforms created by one society may function and evolve within another society and become inextricable from that country’s economy and national political discourse marks a fundamental departure from prior eras.
The push and pull of individuals, companies, regulators, and national governments seeking to shape and channel AI‑enabled network platforms will grow increasingly complex, conducted alternately as a strategic contest, a trade negotiation, and an ethical debate.
Security and World Order
For as long as history has been recorded, security has been the minimum objective of an organized society.
AI holds the prospect of augmenting conventional, nuclear, and cyber capabilities in ways that make security relationships among rivals more challenging to predict and maintain and conflicts more difficult to limit.
No major country can afford to ignore AI’s security dimensions.
Nuclear, cyber, and AI technologies now exist side by side.
Progress and competition in these fields will involve transformations that will test traditional concepts of security.
A sober effort at AI arms control is not at odds with national security; it is an attempt to ensure that security is pursued and achieved in the context of a human future.
As entrants to the nuclear, cyber, and AI arenas multiply, the arms-control era still holds lessons worthy of consideration.
Into this world of unresolved strategic paradoxes, new capabilities and attendant complexities are emerging. The first is cyber conflict, which has magnified vulnerabilities as well as expanded the field of strategic contests and the variety of options available to participants. The second is AI, which has the capacity to transform conventional, nuclear, and cyber weapons strategy. The emergence of new technology has compounded the dilemmas of nuclear weapons.
Conventional and nuclear weapons exist in physical space, where their deployments can be perceived and their capabilities at least roughly calculated. By contrast, cyber weapons derive an important part of their utility from their opacity; their disclosure may effectively degrade some of their capabilities.
The attributes that lend cyber weapons their utility render the concept of cyber arms control difficult to conceptualize or pursue.
A central paradox of our digital age is that the greater a society’s digital capacity, the more vulnerable it becomes.
Conversely, in the event of a digital disruption, the low-tech state, the terrorist organization, and even the individual attacker may assess that they have comparatively little to lose.
Nations are developing and deploying AI that facilitates strategic action across a wide range of military capabilities, with potentially revolutionary effects on security policy.
The introduction of nonhuman logic to military systems and processes will transform strategy.
Efforts to conceptualize a cyber balance of power and AI deterrence are in their infancy, if that. Until these concepts are defined, planning will carry an abstract quality.
Once they are released into the world, AI‑facilitated cyber weapons may be able to adapt and learn well beyond their intended targets; the very capabilities of the weapon might change as AI reacts to its environment.
Governments of technologically advanced countries should explore the challenges of mutual restraint supported by enforceable verification.
Three qualities have traditionally facilitated the separation of military and civilian domains: technological differentiation, concentrated control, and magnitude of effect. Throughout history, many technologies have been dual-use. Until now, though, none has been all three: dual-use, easily spread, and potentially substantially destructive. AI breaks this paradigm.
A process of mutual education between industry, academia, and government can help bridge this gap and ensure that key principles of AI’s strategic implications are understood in a common conceptual framework.
Leaders of this era can aspire toward six primary tasks in the control of their arsenals, with their broad and dynamic combination of conventional, nuclear, cyber, and AI capabilities.
- First, leaders of rival and adversarial nations must be prepared to speak to one another regularly.
- Second, the unsolved riddles of nuclear strategy must be given new attention and recognized for what they are — one of the great human strategic, technical, and moral challenges.
- Third, leading cyber and AI powers should endeavor to define their doctrines and limits (even if not all aspects of them are publicly announced) and identify points of correspondence between their doctrines and those of rival powers.
- Fourth, nuclear-weapons states should commit to conducting their own internal reviews of their command-and-control and early warning systems.
- Fifth, countries — especially the major technological ones — should create robust and accepted methods of maximizing decision time during periods of heightened tension and in extreme situations.
- Finally, the major AI powers should consider how to limit continued proliferation of military AI or whether to undertake a systemic nonproliferation effort backed by diplomacy and the threat of force.
AI and Human Identity
With the rise of AI, the definitions of the human role, human aspiration, and human fulfillment will change.
To the two traditional ways by which people have known the world, faith and reason, AI adds a third.
For humans accustomed to agency, centrality, and a monopoly on complex intelligence, AI will challenge self-perception.
With perceptions of reality complementary to humans’, AI may emerge as an effective partner for people.
For some, the experience of AI will be empowering. In most societies, a small but growing cohort understands AI.
For managers, the deployment of AI will have many advantages. AI’s decisions are often as accurate as or more accurate than humans’ and, with the proper safeguards, may actually be less biased.
For the entrepreneur offering new products, the administrator wielding new information, and the developer creating increasingly powerful AI, advances in these technologies may enhance senses of agency and choice.
As AI transforms the nature of work, it may jeopardize many people’s senses of identity, fulfillment, and financial security.
Whatever AI’s long-term effects prove to be, in the short term, the technology will revolutionize certain economic segments, professions, and identities. Societies need to be ready to supply the displaced not only with alternative sources of income but also with alternative sources of fulfillment.
These tensions — between reasoned explanations and opaque decision making, between individuals and large systems, between people with technical knowledge and authority and people without — are not new. What is new is that another intelligence, one that is not human and often inexplicable in terms of human reason, is its source.
Coming of age in the presence of AI will alter our relationships, both with one another and with ourselves.
Today’s near-constant stream of media increases the cost, and thus decreases the frequency, of contemplation.
To make sense of our place in this world, our emphasis may need to shift from the centrality of human reason to the centrality of human dignity and autonomy.
Each society must determine in the first instance the full range of permissible and impermissible uses of AI in various domains.
Reality explored by AI, or with the assistance of AI, may prove to be something other than what humans had imagined. It may have patterns we have never discerned or cannot conceptualize. Its underlying structure, penetrated by AI, may be inexpressible in human language alone.
The AI revolution will occur more quickly than most humans expect. Unless we develop new concepts to explain, interpret, and organize its consequent transformations, we will be unprepared to navigate it or its implications.
AI and the Future
Individuals and societies that enlist AI as a partner to amplify skills or pursue ideas may be capable of feats — scientific, medical, military, political, and social — that eclipse those of preceding periods.
By helping humanity navigate the sheer totality of digital information, AI will open unprecedented vistas of knowledge and understanding.
With AI, new horizons are opening before us. Previously, the limits of our minds constrained our ability to aggregate and analyze data, filter and process news and conversations, and interact socially in the digital domain. AI permits us to navigate these realms more effectively.
The need for an ethic that comprehends and even guides the AI age is paramount. But it cannot be entrusted to one discipline or field.
At every turn, humanity will have three primary options: confining AI, partnering with it, or deferring to it.
AI will transform our approach to what we know, how we know, and even what is knowable.
The AI era will elevate a concept of knowledge that is the result of partnership between humans and machines.
AI’s dynamism and capacity for emergent — in other words, unexpected — actions and solutions distinguish it from prior technologies. Unregulated and unmonitored, AIs could diverge from our expectations and, consequently, our intentions.
AI’s dynamic and emergent qualities generate ambiguity in at least two respects. First, AI may operate as we expect but generate results that we do not foresee. Second, in some applications, AI may be unpredictable, with its actions coming as complete surprises.
While humans may carefully specify AI’s objectives, as we give it broader latitude, the paths AI takes to accomplish its objectives may come to surprise or even alarm us.
The AI age needs its own Descartes, its own Kant, to explain what is being created and what it will mean for humanity.
Imperfection is one of the most enduring aspects of human experience, especially of leadership. If AI displays superhuman capabilities in some areas, their use must be assimilable into imperfect human contexts.
AI and other emerging technologies (such as quantum computing) seem to be moving humans closer to knowing reality beyond the confines of our own perception.
Human intelligence and artificial intelligence are meeting, being applied to pursuits on national, continental, and even global scales. Understanding this transition, and developing a guiding ethic for it, will require commitment and insight from many elements of society: scientists and strategists, statesmen and philosophers, clerics and CEOs. This commitment must be made within nations and among them. Now is the time to define both our partnership with artificial intelligence and the reality that will result.