Reid Hoffman, Greg Beato: Superagency

Throughout history, new technologies have regularly sparked visions of impending dehumanization and societal collapse.

As much as cooperation defines us, competition does too. We form groups of all kinds, at all levels, to amplify our efforts, often deploying our collective power against other teams.

You’ll never get the future you want simply by prohibiting the future you don’t want.

Iterative deployment is the term that OpenAI, ChatGPT’s developer, uses to describe its method of bringing its products into the world.

Humanity has entered the chat

With zero marketing dollars behind it, ChatGPT attracted its first one million users in five days. It was impressively knowledgeable, stunningly versatile and convincingly human.

Even when LLMs seem to possess humanlike common-sense reasoning, they don’t. Instead, they are making statistically probable predictions about patterns of language. LLMs have no real capacity for common-sense reasoning, no lived experience, and no grounded model of the world.
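The idea of “statistically probable predictions” can be illustrated with a toy bigram model, which picks the next word purely from observed word-pair frequencies. This is a deliberately simplified sketch, not how production LLMs work (they use neural networks over trillions of tokens), but the principle of prediction-from-statistics is the same:

```python
from collections import Counter, defaultdict

# Tiny toy corpus; real LLMs train on vastly larger text collections.
corpus = "the cat sat on the mat the cat ate the fish".split()

# Count how often each word follows each other word (a bigram model).
following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

def predict_next(word):
    """Return the statistically most probable next word."""
    counts = following[word]
    return counts.most_common(1)[0][0] if counts else None

print(predict_next("the"))  # "cat" - it follows "the" in 2 of 4 cases
```

The model has no idea what a cat is; it only knows which words tend to follow which, which is exactly the distinction the passage above draws.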

It’s easy to be an optimist if your time horizons are long.

Can we continue to maintain control of our lives, and successfully plot our own destinies?

Human agency is a fundamental concept in philosophy, sociology, and psychology. It holds that you, as an individual, have the capacity to make your own choices, act independently, and thus exert influence over your life. A sense of agency can endow your life with purpose and meaning.

As AI systems evolve, their capacity for self-directed learning, problem-solving, and executing complex series of tasks without constant human oversight is increasing.

As Homo techne, we are defined by our capacity and commitment to creating new ways of being in the world through our tool-making. Now we have an opportunity to develop new supertools. AI is increasing your agency. Intelligence itself is now a tool – a scalable, highly configurable, self-compounding engine for progress.

A more technologically driven society also required a more educated populace.

OpenAI invited the public to participate in the development process. It described this approach as iterative deployment.

People fall into different groups based on their attitudes toward AI:

Doomers – believe we’re on a path to a future where, in worst-case scenarios, superintelligent, completely autonomous AIs that are no longer well aligned with human values may decide to destroy us altogether.

Gloomers – are both highly critical of AI and highly critical of Doomers. They favor a prohibitive, top-down approach, in which development and deployment are closely monitored and controlled by official regulation.

Zoomers – argue that the productivity gains and innovation AI will create far exceed any negative impacts it produces. They want clear runway and complete autonomy to innovate.

Bloomers – their perspective is fundamentally optimistic. They believe AI can accelerate human progress in countless domains. They pursue mass engagement in real-world conditions – which is what you get with iterative deployment.

The authors place themselves in the Bloomer camp.

In the twenty-first century, individual agency is more closely aligned with national agency than ever before.

Big knowledge

In Nineteen Eighty-Four, Orwell presented a harrowing vision of God-level techno-surveillance and its dehumanizing effects.

In his 1964 bestseller The Naked Society, Vance Packard wrote: “My own hunch is that Big Brother, if he ever comes to these United States, may turn out to be not a greedy power seeker, but rather a relentless bureaucrat obsessed with efficiency. And he, more than the simple power seeker, could lead us to that ultimate of horrors, a humanity in chains of plastic tape.”[1]

Privacy isn’t the only way to achieve certain ends. Especially in a networked world, a strong public identity creates autonomy and agency too. Public identity equates to discoverability, trustworthiness, influence, power, agency. It’s a form of social capital.

Willingness to live more publicly creates a great deal of collective value.

Humanity has already reached the point where we’re producing more information than we can effectively make use of on our own.

What could possibly go right?

Technology is itself one of humanity’s most proven levers for creating positive change at scale. That’s why solutionism’s inverse, problemism, is a real issue we face too.

Emphasizing critique over action, precaution and even prohibition over innovation, problemism can also do real harm to society. When you only focus on what could possibly go wrong, you inevitably discount what could possibly go right.

We must accept some level of risk and uncertainty so that we can act and move forward.

Asking what could possibly go right means committing to action, then iterating and learning from successes, failures, and criticisms alike.

Billions of people forge some of their most meaningful relationships with dogs, cats, and other animals that have a relatively limited range of communicative powers.

While AI models are explicitly not conscious or self-aware, they are, in their own statistically probable way, performatively kind and empathetic in ways that often surpass human norms.

The triumph of the private commons

As long as users derived all the value from Google’s efforts to productively leverage this behavioral data, Zuboff maintains, it was a fair exchange. Zuboff calls this process the “behavioral value reinvestment cycle”.

Zuboff may be wrong when she claims that Big Tech companies “do not establish constructive producer-consumer reciprocities”.

To create a state-of-the-art LLM takes massive amounts of training data. If courts determine that training on data to extract patterns and information, rather than to reproduce or incorporate an original work in recognizable forms, doesn’t fall under fair use, we’ll need novel solutions to manage licensing at such enormous scale.

Rather than extraction operations, we see something more akin to data agriculture.

The commons are property we all share, property that’s owned not by any one person or group, but that’s held, well, in common.

Elinor Ostrom, winner of the 2009 Nobel Prize in Economics, defined eight principles that characterize successful “common-pool resource” institutions.

In their research, Erik Brynjolfsson and Avinash Collis found that the median compensation Facebook users were willing to accept to give up the service for one month was $48. They concluded that the internet is basically a consumer surplus-generating machine: the median amount it would take for people to give up search engines for a year was a whopping $17,530; email, $8,414; and digital maps, $3,648.

What is the value of this completely unprecedented access to global Big Knowledge that billions of people now take for granted?

Democratizing access to knowledge and opportunities, the private commons enables individual agency, educational opportunity, social mobility, and, ultimately, professional growth.

In his 1968 essay “The Tragedy of the Commons”, Garrett Hardin argued that the more useful or valuable people find a shared resource, the more likely they are to collectively ruin it through overuse: individuals always try to maximize their own gain when exploiting a common resource. He believed the only way to solve the dilemma was private property, or something formally like it, or coercive laws or taxing devices.

Ostrom, on the other hand, showed through years of research that local communities could, and often did, effectively manage common-pool resources sustainably without resorting to privatization or government oversight.

Digital commons can function very differently than traditional physical commons. Instead of carefully controlling access to scarce and hard-to-replace resources, an obvious strategy for digital commons is to treat their resources as something to be cultivated as much as possible and used proactively.

In the digital world, the tragedy of the commons, if there has to be one at all, occurs when people try to put limits on how much data we create, how much we share, and who can share it.

AI will soon act as an intelligent interface layer between you and most, or maybe even all, of the services you use.

Testing, testing, 1, 2, infinity

Can machines think? That is the question Alan Turing posed in 1950. The process of determining whether a machine can be mistaken for a human is now known as the Turing Test.

The AI models you see today are built on years of carefully administered tests designed to measure their performance across multiple dimensions.

Benchmarks have long played a key role in progress throughout the computer industry. One well-known AI benchmark is SuperGLUE (GLUE stands for General Language Understanding Evaluation). It tests models on eight tasks, including multisentence reading comprehension, word-sense disambiguation, and coreference resolution.

While testing and regulation both aim to standardize and control, testing elevates the focus from compliance to continuous improvement. It’s regulation, gamified.

As it turns out, a truly effective benchmark can optimize itself into obsolescence, by inspiring performance gains so great the benchmark no longer poses a sufficient challenge to the models it was designed to measure.

How certain do we need to be of LLM performance to trust it, and how do we get there?

AI researchers, developers, and ethicists often emphasize the importance of two related concepts: model interpretability and explainability. Interpretability focuses on the degree to which a human can consistently predict a model’s results. Explainability refers to the how of a model’s decision-making process.

Instead of demanding perfect performance –  an unrealistic standard we don’t apply to humans – we should focus on establishing acceptable error rates and continuously improving overall system reliability.

Chatbot Arena is an open-source platform for evaluating LLMs based on human preferences.

One key aspect of OpenAI’s iterative deployment approach is how it enables decentralized hands-on testing at a scale you could never achieve in a lab.

Innovation is safety

Gloomers tend to associate safety with attributes like caution, deliberation, and attentiveness. But the pace of development matters too. It’s also important to always consider the global context of technology.

Rapid development also means adaptive development.

The precautionary principle holds that new technologies are “guilty until proven innocent”. Many entrepreneurs, technologists and investors favor an approach known as permissionless innovation. This approach explicitly establishes ample breathing space for innovation, experimentation, and adaptation.

In the early 1990s, US policymakers signaled through a series of policy statements that permissionless innovation would be the norm for the internet and digital technology in America. Congress passed the Telecommunications Act of 1996, including a provision known as Section 230 – the twenty-six words that created the internet. In 1997, Bill Clinton and Al Gore released the “Framework for Global Electronic Commerce”, a hands-off, no-new-taxes approach to regulating business transactions on the worldwide computer network.

Learning happens much quicker now. Iterative deployment supplements this paradigm with the explicitly prosocial component of measured distribution.

New risks will emerge alongside new capabilities. Instead of settling for nothing less than risk-free models, however, we should make it our goal to understand the risks that occur in real-world conditions and systematically work to manage and reduce them. Iterative deployment is how you do this.

As a general template, the approach we took with automobility makes sense for AI too. Instead of depending on regulators and industry experts to develop and refine AI behind closed doors, in centralized undemocratic ways, we should continue to engage in iterative deployment that helps us better understand how people are using AI, see where issues develop as usage scales, and adjust accordingly.

Informational GPS

GPS was developed in 1973 by the Department of Defense.

Big Knowledge gives us the power to go on road trips with zero planning and synchronize plans with friends along the way. While GPS serves many purposes, across multiple domains, its breakthrough application was turn-by-turn navigation.

Conceptualizing LLMs as a form of informational GPS provides a familiar model.

Since LLMs generate outputs based on statistical probability rather than fixed rules, a single prompt can produce different outcomes each time you enter it.
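This non-determinism comes from sampling: models assign probabilities to candidate tokens and then draw from that distribution rather than always taking the single most likely one. A minimal sketch of temperature-based sampling follows; the token scores are invented for illustration and are not from any real model:

```python
import math
import random

# Hypothetical next-token scores (logits); illustrative only.
logits = {"Paris": 4.0, "London": 2.5, "Rome": 2.0}

def sample_token(logits, temperature=1.0, rng=random):
    # Softmax with temperature: higher temperature flattens the
    # distribution, making less likely tokens more probable.
    scaled = {t: math.exp(v / temperature) for t, v in logits.items()}
    total = sum(scaled.values())
    r = rng.random() * total
    for token, weight in scaled.items():
        r -= weight
        if r <= 0:
            return token
    return token  # fallback for floating-point edge cases

# The same "prompt" (here, the same logits) can yield different
# outputs on different calls.
samples = {sample_token(logits, temperature=1.5) for _ in range(200)}
```

At a very low temperature the sampler becomes nearly greedy and almost always returns the top-scoring token, which is why lowering temperature makes chatbot outputs more repeatable.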

Large language models are, at heart, systems for analyzing, synthesizing, and mapping language flows.

Mobility has always served as a foundation for self-improvement.

LLMs can help upskill beginners very quickly, which has an almost democratizing effect.

Unlike their human counterparts, LLMs are instantly accessible, infinitely patient, and always willing to answer just one more question.

This openness to instant user feedback immediately distinguished hands-on LLMs like ChatGPT from most earlier forms of AI.

When you’re interacting with LLMs, it’s useful to provide as many coordinates as you can:

  • What are you seeking to learn?
  • Is there a specific goal or intent behind that request?
  • What details about you might help the LLM tailor a response?
  • Is there a specific role or persona the LLM itself should assume for this interaction?
  • What factors might make its outputs more relevant to you?
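These coordinates can be assembled into a single structured prompt. A minimal sketch of that assembly follows; the field names and example values are illustrative, not from the book:

```python
def build_prompt(goal, intent, background, persona, constraints):
    """Compose a structured prompt from the 'coordinates' above."""
    return "\n".join([
        f"You are {persona}.",
        f"I want to learn: {goal}",
        f"My underlying intent: {intent}",
        f"Relevant background about me: {background}",
        f"Please keep the answer: {constraints}",
    ])

prompt = build_prompt(
    goal="how GPS trilateration works",
    intent="writing an explainer for high-school students",
    background="I have no physics training",
    persona="a patient science teacher",
    constraints="under 300 words, with one everyday analogy",
)
```

The point is not the exact wording but that each coordinate narrows the space of plausible responses, just as GPS coordinates narrow a location.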

Law is code

Lawrence Lessig observed that in the real world four distinct constraints regulate human behavior: laws, norms, markets, and architecture. On the internet, it was the same, except that in the medium’s early years, architecture, in the form of code, played an outsized role. On the internet, code was law.

Law is a command backed up by the threat of a sanction.

Even today, smart contracts that run on blockchains like Ethereum and Solana must be written in deterministic, rules-based code that always produces the same outputs when given specific inputs. However, it’s also now possible to write contracts that incorporate machine-learning algorithms. Contracts as code could become even more flexible and adaptive than human-written contracts (and laws).

While laws and social norms provide the framework, what matters even more is how readily the public embraces them. Laws and norms work because we choose them and consent to them.

Consent of the governed, or the implicit agreement that citizens make to trade some potential freedoms for the order and security states can provide, isn’t a binding agreement. It’s a proposition in eternal flux, forever being earned and validated.

If the long-term goal is to integrate AI safely and productively into society instead of simply prohibiting it, then citizens must play an active and substantive role in legitimizing AI.

Networked autonomy

When states began issuing driver’s licenses in the early 1900s, baseline driving competence began to increase.

We enjoy some liberties because they are hard to regulate. We enjoy other liberties because they’re difficult to exercise.

Life in the land of the free has become an endless odyssey of low-key administrative tyranny and casual surveillance.

As new technologies diffuse through societies, new regulations and new norms follow, and these changes impact our evolving conceptions of freedom.

AI, in turn, will also shape our conceptions of freedom.

As Mustafa Suleyman writes in The Coming Wave, “democratizing access (to highly capable artificial intelligence) necessarily means democratizing risk”.[2]

When nation-states are no longer the only entities capable of launching nation-state-level attacks, there’s an obvious rationale for new levels of regulation and surveillance to reduce the possibility of such occurrences.

John Stuart Mill asserted that individual freedom was essential, not simply as an end in itself, but because of how it can contribute to the overall well-being of society. What Mill understood was that thriving people lead to thriving communities. In essence what he was arguing for was a kind of networked autonomy. Operating individually, the parts are strong.

The United States of A(I)merica

In response to the Luddites, Parliament passed the Frame Breaking Act of 1812, which made destroying machines a crime punishable by death. The last Luddite attack came in 1816. What if history had taken a different turn?

What if England had instead passed a strict Human Dignity Act requiring assessment of new technologies before their introduction? Other countries would have pursued industrialization in full, embracing automated modes of production, and England’s textile export market would have collapsed.

Every country that embraces AI in strategic and well-executed ways will likely see substantial gains in productivity and efficiency.

Just a few years ago, AI development was largely a two-country race: the US versus China.

According to Jensen Huang, this is the beginning of a new industrial revolution, one centered on the production of intelligence.

AI infrastructure becomes mission critical to national interests.

Instead of a Congress full of lawyers with their legal expertise, we’ll need more legislators with expertise in technology and engineering. We also need elected officials who understand that the people they serve have become accustomed to the agency, choice, and convenience delivered by the technological advances of the past twenty-five years.

Government 2.0 is described by Tim O’Reilly as a model in which government serves as a platform, facilitator, and convener of civic action rather than just a service provider and top-down issuer of laws.

Is the best possible future one in which we envision AI as an extension of individual human wills? A shared purpose in engaging with AI is necessary, shared between nation and individual.

You can get there from here

A few fundamental principles:

  • Designing for human agency is the key to producing broadly beneficial outcomes for individuals and societies alike.
  • When agency prevails, shared data and knowledge become catalysts for individual and democratic empowerment, not control and compliance.
  • Innovation and safety are not opposing forces, but rather synergistic ones.
  • Our collective use of AI will have compounding effects and will lead to a new era of superagency.

The key is iterative deployment in pursuit of the best future that can prevent worse futures.

An exploratory, adaptive, forward-looking mindset literally opens new worlds of solutions to pursue, strategies to enact, and, in the case of AI, intelligences to apply in novel ways and contexts.


[1] The Naked Society, p. 31

[2] The Coming Wave, p. 192
