
Max Tegmark: Life 3.0, Being Human in the Age of Artificial Intelligence

Life is simply a process that can retain its complexity and replicate. What’s replicated isn’t matter (made of atoms) but information (made of bits) specifying how the atoms are arranged. We can think of life as a self-replicating information-processing system whose information (software) determines both its behavior and the blueprints for its hardware.

Our Universe is 13.8 billion years old. Life on Earth is about 4 billion years old. That was when life first developed into intelligent agents – entities that collect information about their environment through sensors and then process this information to decide how to act back on their environment.

Life comes in three stages:

  • Life 1.0 – can replicate and survive (simple biological)
  • Life 2.0 – can design its software (cultural)
  • Life 3.0 – can design its hardware (technological)

When talking about general AI and the ability of machines to surpass human cognitive abilities, there are two main questions: when will it happen, and will it be good for humanity? We can divide people into five categories regarding their stance on these two questions: techno-skeptics (Andrew Ng – Baidu), Luddites, the beneficial-AI movement (Stuart Russell), digital utopians, and virtually nobody (the category of those who believe general AI will arrive within just a few years).

Authors who pioneered research into the safe use of AI include Alan Turing, Irving J. Good, Eliezer Yudkowsky, Michael Vassar and Nick Bostrom.

Intelligence is the ability to accomplish complex goals. Today we humans are better across a broad range of complex activities, while machines outperform us in a small but growing number of narrow domains. The holy grail of AI research is to build artificial general intelligence – AGI. The term was popularized by Shane Legg, Mark Gubrud and Ben Goertzel.

The conventional wisdom among artificial intelligence researchers is that intelligence is ultimately all about information and computation, not about flesh, blood or carbon atoms. This means there is no fundamental reason why machines can't one day be at least as intelligent as we are.

For information to live on, we need memory. Memory devices can be very different, but they all need long-lived states that can encode information until it is needed. Importantly, no matter what memory device or medium we use, the information carried is the same: information can take on a life of its own, independent of its physical substrate. A computer retrieves information by specifying where it is stored (location-based search), while we retrieve information from our brain by specifying something about what is stored. The latter kind of memory system is called auto-associative.
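To make this distinction concrete, here is a minimal Python sketch (the stored memories and the word-overlap similarity are invented for illustration, not taken from the book): a dictionary retrieves content by address, which is location-based, while the toy recall function retrieves the best match to a partial cue, which is auto-associative.

```python
# Location-based retrieval: you must know the address to get the content.
ram = {0x10: "grandmother's face", 0x20: "first day of school"}
print(ram[0x20])  # look up by address

# Auto-associative retrieval: a partial or noisy cue is enough to recover
# the closest stored item (a crude stand-in for how brains recall memories).
stored = ["grandmother's face", "first day of school", "smell of rain"]

def recall(cue: str) -> str:
    """Return the stored item sharing the most words with the cue."""
    overlap = lambda item: len(set(cue.split()) & set(item.split()))
    return max(stored, key=overlap)

print(recall("rain smell"))  # look up by (partial) content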

Computation is the transformation of one memory state into another. A computation takes information and transforms it, implementing what mathematicians call a function. Alan Turing argued as early as 1936 that any machine able to perform a certain basic set of operations can perform any computation, given enough resources, so computation too is a substrate-independent activity.
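A classic illustration of this universality, sketched below with invented helper names, is that a single basic operation such as NAND suffices to build all other Boolean logic and hence, with enough gates and memory, any computation:

```python
# Every Boolean function can be built from NAND alone -- a concrete instance
# of "a basic set of operations can perform any computation".

def nand(a: int, b: int) -> int:
    return 1 - (a & b)

def not_(a: int) -> int:          # NOT built from NAND
    return nand(a, a)

def and_(a: int, b: int) -> int:  # AND built from NAND
    return not_(nand(a, b))

def or_(a: int, b: int) -> int:   # OR built from NAND (De Morgan)
    return nand(not_(a), not_(b))

def xor_(a: int, b: int) -> int:  # XOR built from the gates above
    return and_(or_(a, b), nand(a, b))

for a in (0, 1):
    for b in (0, 1):
        print(a, b, "-> AND", and_(a, b), "OR", or_(a, b), "XOR", xor_(a, b))
```

The same truth tables could be produced by relays, transistors or neurons; only the pattern of operations matters, which is the sense in which computation is substrate-independent.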

In short, computation is a pattern in the spacetime arrangement of particles, and it is not the particles but the pattern that really matters. Matter doesn't matter: hardware is matter, software is pattern.

Even though any computation can be built up as a combination of simpler computations, it is learning, the ability to self-improve, that is the most fascinating aspect of general intelligence. Neural networks, as a tool for learning, have transformed both biological and artificial intelligence and now dominate machine learning. A neural network is a set of neurons connected by synapses, and learning is the process of updating those synapses. Just as computers gain efficiency from parallel and recurrent computation, so does the brain: the network of neurons in our heads is recurrent, letting information from our eyes, ears and other senses affect the signals sent out to our muscles.
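As a small, hedged illustration of "learning is updating synapses", here is a sketch of a single artificial neuron whose two weights and bias are nudged by gradient descent until it reproduces the OR function; the learning rate, task and step count are arbitrary choices for the example, not taken from the book.

```python
import math, random

# One neuron with two inputs: output = sigmoid(w1*x1 + w2*x2 + b).
# "Learning" is nothing more than repeatedly nudging the weights (synapses)
# so the output moves toward the desired target.

random.seed(0)
w1, w2, b = random.random(), random.random(), random.random()
data = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 1)]  # the OR function
lr = 1.0

sigmoid = lambda z: 1 / (1 + math.exp(-z))

for step in range(2000):
    (x1, x2), target = data[step % len(data)]
    out = sigmoid(w1 * x1 + w2 * x2 + b)
    err = out - target               # derivative of squared error w.r.t. the output
    grad = err * out * (1 - out)     # chain rule through the sigmoid
    w1 -= lr * grad * x1             # update each "synapse"
    w2 -= lr * grad * x2
    b  -= lr * grad

for (x1, x2), target in data:
    print((x1, x2), round(sigmoid(w1 * x1 + w2 * x2 + b), 2), "target", target)
```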

One technique of machine learning, and of learning in general, is positive reinforcement: receiving rewards pushes you to repeat the actions that brought them (a minimal sketch follows the list below). An extension of this technique is deep reinforcement learning. Deep learning is an upgrade of GOFAI – good old-fashioned artificial intelligence. Artificial intelligence is moving into areas where we did not expect to see it before:

  • Creativity, intuition, strategy
  • Natural language – AI can spot patterns and learn from massive data on proper usage, but it still cannot understand what words mean
  • Expansion creates opportunities and challenges
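
Returning to the positive-reinforcement idea above, the sketch below shows tabular Q-learning on an invented five-state corridor where the agent is rewarded only for reaching the right end; the environment, the +1 reward and all hyperparameters are assumptions made up for illustration. Deep reinforcement learning replaces the value table with a neural network.

```python
import random

# Tiny corridor: states 0..4, actions 0 = left, 1 = right.
# Reaching state 4 yields reward +1 and ends the episode.
random.seed(0)
N_STATES, ACTIONS = 5, (0, 1)
alpha, gamma, epsilon = 0.5, 0.9, 0.1
Q = [[0.0, 0.0] for _ in range(N_STATES)]   # the value table being learned

def greedy(s):
    best = max(Q[s])
    return random.choice([a for a in ACTIONS if Q[s][a] == best])

def step(state, action):
    nxt = max(0, state - 1) if action == 0 else min(N_STATES - 1, state + 1)
    return nxt, (1.0 if nxt == N_STATES - 1 else 0.0), nxt == N_STATES - 1

for episode in range(200):
    s, done = 0, False
    while not done:
        a = random.choice(ACTIONS) if random.random() < epsilon else greedy(s)
        nxt, r, done = step(s, a)
        # The reward pulls up the value of the action that led toward it.
        Q[s][a] += alpha * (r + gamma * max(Q[nxt]) - Q[s][a])
        s = nxt

# Values grow for states closer to the reward (the terminal state stays 0).
print([round(max(q), 2) for q in Q])
```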

As AI develops, takes over ever more complex tasks and enters areas where the consequences of mistakes can be fatal – automated transport, control of medical processes, nuclear power – we need to make sure it becomes ever more robust and bug-free. We also need to build unbiased AI. By doing so we could transform even areas like the legal system, providing quick, transparent, unbiased robojudges.

There are four main areas of AI safety we need to consider:

  • Verification
  • Validation
  • Security
  • Control

Two areas where we need to implement AI particularly carefully are weapons, especially autonomous weapons outside human control, and the replacement of jobs and creation of wealth by AI.

If we want to achieve human-like general artificial intelligence, we can either adapt software to better match today's computers or build brain-like hardware (rapid progress is being made on so-called neuromorphic chips).

When we think about the development of AI and the threat of AI controlling humans, the question of what that development will look like is also a question about the natural state of life in our cosmos: unipolar or multipolar? Cooperation and complexity were the drivers of human development. As the author explains very clearly: “The branch of mathematics known as game theory elegantly explains that entities have an incentive to cooperate where cooperation is a so-called Nash equilibrium: a situation where any party would be worse off if they altered their strategy. To prevent cheaters from ruining the successful collaboration of a large group, it may be in everyone’s interest to relinquish some power to a higher level in the hierarchy that can punish cheaters: for example, people may collectively benefit from granting a government power to enforce laws, and cells in your body may collectively benefit from giving a police force (immune system) the power to kill any cell that acts too uncooperatively (say by spewing out viruses or turning cancerous). For a hierarchy to remain stable, its Nash equilibrium needs to hold also between entities at different levels: for example, if a government doesn’t provide enough benefit to its citizens for obeying it, they may change their strategy and overthrow it. In a complex world, there is a diverse abundance of possible Nash equilibria, corresponding to different types of hierarchies. Some hierarchies are more authoritarian than others. Some hierarchies are held together mainly by threats and fear, others mainly by benefits.”[1]
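To make the Nash-equilibrium idea concrete, here is a small sketch that checks every strategy profile of a two-player game; the payoff numbers form an invented prisoner's-dilemma-style example, not anything from the book. A profile is an equilibrium exactly when neither player can gain by unilaterally switching strategy.

```python
# payoffs[(row_strategy, col_strategy)] = (row player's payoff, column player's payoff)
# Strategies: 0 = cooperate, 1 = defect (an invented prisoner's-dilemma-like example).
payoffs = {
    (0, 0): (3, 3),
    (0, 1): (0, 5),
    (1, 0): (5, 0),
    (1, 1): (1, 1),
}

def is_nash(r, c):
    """True if neither player would be better off switching strategy alone."""
    row_ok = all(payoffs[(r, c)][0] >= payoffs[(alt, c)][0] for alt in (0, 1))
    col_ok = all(payoffs[(r, c)][1] >= payoffs[(r, alt)][1] for alt in (0, 1))
    return row_ok and col_ok

for profile in payoffs:
    print(profile, "Nash equilibrium" if is_nash(*profile) else "not an equilibrium")
```

In this toy game mutual defection is the only equilibrium even though mutual cooperation pays both players more, which is exactly the kind of situation where everyone benefits from granting a higher level of the hierarchy the power to punish cheaters.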

The development of AI is not something that simply happens to us humans. We have control over its development, and we need to start thinking about what kind of outcome we prefer and how to steer things in that direction. When we ask ourselves where AI development should go, we need to answer these questions:

  • Do you want there to be superintelligence?
  • Do you want humans to still exist, be replaced, cyborgized, uploaded and/or simulated?
  • Do you want humans or machines in control?
  • Do you want AI to be conscious or not?
  • Do you want to maximize positive experiences, minimize suffering or leave this to sort itself out?
  • Do you want life spreading into the cosmos?
  • Do you want civilization striving toward a greater purpose that you sympathize with, or are you OK with future life forms that appear content even if you view their goals as pointlessly banal?

When asking all these questions, possible AI scenarios emerge:

  • Libertarian utopia
  • Benevolent dictator
  • Egalitarian utopia
  • Gatekeeper
  • Protector god
  • Enslaved god
  • Conquerors
  • Descendants
  • Zookeeper
  • 1984
  • Reversion
  • Self-destruction

In a future world where minds get uploaded and copied, the central units of life aren’t minds but experiences. Future life that reaches the technological limit will mainly need one fundamental resource, baryonic matter – anything made up of atoms or their constituents (quarks and electrons) – since technology can rearrange atoms into almost anything. To reach those possibilities we would need to redesign our systems for acquiring energy and for communication. Because the ultimate limits of computation lie so far beyond what today’s supercomputers achieve, developing such technology would open possibilities we can today only imagine. Once that happens, humanity can look for more matter by expanding its horizons into space, chasing ever more particles.

If, in the future, we expand into space, clash with different civilizations and push technology to its limits, then we could face battles not of weapons but of ideas. Without technology, our human extinction is imminent in the cosmic context of billions of years.

One of the most important questions in AI development is whether we should give AI goals and, if yes, what kind of goals. Goal-oriented behavior is hard-wired into the laws of physics: even nature tries to optimize something, and its main “goal” is increasing entropy. But this goal is not the final state of everything, since life fights back, adjusting itself and using its ability to reduce its own entropy by increasing the entropy around it. A living organism is an agent of bounded rationality that doesn’t pursue a single goal, but instead follows rules of thumb for what to pursue and avoid. Our human minds perceive these evolved rules of thumb as feelings, which usually (and often without us being aware of it) guide our decision making toward the ultimate goal of replication. Strictly speaking, human behavior doesn’t have a single well-defined goal at all. When we talk about goal orientation, we need to distinguish goal-oriented design from goal-oriented behavior. When we build AI and assign goals to it, it is extremely important to align them with ours: we would like AI to learn our goals, adopt them and retain them. Once we set an AI to pursue its ultimate goal, we can actually predict which sub-goals it will work on in order to preserve and improve its chances of achieving that ultimate goal.

If we frame it this way, the sub-goals form a pyramid:

  • Ultimate goal
    • Capability enhancement
      • Better hardware
        • Self-preservation
        • Resource acquisition
      • Better software
      • Better world model (truth)
        • Information acquisition
        • Curiosity
    • Goal retention
The ethical problem and the goal-alignment problem are crucial ones that need to be solved before any superintelligence is developed. Goals can be independent of a system’s intelligence: intelligence is only the ability to accomplish complex goals, regardless of what those goals are.

Consciousness is subjective experience. When we talk about the idea of a non-physical force in humans, a soul or anima, we need to come back to the basic fact that our bodies are nothing but quarks and electrons, which move according to physical laws. If future technology can track all our particles and show that they move exactly according to physical laws, the idea of a non-physical soul driving them will be refuted. If, on the other hand, a force outside known physical laws is found to move them, this new entity can be studied just as we have studied new fields and particles in the past. Consciousness is the CEO of our brain: it works only on the most complex questions, does not handle the lower, automated levels, but can, if it wants to, check how those low levels are run. Consciousness is an emergent phenomenon, with properties above and beyond those of its particles. The Italian neuroscientist Giulio Tononi has proposed a quantity he calls “integrated information”, denoted by the Greek letter Phi, which basically measures how much the different parts of a system know about each other. His theory is called “integrated information theory”. Applied to consciousness, it suggests that consciousness is the way information feels when it is being processed in certain complex ways.
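Tononi’s actual Phi is defined quite differently and is notoriously hard to compute, but the intuition of “how much different parts of a system know about each other” can be illustrated with ordinary mutual information between two halves of a toy system; the two-bit probability distributions below are made up for illustration and are only a loose proxy, not integrated information theory proper.

```python
import math
from collections import defaultdict

def mutual_information(joint):
    """Mutual information (in bits) between parts A and B of a tiny system,
    given a dict {(a, b): probability}."""
    pa, pb = defaultdict(float), defaultdict(float)
    for (a, b), p in joint.items():
        pa[a] += p
        pb[b] += p
    return sum(p * math.log2(p / (pa[a] * pb[b])) for (a, b), p in joint.items() if p > 0)

# Two bits that always agree: each part "knows" the other's state completely.
integrated = {(0, 0): 0.5, (1, 1): 0.5}
# Two independent fair bits: the parts know nothing about each other.
independent = {(a, b): 0.25 for a in (0, 1) for b in (0, 1)}

print(mutual_information(integrated))    # 1.0 bit
print(mutual_information(independent))   # 0.0 bits
```

On this toy measure the perfectly correlated system is maximally “integrated”, while the independent one is not, echoing the integration principle listed below.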

Coming back to AI and its potential for consciousness, or experience-based activity, the author coins the term sentronium for the most general substance that has subjective experience (is sentient). He believes that if consciousness is the way information feels when it is processed in certain ways, then it must be substrate-independent. He defines four principles that information processing needs to obey in order to be conscious:

  • Information principle – A conscious system has substantial information-storage capacity.
  • Dynamics principle – A conscious system has substantial information-processing capacity.
  • Independence principle – A conscious system has substantial independence from the rest of the world.
  • Integration principle – A conscious system cannot consist of nearly independent parts.

We humans have built our identity on being Homo sapiens (defined by the ability to think intelligently), the smartest entities around. As we prepare to be humbled by ever smarter machines, the author suggests we rebrand ourselves as Homo sentiens (defined by the ability to subjectively experience qualia).

The future is bringing changes, and we have to act now to be ready for them, or we will stay on the path Isaac Asimov described: “The saddest aspect of life right now is that science gathers knowledge faster than society gathers wisdom.” Changes in education, legal systems, economic models, international conflict resolution and security measures are needed now, before general AI is available.


[1] Explained in the book on pages 152–153.
