Superintelligence

By Nick Bostrom - Read: June 16, 2024 - Rating: 7/10

A provocative book on the ethics and pathways of superintelligence, an advanced form of AI extending far beyond human abilities. I often had to pause to reflect on Bostrom's analysis, which opened my eyes to the safety issues surrounding superintelligence — something to be taken much more seriously than we might think.

Bostrom clearly exposes the potential societal, economic, and civilizational impacts that superintelligence might have on our future. Though the reading wasn't easy, I enjoyed his interdisciplinary approach, incorporating mathematics, neuroscience, and philosophy.

My Notes

Superintelligence: any intellect that greatly exceeds the cognitive performance of humans in virtually all domains of interest.

Superintelligence should not be equated with AGI but seen as a more advanced form, extending beyond an intelligence that merely matches or slightly exceeds human level at cognitive tasks. In other words, it significantly surpasses human performance across the board.

Bostrom distinguishes three types of superintelligence:

  • Speed superintelligence: an intellect that can do all the tasks a human does, but much faster.
  • Collective superintelligence: a system that achieves superior performance by aggregating large numbers of smaller intelligences, such as humans and machines.
  • Quality superintelligence: a system as fast as a human mind but qualitatively smarter, reaching possible but non-realized cognitive talents.

The upsides of such a superintelligence would be tremendous on both the hardware and the software side. Hardware would allow increased computation and communication speed, along with far greater storage capacity. Software would offer vast possibilities for duplicating and editing minds, and for performing important modern tasks such as engineering and programming.

Paths to superintelligence include:

  • AI: machines perform human-level tasks (GOFAI, ML)
  • Whole brain emulation: intelligent software replicating the foundational layers and structure of the biological brain (scan → translation → simulation)
  • Biological cognition: enhancing the functioning of biological brains (diet optimization, smart drugs, gene manipulation)
  • Brain-computer interface: performing high-bandwidth tasks with brain implants

The duration of the takeoff depends on two variables: optimization power and recalcitrance.

  • Optimization power is how quickly an intelligence can enhance its own capabilities
  • Recalcitrance is how resistant that intelligence is to such enhancements (the difficulty of improving the system further as it becomes more capable)

We can define the rate of change as follows: Rate of change in intelligence = Optimization power / Recalcitrance

High recalcitrance thus slows down the path to superintelligence, as it makes AI improvements more difficult.

Recalcitrance is lower for machine intelligence than human intelligence because of the physical and internal properties of artificial computing systems. On the other hand, optimization power can be easily increased with more computers.

It is likely that an intelligence explosion will unfold either very rapidly or, at most, at a moderate pace.
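The dynamic above can be made concrete with a toy simulation (my own illustration, not from the book), assuming the relation dI/dt = optimization power / recalcitrance, where both quantities may depend on the system's current intelligence. All functions and numbers below are hypothetical.

```python
def takeoff(optimization_power, recalcitrance, steps=100, dt=0.1):
    """Numerically integrate Bostrom's rate-of-change relation:
    dI/dt = optimization_power(I) / recalcitrance(I).
    Both arguments are functions of the current intelligence level I."""
    intelligence = 1.0
    for _ in range(steps):
        rate = optimization_power(intelligence) / recalcitrance(intelligence)
        intelligence += rate * dt
    return intelligence

# If optimization power grows with intelligence (the system improves itself)
# while recalcitrance stays flat, growth compounds: a fast takeoff.
fast = takeoff(lambda i: i, lambda i: 1.0)

# If recalcitrance rises faster than optimization power, growth stays tame.
slow = takeoff(lambda i: i, lambda i: i * i)

print(fast > slow)  # the self-reinforcing regime ends up far ahead
```

The point of the sketch is only the qualitative contrast: whether the explosion is fast or slow depends entirely on how recalcitrance scales relative to optimization power.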

Bostrom advances two theses about how a superintelligence would shape the future based on its goals:

  • The Orthogonality Thesis (orthogonality between intelligence and final goals): a superintelligent agent’s intelligence is independent of the nature of its goals. An AI can have any kind of goal no matter how intelligent it is.
    • Predictability through design (from engineers)
    • Predictability through inheritance (from whole brain emulation)
    • Predictability through convergent instrumental reasons (inferring from its instrumental values)
  • The Instrumental Convergence Thesis: regardless of the superintelligence’s final goals, it will very likely pursue certain intermediate goals because they are useful for attaining the final ones.
    • Self-preservation
    • Resource acquisition
    • Cognitive enhancement

Since it is impossible to predict all the different scenarios that might unfold in the future, instilling some established values in these superintelligent AIs can help ensure good outcomes over bad ones.

  • Reinforcement learning: methods using reward signals to stimulate the system to pursue good values
  • Value accretion: an agent might first experience human life so that it acquires the good values
  • Motivational scaffolding: gradually shaping the system's goals during its learning process
  • Institution design: structuring institutions in ways that favor the alignment of a superintelligence with our values.