Moving Humanity Forward

The content below is a modified excerpt from a tell-all book. The book will cover the backstory of the AGI research, how it evolved through multiple startups, the geopolitical impact of the technology, and the vision for how AGI can transform life on Earth, from education and economics to criminal justice and global governance.

Presale of the book starts in Q2 2022. Subscribe to our newsletter:



In this edition, we’ll briefly discuss the Theory of General Intelligence. This theory has been accepted for oral presentation at the 2022 International Conference of Advanced Research in Applied Science, Engineering, and Technology (ICARASET).

“A fundamental problem in artificial intelligence is that nobody really knows what intelligence is.”

That’s the opening sentence of “Universal Intelligence: A Definition of Machine Intelligence” (link), authored by Shane Legg and Marcus Hutter. If those names sound familiar, they should be. Legg is a co-founder of DeepMind and Hutter is a senior scientist at DeepMind - two highly accomplished individuals with a long track record of researching artificial general intelligence.

While both have done exemplary work in this field, this paper, in my opinion, is poor. The fact that intelligence is not defined scientifically is a known problem, and, as Legg and Hutter are attempting, we should develop a definition.

A definition, however, needs to be based on real science. And that is where this paper fails. As the authors say in the abstract:

In this paper we approach this problem in the following way: We take a number of well known informal definitions of human intelligence that have been given by experts, and extract their essential features. These are then mathematically formalised to produce a general measure of intelligence for arbitrary machines. We believe that this equation formally captures the concept of machine intelligence in the broadest reasonable sense.

The authors do not propose a falsifiable theory of intelligence. Instead, they simply try to convert informal definitions to mathematical equations. On page 11, the authors list the ten definitions they used. The first two are:

  1. It seems to us that in intelligence there is a fundamental faculty, the alteration or the lack of which, is of the utmost importance for practical life. This faculty is judgement, otherwise called good sense, practical sense, initiative, the faculty of adapting oneself to circumstances.

  2. A global concept that involves an individual’s ability to act purposefully, think rationally, and deal effectively with the environment.

The phrase “informal definitions,” as the authors use it in the abstract, could mean “simplified language” for some official, complex definition. But since no such definition exists, “informal definition” here refers to opinions. Opinions are certainly not scientific and not falsifiable.

Mathematical equations are great, but mathematics is a language used to describe science. If what you are trying to describe is not scientific, such as an opinion, then the equation is pointless.

I know the above is a harsh judgment of their paper. Legg and Hutter are amazing scientists. My harsh judgment is not on them as individuals or their careers, only on this specific paper.

In their defense, they never claim to scientifically define intelligence. Instead, they present their own informal definition of intelligence:

Intelligence measures an agent’s ability to achieve goals in a wide range of environments.

Clearly, the definition does not pass the high bar of scientific definitions. But because Legg and Hutter only claim an informal definition, they can sidestep the normal scientific process.

Neither the authors’ own definition nor the experts’ informal definitions are ones most people would disagree with. But science requires a more in-depth investigation.

Let’s start our journey from the authors’ own informal definition. Most people would agree that a person who’s more efficient at achieving a wide variety of goals tends to be recognized as more intelligent than a person who constantly struggles to achieve the same goals.

The first question we need to ask is what is a goal? What does it mean to achieve a goal? To explore these questions, let’s turn to a thought experiment I call “The Coffee Cup Experiment”.

In this thought experiment, we will have two participants, Jane and Joe, take on a task. The goal of the task is to move the coffee cup to the red dot.

Jane goes first and moves the cup in a straight motion as shown below.

Next Joe goes; and let’s assume Joe is truly attempting to achieve the goal, although, as you can see, not very efficiently.

Jane and Joe both achieved the stated goal of moving the coffee cup to the red dot, but they did it very differently. While Jane moved the cup along the shortest and quickest path, Joe struggled to figure out how to move the cup and ended up moving it all over the place before finally settling on the red dot.

If we had to judge Jane and Joe on this task only, I believe we can all agree that Jane would be considered the more intelligent person. That doesn’t mean she is without a doubt more intelligent than Joe. Joe might be far better at achieving other goals than Jane. But limiting our judgment to this task only, Jane wins!
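The Coffee Cup Experiment can be made concrete with a toy sketch. Everything here is my own illustration, not part of the theory: each participant's attempt is recorded as a sequence of moves on a grid, and we simply count how many moves it takes the cup to first reach the red dot.

```python
# Toy sketch of the Coffee Cup Experiment (the grid, paths, and
# positions are hypothetical illustrations, not from the theory).
START = (0, 0)
GOAL = (3, 0)  # the "red dot"

def moves_to_goal(path):
    """Count the moves until the cup first reaches the goal position."""
    pos = START
    for i, step in enumerate(path, start=1):
        pos = (pos[0] + step[0], pos[1] + step[1])
        if pos == GOAL:
            return i
    return None  # goal never reached

# Jane: a straight line. Joe: wanders before settling on the red dot.
jane = [(1, 0), (1, 0), (1, 0)]
joe = [(0, 1), (1, 0), (1, 0), (0, 1), (1, 0), (0, -1), (0, -1)]

print(moves_to_goal(jane))  # 3
print(moves_to_goal(joe))   # 7
```

On this single task, Jane's three moves beat Joe's seven, which matches the intuition that we would judge her the more efficient, and hence more intelligent, on this task alone.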

They both achieved their goal. But how do we explain it generically? We can say that the coffee cup is at the predefined destination, but that explanation cannot be used to describe other goals.

Instead, let’s go deep, deep into nature. If we look at the coffee cup as a collection of particles, we can say that achieving the goal means moving the particles to a new location in space. Once the particles are at the new spatial coordinates, we interpret it as having achieved a goal.

Think about it! Is there a goal that can be achieved without moving a single particle anywhere in space? Anything we want to achieve requires us to rearrange the SpaceTime continuum.

This is often called increasing the entropy, but that is not quite accurate. Entropy measures the disorder and randomness in a system; achieving a goal is about moving particles to very specific locations.

If a goal is defined as having changed the location of particles, then what is intelligence?

To move the particles, we have to create a series of cause-effect pairs - in other words, we have to create causality chains - and not just for the particles we are moving. To move the coffee cup, we have to use our hands, move them to the cup, grab the cup, and push it.

If your goal is to get to Hawaii, then you need to use your mobile phone to order an Uber. That Uber takes the particles making up your body to the airport and an airplane moves them to the island.

Everything becomes a resource you can use to create the right causality chains. Understanding how these causality chains interact, and what results from them, becomes critical in predicting the best way to move particles to achieve a predefined goal.

That is how the Theory of General Intelligence defines intelligence:

The process of changing the composition of SpaceTime

Creating causality chains is creating a process for moving particles.

The reason why we believe Jane is more intelligent than Joe is that her causality chain is shorter than Joe’s. The shorter the chain - the fewer cause-effect pairs needed to achieve a goal - the more intelligent that person is considered.

This is a simple and elegant way of defining intelligence. It is easy to test, and it uses fundamental particles to describe intelligence - meaning, you’ll be hard-pressed to find a more fundamental way of describing intelligence.
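To make "easy to test" a bit more concrete, here is a minimal sketch. The chain contents and the scoring rule are my own hypothetical illustration, not taken from the theory: each participant's attempt is written down as a list of cause-effect pairs, and participants are ranked by chain length.

```python
# Hypothetical sketch: rank participants by causality-chain length.
# Fewer cause-effect pairs to reach the same goal => judged more
# intelligent on that task. The chains below are invented examples.
chains = {
    "Jane": ["reach for cup", "grasp cup", "slide cup to red dot"],
    "Joe": ["reach for cup", "grasp cup", "push cup left", "push cup up",
            "circle cup around", "push cup right", "nudge cup onto red dot"],
}

# Sort ascending by chain length: the shortest chain ranks first.
ranking = sorted(chains, key=lambda name: len(chains[name]))
for name in ranking:
    print(f"{name}: {len(chains[name])} cause-effect pairs")
```

Under this toy scoring, Jane ranks first because her chain has three cause-effect pairs versus Joe's seven, mirroring the verdict from the thought experiment.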

There is a lot more to the theory. We are just scraping the surface in this newsletter edition.

The next question is: how can we implement this as a computer algorithm? Before we can answer that, we need to interpret the theory. The difference between the Kimera algorithm and the new GEA algorithm lies in the interpretation of this theory.

In the next edition we will discuss the interpretation of the Theory of General Intelligence!

Read Past Editions