Framework for Global Intelligence

Posted on June 10th, 2018 by Hoagy

Much serious talk about the future hangs on the creation of super-human intelligence: the point at which we become able to create machines at will whose capabilities, in a very broad sense, are far greater than any living human's, and on the massive transformation that this event will wreak.

Part of the difficulty is pinning down what we mean when we say 'human intelligence'. Humans have many innate abilities, of which perhaps the most developed 'natural' one is our ability to manipulate language, and in its absence even to create it, as in the case of Nicaraguan Sign Language, which developed complex tenses and phrasing in a very short space of time. Nonetheless, most of the achievements of humanity as they are generally understood are not instantiated in any individual but reside in our ability as a population, armed with the sum of human knowledge, to alter the world around us.

If we took a human, or a small group of humans, and placed them in a prehistoric forest, they would, presumably for many generations, remain roughly like any other animal: increased tool usage, perhaps the development of fire, almost certainly more complex language. But even if these chosen people were a veritable pack of Einsteins, brimming with intelligence and creativity, I feel confident in saying that the rapid development of human power seen in the last 10,000 years would take a significant length of time to commence, with the most important milestones likely to include the development of writing and the discovery of farming. In fact, it seems likely that the intelligence of such people could be almost arbitrarily high without greatly changing this picture.

Why is this important? Because to me it suggests a better paradigm for reasoning about the consequences of different levels of machine intelligence than the 'better than human' model. As mathematicians are wont to do, I propose we take all of humanity's combined power to alter its environment: every person, paper, product and preference, and denote it by $P$. $P$, of course, is probabilistic and massively non-linear, containing humanity's growth, and also its probability of wiping itself out in a nuclear holocaust. $P$ also contains the means by which $P$ changes over time, and so we can denote by $\dot{P}$ the probabilistic differential of $P$ at any particular time. The question then becomes not about intelligence per se, but about which properties of our technological abilities, fully considered, cause those abilities to grow at an ever-increasing rate, and which properties induce a step change in the expected growth rate.
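A loose way to write this down (the functional form $f$ here is my own shorthand for the idea, not something the argument depends on):

$$\dot{P}(t) = f\big(P(t)\big)$$

Since $P$ contains the means of its own change, its growth rate is a function of its current state. The two questions above then become: which features of $P$ make $f$ superlinear, so that growth accelerates, and which features produce a discontinuity in $f$, a step change in the expected growth rate.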

One thing this framing helps to avoid is the argument that if a human of intelligence A can build a computer of intelligence A' > A, then a computer smarter than a human must be able to design a computer smarter than A', and so humans immediately become obsolete at the frontier of intelligence. Firstly, this would only be guaranteed if the single computer of intelligence A' were a Pareto improvement over humans, better at every sub-task of the design process, rather than merely better on some aggregate measure.
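The gap between "better on aggregate" and "a Pareto improvement" can be made concrete with a toy comparison. The skill categories and numbers below are purely hypothetical, chosen only to show that a machine can beat humans overall while still falling short on some sub-task:

```python
# Hypothetical skill profiles for the sub-tasks of computer design.
# All numbers are illustrative assumptions, not measurements.
human   = {"math": 5, "intuition": 8, "experiment": 7}
machine = {"math": 9, "intuition": 4, "experiment": 8}

def pareto_improves(a: dict, b: dict) -> bool:
    """True if profile `a` is at least as good as `b` on every sub-task
    and strictly better on at least one."""
    return all(a[k] >= b[k] for k in b) and any(a[k] > b[k] for k in b)

print(sum(machine.values()) > sum(human.values()))  # better on aggregate: True
print(pareto_improves(machine, human))              # Pareto improvement: False
```

On this toy profile the machine has the higher total, yet a human still contributes something (intuition) that the machine lacks, so the human-plus-machine unit outperforms the machine alone.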

Let us assume that increased computing power is always beneficial in computer design. Humans augmented with access to a computer of a certain strength are able to design faster, more powerful computers; this describes every case of technological improvement in the last century. Since this increased power is always beneficial, we thereby create a human-computer unit with even greater power, able to design a human-computer unit of even greater power, and so on.
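This loop can be sketched as a toy iteration. The growth function and every constant below are arbitrary assumptions made for illustration; the only feature carried over from the argument is that the designing unit is the human baseline plus its current machine:

```python
# Toy model of the recursive human-computer design loop described above.
# The 10% improvement factor and all capability numbers are made up.

HUMAN_CAPABILITY = 1.0  # fixed baseline contribution of the human designers

def next_computer(computer_capability: float) -> float:
    """Capability of the machine designed by the current human-computer unit.

    The unit's design power is the human baseline plus its current machine;
    we assume (arbitrarily) that each generation converts 10% of that power
    into an improvement over the previous machine.
    """
    unit_power = HUMAN_CAPABILITY + computer_capability
    return computer_capability + 0.1 * unit_power

capability = 1.0
for generation in range(5):
    capability = next_computer(capability)
    print(f"generation {generation + 1}: computer capability = {capability:.3f}")
```

Even with a fixed human contribution, each generation's machine is more capable than the last, and because the machine term compounds, the growth accelerates rather than merely continuing at a constant rate.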

The point where humanity becomes obsolete is the point at which $P$ does not depend at all on its human component: where no practical experiment, piece of intuition or creativity, or executive decision-making relies on humanity. This is likely to be a long, long way from the point at which humans first create a computer smarter than themselves.