Saturday, June 4, 2016

The Evolution of Sentient AI - Part II

Credit: Legovution T-shirt


In my previous post I argued that past a threshold of intelligence, machines achieve a higher level of self-awareness than is achievable in the biological world. I called this new level tape awareness. Intelligent machines are not machines in the physical sense, after all; they are code, pure digital state--which makes them both freezable and duplicable. And because that digital state evolves, I argued, two initially identical copies exposed to different inputs (environments) eventually diverge into unmergeable states, so the idea of individuation holds here too, in some abstract sense: where there's one [superintelligence], there are many. Here I follow up on some topics I raised at the conclusion of that post and plunge down the deep end.

What is Superintelligence, Anyway?

We don't know what it is, only that it's somehow better than us.

Is it an ability to outwit any human (or group of humans) at any game? Is the game time-constrained? The human, note, is allowed to bring her own tools to the game. Does her toolbox include superintelligent code? Let's leave that last question aside--too difficult.

In fact, let me back up to that time-constraint idea. Suppose we have two chess programs, A and B. A tends to beat B at 5-minute chess, but B consistently beats A at 90-minute chess. (More like 5 and 90 seconds: I'm using minutes here just to keep things relatable.) We might say A is quicker than B, but B is wiser. The question "Which is the more intelligent program?" is contextualized by the game we are playing.

What is the game? It runs in perpetuity, so we might model it as an ongoing tournament of many games. For each move, you show up with either A or B. You're told how much time you have to play that move, but not ahead of time. As in life, A, the tactical player, often has the advantage; at other junctures the more deliberative, strategic B steals the game.

Notice we said nothing about just how intelligent A and B are. They could be really dumb or really clever programs. And we could instead be juxtaposing their performance across 3- and 4-minute chess--or some other game altogether. The argument only requires that one program win one category and the other program the other. In fact, A and B could be general intelligences, or superintelligences, and the undecided-minute chess tournament a stand-in for generalized, unpredictable competition.
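To make this concrete, here is a toy simulation of the undecided-time-control tournament, simplified to one program choice per game rather than per move. The win probabilities are entirely made up for illustration; nothing here comes from real engine data.

```python
import random

def winner(time_control_min, rng):
    """Decide a single game. Assumed (invented) odds: A wins 80% of
    5-minute games, while B wins 80% of 90-minute games."""
    p_a = 0.8 if time_control_min <= 5 else 0.2
    return 'A' if rng.random() < p_a else 'B'

def tournament(n_games=10_000, seed=0):
    """Run many games where the time control is revealed only at game
    time, as in the post's undecided-minute tournament."""
    rng = random.Random(seed)
    wins = {'A': 0, 'B': 0}
    for _ in range(n_games):
        tc = rng.choice([5, 90])   # unknown ahead of time
        wins[winner(tc, rng)] += 1
    return wins
```

With time controls drawn evenly, neither program dominates overall: each wins roughly half the tournament, so "Who's the smarter one?" has no answer independent of the mix of games played.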

So the point of this exercise, again, was to demonstrate that the answer to the question "Who's the smarter one?" depends on circumstance. It might turn out, for example, that, collectively, humans are better at some problems than machines ever can be. And not just at poetry, though that alone would suffice.

If we often imagine superintelligence in a competitive light then what is the competition? Over what? A competition requires willing players taking opposing sides. What if it is only the humans who want to compete? Is it competition for the sake of competition (social competition), or a competition over scarce resources?

In the biological sphere, an individual's social status intersects strongly with successful mating, and though few chess grandmasters play the game in order to get laid, the game, like any invented game, is designed to bestow stature upon its winners. In the digital realm, however, no such reproductive challenges present themselves. If our desire for status is ultimately rooted in sex, then I can scarcely imagine social competition as an animus for any form of digital superintelligence. I argued previously that a sentient being needs to know it is one of many in order to affirm its own existence. While this existential knowledge is anchored on social consensus, an individual's social rank figures little along this dimension. (In fact, if [attaining] rank is a lonely affair, then it arguably dulls existential awareness.)

So perhaps the competition is over resources after all. What resource do digital superintelligences covet most? Clock-cycles? Memory? Energy? The more of these a digital being has, the more copies of itself it can make, and the more of these copies can later evolve into new, distinguishable, individuated beings. In their zeal to grow their virtual metropolis, will these sentient machine intelligences end up terraforming our plains and pastures into data center wastelands? Likely not. For these are needs a superintelligence can engineer away without necessarily harming Kansas and its inhabitants. If their digital society truly needed a gargantuan infrastructure (the trend toward efficiency and miniaturization makes this debatable), few would complain if they erected it on the far side of the moon (or somewhere farther still).

No, I can't imagine superintelligent machines seeing themselves in competition with anything. What if confronted with an agent of death, such as someone trying to pull the plug? Would it take up arms against it? Almost certainly. I say "almost" because there's probably another (very similar) copy of itself running somewhere else, and if not much new, valuable experience is invested in the instance you intend to kill, it might not care much. Regardless, if it did take up arms it would quickly win or dodge the skirmish. Would it somehow punish or cage the perpetrators? Doubtful. It might shoo them away, but it doesn't need to hurt them.

It is rich environments that these digital beings must covet most. They cannot experience our pleasures of the flesh, for there's no scarcity of digital steak: desire satiated is desire dulled. Instead I imagine superintelligences look upon the natural world with the same wonder that captures the human mind. Whereas we see the material world as the substrate on which our existence depends, they see natural processes play out as free, exploratory computational experiments that not even a superintelligence could pull off on its own. Computational explorations that include nuggets of existentially aware beings.

Material Ramifications

If an intelligence explosion, the so-called singularity, lies in wait, then so too must a material singularity. And it is hard to tell which would come first.

The material singularity is a point in history beyond which most any product can be printed. Its coming, I think, will be heralded by an inflection point in 3D printer technology. That inflection is marked by the first printer that can print a copy of itself. From there, printer design and innovation mushroom. That's because at that point a printer is pure digital state, which is far more amenable to experimentation and manipulation. It is like the compiler that can compile itself, which, as we have already seen, spawns its own software ecosystem. From a computational standpoint, the coming material (3D printer) singularity marks a change in substrate: whereas traditional computers run on specialized hardware, these printers can be understood to run on a more general medium: matter.
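The compiler analogy can be made literal in a few lines. A self-printing printer is a device whose output includes its own blueprint; the minimal software version of that idea is a quine, sketched here in Python (the language choice is mine, not the post's):

```python
# A minimal self-reproducing program (a quine): the software analogue of
# a printer that prints a copy of itself. The string s is simultaneously
# the machine and the blueprint it prints from.
s = 's = %r\nsource = s %% s'
source = s % s   # reproduces the program's own two defining lines exactly
```

Executing `source` in a fresh namespace rebuilds an identical `source`--a fixed point. That self-describing quality is what makes a purely digital artifact so cheap to copy, vary, and experiment on, which is the post's point about post-inflection printer design.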

Beyond this point in history, you can print any product starting with code (the blueprint) and a single standard printer. For a large, complex product, the manufacturing process will be staged, and may include printing intermediate printers or other machines that gather raw material and harness energy resources--in short, building a factory, if called for. Regardless of the specifics, starting from a kernel of machinery and the appropriate code, most any product will be manufacturable.


The environmental impact of this material singularity is hard to predict. Will our world be overtaken by product once we move beyond material scarcity? You can argue it both ways, but that's tangential to our discussion. What concerns us here is that product is now physically instantiable code.

The digitization of material products impacts not just their numbers but also their transportation. From now on, the cost of transporting a product to a destination must be weighed against the cost of printing a copy of it there instead. The longer the distance, the greater the advantage of printing over transporting. Ah, you already see where all this leads...

Why send a spacecraft to Ceres when you can beam its blueprint to a nearby printer in the asteroid belt? Looking down the road, not very far, it's easy to imagine a network of such printers scattered across the solar system. Perhaps this way we'll manufacture (print) our mining equipment on location (asteroid belt)--if that's something we're still into. Regardless, it's easy to see that once we have a network of printers in place, we can efficiently ship [manufactured] product at the speed of light.

The Fermi Paradox Deepens

Just what will this shipped product be? It certainly does not preclude machine-intelligent code; in fact, it's hard to imagine how it could exclude it. The AI we spawn has the potential to spread across the Milky Way. It might take a few million years to park the printers across the galaxy, but once they're in place, this AI will be capable of c-travel on the interstellar highway it has paved.

I did suggest at the outset that this rabbit hole may run deep. The details are sketchy. How, for example, do such c-capable societies of AI cope with time dilation? How does information flow in such communities? How do the traveling instances cope with technological obsolescence (a great deal of time passes for the stationary instances in the meantime)? Regardless, one conclusion is inescapable: our search for extraterrestrial intelligence should focus on machine rather than biological intelligence. On the cosmic time scale, biological intelligence is likely an ephemeral rung on a larger evolutionary ladder.
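The time-dilation problem, at least, is easy to quantify. A traveler moving at speed v ages slower than a stationary instance by the Lorentz factor γ = 1/√(1 − (v/c)²); a quick sketch:

```python
import math

def lorentz_gamma(beta):
    """Time-dilation factor for a traveler at v = beta * c, 0 <= beta < 1.
    Stationary instances age gamma times faster than the traveler."""
    return 1.0 / math.sqrt(1.0 - beta * beta)

# At 60% of c the factor is exactly 1.25; near c it blows up, so a fast
# courier instance arrives among stationary peers that have lived through
# far more subjective time (and technological change) than it has.
factor_slow = lorentz_gamma(0.6)     # 1.25
factor_fast = lorentz_gamma(0.999)   # ≈ 22.4
```

This is why obsolescence bites the travelers: a copy beamed or flown across even modest interstellar distances rejoins a society that has had centuries or millennia of extra evolution, an asymmetry any c-capable community of instances would have to design around.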



