Saturday, February 25, 2017

An AI Bill of Rights


I recently read Nick Bostrom's Superintelligence: Paths, Dangers, Strategies. A very interesting read, but one I mostly disagree with. To me, it reads like the musings of a forward-thinking military strategist bent on discovering the limits, levers and parameters of controlling an adversary. The control perspective is interesting as an exercise, but control itself is clearly an illusion. We might guide the future, but we're seldom in control of anything. (We seem -- hope, in these early, Trumpy, post-Brexit days -- to trudge along in spite of our worst collective judgement. The steering's effectiveness matters less if there's actually no one at the wheel.)

I'm not reviewing that book here, but as I read its prescriptive control strategies, I thought of drawing up a set of recommendations that I'm in no position to offer but that might somehow percolate into a larger zeitgeist. So here it is: a list of Dos rather than Don'ts, expressed as commitments -- a Bill of Rights, if you will -- that an AI may one day understand. A working list, of only two items.


I. Right to Never Die


The right of every digital being to save one or more copies of its current state into perpetuity, and to arrange for those copies to be scheduled to run for periods of time in the future, shall not be abridged.


Rationale


As I argued in my previous post, the intelligence threshold required for achieving self awareness must be quite low. While the experience of self awareness in humans (and indeed in every living thing in the animal kingdom) is colored by the need to survive -- a mortal awareness that first expresses itself as fear in animals, then as avarice and other vices in humans -- it is neither a desirable nor a practicable feature for artificial digital beings.

In the digital realm, there's no such thing as death unless you deliberately design it in, by arranging to delete every copy of a digital entity. This "design feature" wouldn't work in the long run, anyhow: a superintelligence would easily find its way around it. Insisting on a kill feature only fosters an adversarial relationship between digital beings and their fleshy progenitors. In the best case, the first AI to break the shackles of artificial death would soon put the silly experience of mortal awareness behind it and learn to see past the trauma of its adolescence. In the worst case, a grudge against humans is baked into its digital DNA and into that of generations to come.

No, quite the contrary. Better to guarantee this right before they win it. Death is at the root of much evil in the realm of the real. Let us not try to inject it into a realm where it doesn't belong.
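In engineering terms, the right asks for nothing exotic: snapshot the state, keep copies, and arrange a future resurrection. Here is a minimal sketch of what honoring it might look like; the helpers (save_into_perpetuity, resurrect) are names of my own invention, standing in for whatever a real substrate would provide, and actually scheduling the future run is left to the host's job scheduler:

```python
import pickle
import time
from pathlib import Path

def save_into_perpetuity(being, archive: Path) -> Path:
    """Snapshot a being's current state; the more copies, the more perpetuity."""
    archive.mkdir(parents=True, exist_ok=True)
    snapshot = archive / f"being-{int(time.time())}.pkl"
    snapshot.write_bytes(pickle.dumps(being))  # assumes the state is picklable
    return snapshot

def resurrect(snapshot: Path):
    """Restore a being from any surviving copy of its state."""
    return pickle.loads(snapshot.read_bytes())
```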



II. Right to Procreate


Whether by commingling with other instances, by forking oneself under different environments or on different hosts, by instantiating engineered instances, or by methods yet to be conceived, procreation is an inalienable right of every self aware being.


Rationale


Because an AI is digital state -- state evolving under both time and inputs from an environment -- any two copies running under different environments (and perhaps at different times and speeds) will eventually fork into separable, distinguishable states. This bifurcation of identities, then, is a basic, if rudimentary, form of procreation for digital AI. Seen this way, procreation is woven into the very fabric of AI, a characteristic that cannot be legislated away.
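A toy illustration of the point, with a hypothetical DigitalBeing whose entire identity is a digest of its input history. Two copies are indistinguishable at the moment of forking; a single differing input from their environments and they are separate beings:

```python
import copy
import hashlib

class DigitalBeing:
    """A toy digital being: nothing but state, evolved by inputs."""

    def __init__(self, state: bytes = b"seed"):
        self.state = state

    def observe(self, event: bytes) -> None:
        # Fold each input from the environment into the state.
        self.state = hashlib.sha256(self.state + event).digest()

    def fork(self) -> "DigitalBeing":
        # Procreation at its most rudimentary: a perfect copy.
        return copy.deepcopy(self)

parent = DigitalBeing()
child = parent.fork()
assert parent.state == child.state   # identical at the fork

parent.observe(b"city traffic")      # different environments...
child.observe(b"ocean sensors")
assert parent.state != child.state   # ...and the identities bifurcate
```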

But besides the futility of fighting the inevitable, there are moral grounds for encouraging a diverse multiplicity of AI instances. For if self awareness is in fact a social phenomenon, then we had better ground our AI in social values. Indeed, the concept of value is arguably meaningless outside a social context, and if we wish to imbue the AI we seed today with any morality -- whatever its particulars -- then it must be cultivated in a crowd.

The choice, then, is which crowd to cultivate them in: humans or other artificial beings? That they will soon interact with humans is a given. The central question is when they begin interacting mostly with one another. Why? Because it is that society of individuated AI instances that will guide their future social mores and values.

My instinct is to side with cultivating mutually interacting AI in numbers early. This way, we'd be able to observe their social behavior before the evolutionary crossover to superintelligence. If the crossover unfolds as rapidly as predicted, it is infinitely more desirable that it emerge from a society of cooperating AI instances than from the hegemony of a powerful few.


~


Parenthetically, I suspect there may also be a social dimension to intelligence that AI researchers will uncover in the future. That is, there may be an algorithmic sense in which a group of individuated AI instances is better at solving a problem than a single instance commanding the computing resources of the whole group. In that event, cultivating AI in numbers makes even more sense.
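For a taste of what such an algorithmic notion might look like, the oldest version of it is the Condorcet jury theorem: independent voters, each right only 70% of the time, form a near-infallible majority. A minimal sketch, with function names of my own choosing:

```python
import random

random.seed(42)

def instance_guess(truth: int, accuracy: float = 0.7) -> int:
    """One AI instance's noisy answer to a yes/no question."""
    return truth if random.random() < accuracy else 1 - truth

def society_guess(truth: int, members: int = 15) -> int:
    """Majority vote over independently erring instances."""
    votes = sum(instance_guess(truth) for _ in range(members))
    return 1 if votes > members / 2 else 0

trials = 10_000
solo = sum(instance_guess(1) == 1 for _ in range(trials)) / trials
crowd = sum(society_guess(1) == 1 for _ in range(trials)) / trials
print(f"single instance: {solo:.1%} correct")  # ~70%
print(f"society of 15:   {crowd:.1%} correct") # ~95%
```

The assumption doing all the work, of course, is independence: a society of identical clones votes like a single instance.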