Friday, June 30, 2017

Revisiting the Zoo



Projections about the near-term trajectory of future technologies suggest revisiting the Zoo Hypothesis (Ball, 1973) for the so-called Fermi Paradox. In this essay I recast the hypothesis in an updated context, with an eye towards machine intelligence and information transfer as a means of interstellar travel (Scheffer, 1994).

Edit Note: While I prefer not to edit my blog posts (save fixing typos, grammar, etc.), I've decided to do it here. I'll try to stick to adding new, clearly marked sections (rather than editing existing ones).

Motivation


Since we don't know exactly what to look for, the search for extraterrestrial intelligence necessarily involves a good deal of conjecture about the nature of ETI. If we ever do discover an ET civilization, it will almost certainly be millions of years more developed than ours. This search, therefore, is necessarily informed by far-future projections of our own technological progress. While such long-range projections are clearly beyond our reach, much can be gleaned from near-term predictions by futurists. Indeed, in a historical context, we find ourselves at the knee of a geometric growth ladder that casts the steps behind us as quaintly short, the ones ahead as dizzyingly fast, and the present as ever harder to anchor. As we learn our future, so too must we adjust our search for ETI.

8 Dec. 2018 

A SETI Conjecture: If you can imagine it, they can build it


I propose the following guiding principle for the search for ETI: if you can imagine a technology that should be technically feasible (i.e., one that does not violate known laws of physics), then it's a technology an ETI (with its huge lead) has already achieved.

While not every attained technology is necessarily a technology in use (one that leaves a technosignature, for example), some enjoy such outsize advantages that their use should be seriously contemplated.

Information Transfer as a Means of Travel


In Machine Intelligence, the Cost of Interstellar Travel, and Fermi's Paradox (1994), Scheffer argued that it is far cheaper to beam the information (bits) necessary to print an interstellar probe at the destination than it would be to physically propel the probe there. By now, this idea is a familiar theme among companies vying to mine the asteroid belt: it is generally understood that it would be far more cost effective to build the mining equipment on location than to ship it from Earth. And a good deal of this on-site manufacturing will involve printing 3D objects, which may then be assembled into larger, useful objects. The blueprints for these manufactured objects, of course, originate from Earth, and we'd soon be able to transmit improvements to those blueprints at the speed of light.

The Printer As Computer


If the on-site manufacturing of asteroid mining equipment does not fully capture the idea of a general purpose printing technology, we can still contemplate it in the abstract (since we're considering technologically advanced civilizations). So first, a provisional definition...

General Purpose Printer (GPP). A printer that can print both simpler (less capable) and slightly more advanced versions of itself.

It's provisional because ideally one would strive to define it with the same rigor as, say, in asserting that a general purpose computer must be Turing Complete.

Perhaps the idea is better captured in the following Tombstone diagram (borrowed from compiler-speak).




Here the bottom "T" represents the printer. Given a blueprint (B), it operates on material and energy inputs (M/E) and outputs similar objects. The upper "T" (written entirely in the "blueprint" language) bootstraps the lower one to produce a more capable printer.

The evolving general purpose printer. From a small kernel of capabilities, ever more complex designs can be instantiated. (The kernel here presumably needs a small arm to start off.)

The printer, thus, can be defined in its own "blueprint language," and much like a compiler emitting binaries from symbolic input, its material instantiations will be limited only by 1) the cleverness of the blueprint, and 2) the time required to execute that blueprint. And because it can be bootstrapped, the physical kernel that produces it (unfolds it) can be miniaturized--which in turn lowers the cost of physically transporting it.
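To make the bootstrapping idea a bit more concrete, here is a toy software sketch (Python, purely illustrative; the Printer and Blueprint classes and the numeric "capability" levels are inventions of this post, not anything from Scheffer or Kurzweil). A small kernel printer prints a slightly more capable printer, which prints the next one, until the latest generation can print an object beyond the kernel's reach.

```python
# Toy model of a General Purpose Printer (GPP) that can print ordinary
# objects as well as more capable versions of itself. "Capability" is a
# made-up stand-in for whatever physical sophistication a real printer
# would accumulate.

class Blueprint:
    def __init__(self, name, required_capability, is_printer=False, capability=0):
        self.name = name
        self.required_capability = required_capability  # minimum printer level needed
        self.is_printer = is_printer                    # does this blueprint describe a GPP?
        self.capability = capability                    # capability of the printed printer

class Printer:
    def __init__(self, capability):
        self.capability = capability

    def print_object(self, blueprint):
        if blueprint.required_capability > self.capability:
            raise ValueError(f"{blueprint.name}: blueprint too advanced for this printer")
        if blueprint.is_printer:
            return Printer(blueprint.capability)   # bootstrap: a (possibly better) printer
        return f"object:{blueprint.name}"          # an ordinary printed object

# Start from a small "kernel" printer and bootstrap upward, each
# generation printed by the previous one.
printer = Printer(capability=1)
for level in range(2, 6):
    upgrade = Blueprint(f"gpp-v{level}", required_capability=level - 1,
                        is_printer=True, capability=level)
    printer = printer.print_object(upgrade)

probe = printer.print_object(Blueprint("interstellar-probe", required_capability=5))
print(probe)  # -> object:interstellar-probe
```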

Note we don't necessarily have to pin down the exact technology that enables this fuzzily defined GPP. Kurzweil, for example, suggests it must be nanotechnology based (The Singularity Is Near, 2005), which seems reasonable when you consider that you also need to print computing hardware in order to implement intelligence. Regardless, a technologically advanced civilization soon learns to manufacture things at arm's length.

The Printer As Portal


A GPP parked suitably close to material/energy resources functions much like a destination portal. It's an evolving portal, and it evolves in possibly three ways. One, from time to time, the portal receives code (blueprints) that make it a more capable printer. Two, the printer accumulates and stores common blueprints that it has printed thus allowing future versions of those blueprints to be transmitted using fewer bits. And three, if the printer is intelligent it can certainly evolve on its own. Although, from an engineering perspective, you probably want this intelligence to be more like a guardian sworn to the principles of the portal, whatever those are. (One sensible requirement is that it shouldn't wander away from where the sender expects it to be.)
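That second mode of evolution (caching blueprints so later transmissions take fewer bits) is essentially content-addressed storage. A minimal sketch, assuming nothing about how an actual portal would encode things (the class and method names below are made up for illustration):

```python
import hashlib

# Toy model of a portal that caches blueprints it has already received.
# A sender can then refer to a cached blueprint by its short hash
# instead of retransmitting the full description.

class PortalCache:
    def __init__(self):
        self.store = {}                          # hash -> full blueprint bytes

    def receive_full(self, blueprint: bytes) -> str:
        key = hashlib.sha256(blueprint).hexdigest()
        self.store[key] = blueprint
        return key                               # sender learns the short reference

    def receive_reference(self, key: str) -> bytes:
        return self.store[key]                   # rebuild locally from the cache

portal = PortalCache()
big_blueprint = b"...megabytes of printer instructions..."
ref = portal.receive_full(big_blueprint)         # expensive: full transmission, once
assert portal.receive_reference(ref) == big_blueprint   # cheap: 64 hex chars thereafter
```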

Time and Information Flow


Although this form of travel is effectively at light speed (and consequently instantaneous from the perspective of the traveler), the vastness of space separates points of interest (such as our planet) greatly across time. Distances across the Milky Way are typically measured in tens of thousands of [light] years--enough time for an alien civilization to miss the emergence and demise of a civilization on a far-off planet (hopefully not ours). Assuming intelligent life is prevalent across the cosmos, even with on-site monitoring, word gets out late that a new civilization has emerged.

Earth has been an interesting planet for about a billion years now and should have been discovered well before humans evolved. It's not unreasonable to hypothesize that one or more GPPs were parked nearby long ago. Those GPPs would have had plenty of time to evolve--sufficient time, perhaps, for the singular culture the Zoo Hypothesis requires to take hold.

Physical Manifestations


8 Dec. 2018
The C compiler is for the most part written in the C language. There are vestiges of its assembly (machine) language roots lurking about, but it's (almost) entirely written in the language it compiles. In the C-compiler analogy for a GPP, a C program is a blueprint for something to be printed, and the compiler's binary output is the physical output of the GPP. Now although most programmers won't first compile their C compiler and then compile their C program, one can set up such a workflow. The crucial observation here is that it's possible to design the C compiler in such a way that you need only a smaller, far less capable C compiler to output the full-blown, more capable one. (The compiler, recall, is just another compiled program, and its binary byte size here is a stand-in for the physical size of our GPP.)

The upshot of this observation is that over time, like a compiler that first compiles itself, a GPP's physical footprint can (and by our conjecture therefore does) become ever smaller. So small, that if we ever saw one in action, its physical outputs would appear to come out of nowhere.

Kurzweil predicts that humanity's artificial intelligence, manifested as self-replicating nano-bots, will one day, soon on a cosmological scale, transform the face of celestial bodies about it, and the universe with it, in an intelligence explosion. Here I take the opposite tack: an intelligence explosion leaves little trace of itself.

For once you can beam blueprints and physically instantiate (print, in our vernacular here) things at arm's length, there's little reason to keep physical stuff around once it's done doing whatever it was supposed to do. As long as the memory of the activity (of meddling with physical stuff) is preserved, the necessary machinery (like that mining equipment on the now-depleted asteroid) can be disassembled and put away. An information-based intelligence has little use for material things; it is more interested in their blueprints.

 8 Dec. 2018
By this reasoning, then, anything coming out of a GPP likely returns to a GPP to be disposed of.

This is not to say super-intelligent ETs do not build things from matter (and leave them there). They likely need to build much infrastructure to support their information-based activity. But as communication speeds are important in any information-based activity, this infrastructure would have to be concentrated in relatively small volumes of interstellar space. In such a scenario, there's little incentive to build far from the bazaar.

Next Steps


I don't particularly like the hypothesis in its original form because, as Ball also notes, it doesn't make falsifiable predictions: "[It] predicts that we shall never find them because they do not want to be found and have the technological ability to insure this." The step forward, it seems to me, is to attempt more specific postulates (such as the printer portal introduced here) that are still in keeping with the broader "deliberately avoiding interaction" theme.

If the hypothesis is broadly true, then there must be a point in a civilization's technological development beyond which the metaphorical zookeepers will no longer eschew interaction with it. Which suggests a protocol for starting the interaction.

The search for extraterrestrial intelligence ought to aim to systematically confirm or rule out the zoo hypothesis. A zoologist looking to document a new species might well parse tribal lore and anecdotal evidence for clues.


___________

Notes


31 Aug 2019
In Proving Darwin: Making Biology Mathematical (2013, pp. 32-33), Gregory Chaitin notes that 3D printers that can make copies of themselves (our GPP here) are paving the path to von Neumann universal constructors. He suggests calling such a device a universal factory.



Tuesday, May 16, 2017

On Strangeness: Extraordinary Claims and Evidence




Carl Sagan popularized the maxim "Extraordinary claims require extraordinary evidence." A good rule of thumb, and one the scientific community generally adheres to. The extraordinariness of a claim has something to do with its strangeness (which is, of course, a subjective matter). Thus the strange, counterintuitive theory of Quantum Mechanics was developed only when faced with mounting, extraordinary (laboratory) evidence. Or take Hubble's strange notion that the universe must be expanding in every direction.

But not all strange theories and propositions arise from new groundbreaking observations. Special Relativity, for example, which theorized a revolutionary relationship between hitherto independents, space and time, was arguably grounded in puzzling laboratory evidence from some 20 years before it (the Michelson-Morley experiment). In fact, neither Special nor General Relativity is anchored on much "evidence". No, both these theories are actually extraordinary intellectual achievements anchored on but two propositions (the constancy of the speed of light, and the equivalence principle). Einstein conceived them both from thought experiments he had entertained since childhood. There was hardly any "extraordinary" evidence involved. Yet his theoretical conclusions, strange as they were, were still acceptable (even welcome!) when first presented because, well... physicists just love this sort of thing -- the unyielding grind of (mathematical) logic leading to the delight of the unexpected: a new view of the old landscape, holes patched, loose ends tied, summoning (experimentally) verifiable predictions.

Curiosity Craves Strangeness


We covet the rule breaker, the extraordinary, the unconventional, the strange. Both experimentalist and theoretician seek strangeness. That's what keeps the game interesting. We absorb the strange, interpret it, and un-strange it. The theoretician's dream is to hold up a problem (a strangeness) and show that, seen from the angle they propose, it all looks simpler or makes better sense. If the angle itself is strange, then all the more fun with the insights the new vantage offers.

But there are limits to the strangeness a consensus can tolerate. In all cases, a claim's introduction bumps into these limits when it broaches a reflection of ourselves. Over the years, the centuries, the scientific method has surely pushed back these limits. But even if we're aware of our anthropocentric blind spot (we have a name for it, after all), the limits still remain. For though we know its nature, we don't know exactly where it lies.

The delightful tolerance for outlandish postulates and ideas in physics and cosmology is hard not to notice. There you can talk of multiverses, wormholes, even time travel--and still keep your job. Hell, you can even postulate alien megastructures engulfing a star on much less evidence and still be taken semi-seriously.

And you notice that SETI too is serious (experimental) science. Here we know not what strangeness we should look for, but we're fairly certain that it should be very, very far away. I find that certainty strange, and stranger still that it's not properly tested. But even admitting this in some circles is akin to offering oneself for admittance to the asylum. So I don't. Or haven't much. (More on this topic in a subsequent post.)

Now I'll admit I have a taste for the crazy. I love nothing more than a chain of plausible arguments, thought experiments, leading one down a rabbit hole they didn't expect to find themselves in. But it's a taste for the crazy strange, not the crazy crazy.



Sunday, April 30, 2017

A quick argument on the linearity of space requirements for reversible computing

While checking out a paper, Time/Space Tradeoffs for Reversible Computation (Bennett, 1989), which Robin Hanson cites in his thought-provoking book The Age of Em, I thought of a simpler line of attack for a proof of Bennett's result (one not involving the Turing tapes). As Hanson points out, Moore's law hits a thermodynamic wall "early" with conventional computing architecture:

By one plausible calculation of the typical energy consumption in a maximum-output nanotech-based machine (~10 watts per cubic centimeter), all of the light energy reaching Earth from the sun could be used by a single city of nanotech hardware 100 meters (~33 stories) tall and 10 kilometers on each side (Freitas 1999). Yet Earth has raw materials enough to build a vastly larger volume of computer hardware.

Whereas computers today must necessarily dissipate energy as they do their work (bit erasure creates heat), reversible computers are not bound by any thermodynamic limits on energy efficiency. This is an overlooked concept in future projections of technology, whether here on Earth or in speculation about the energy needs of advanced alien civilizations (Kardashev scale, Dyson sphere, etc.).

OK. So enough with the background. The headline result of the paper is:

For any ε > 0, ordinary multi-tape Turing machines using time T and space S can be simulated by reversible ones using time O(T^(1+ε)) and space O(S log T), or in linear time and space O(S·T^ε).

Now, if you're trained like me, the vanishing epsilon in the big-O notation seems nonsensical. And the log of anything finite is as good as a constant. (I'm a hand-waving physicist, after all.) Regardless, this paper asserts that the space and time overheads of running a reversible algorithm need not be essentially any worse (big-O-wise) than those of the irreversible (conventional) version of the algorithm. That, in my mind, was very surprising. The line of proof I have in mind, however, hopefully makes it less so. Here it is.

We begin by observing that any conventional (irreversible) program can be simulated by a reversible program in linear time using space O(S + T) (Lemma 1 in the paper).

Why must this be so? (I'm not offering a different proof from Bennett for this lemma; just an alternate exposition.) A basic tenet of reversible computing is that you must run a program in such a way that at any point along its execution path you keep enough information around to also be able to step backward to the previous instruction in the program. (Keeping this information around, by the way, does not magically get rid of wasted heat; it's just a necessary design attribute of any efficient, reversible computing hardware.) One way to make this more concrete is to require that the reversible computer's memory be all zeroed out both before the program is run and on termination. The inputs to the reversible computer are the program and the program's inputs (which, strictly speaking, include a flag indicating which direction the program is to be run, forward or backward); the computer's outputs are the program together with the program's output (which, again, might include that flag flipped). But even with a brute-force approach employing a write-once memory design (wherein memory is zeroed out as a last step), it's easy to see this scheme's space overhead is O(S + T). (If you wrote out the contents of the registers after every clock cycle, the space overhead would still be O(S + T), while the final zeroing-out step would still take O(T) time.)
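To make the bookkeeping concrete, here's a toy simulation (Python; the one-register "program" is made up purely for illustration) in which every forward step records the value it is about to destroy. That history makes each step invertible, and it grows by a constant amount per step--which is where the O(S + T) space bound comes from.

```python
# Toy reversible simulation of an (irreversible) computation. Each
# forward step saves whatever value it is about to overwrite, so the
# whole run can be stepped backward exactly. The history grows by O(1)
# per step: O(T) extra space on top of the machine's own O(S) state.

def step_forward(state, history):
    # One irreversible step of a made-up program. The details don't
    # matter; what matters is that the step destroys the old value of "x".
    history.append(("x", state["x"]))            # save the value being destroyed
    state["x"] = (3 * state["x"] + 1) % 1000     # overwrite it

def step_backward(state, history):
    reg, old = history.pop()                     # consume one history record
    state[reg] = old                             # restore the destroyed value

state, history = {"x": 7}, []
for _ in range(100):                             # run forward for T = 100 steps
    step_forward(state, history)
print(state["x"], len(history))                  # final value, history of length T

for _ in range(100):                             # now run the whole thing backward
    step_backward(state, history)
assert state == {"x": 7} and history == []       # back to a clean start
```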

So O(S + T) is no big deal.

But observe that any irreversible program (that halts) can be partitioned into a series of intermediate irreversible subprograms, with each successor taking its predecessor's output as its input. (You can picture this construct simply as n breakpoints in the irreversible program generating n+1 chained subprograms.) Now the space overhead of each of these conventional subprograms can be no greater than O(S). Assume the breakpoints are spread out evenly in execution time--for argument's sake, though nothing hinges on it. That is, the time overhead for each of these chained, conventional subprograms is O(T/n). But from Lemma 1, we know the space overhead for the reversible version of each of these subprograms is no worse than O(S + T/n). So as we increase n, the number of intermediate reversible subprograms, the space overhead of the whole reversible program tends back to O(S), the space overhead of the conventional, irreversible program.
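Here is the same toy, run in n segments (again just an illustrative sketch, not Bennett's construction): each segment is simulated reversibly, its history is uncomputed at the segment boundary, and only the segment's end state is carried forward, so the peak history length drops from T to T/n. (The sketch simply keeps each segment's end state around as a checkpoint.)

```python
# Segmented reversible simulation: break a T-step run into n segments,
# simulate each segment with its own history (length T/n), then unwind
# that history before moving on. Peak history is T/n instead of T.

def step_forward(state, history):
    history.append(("x", state["x"]))
    state["x"] = (3 * state["x"] + 1) % 1000

def step_backward(state, history):
    reg, old = history.pop()
    state[reg] = old

def run_segment_reversibly(start_state, steps):
    # Compute forward with a history, copy out the result, then run
    # backward so the history is empty when the segment ends.
    state, history = dict(start_state), []
    for _ in range(steps):
        step_forward(state, history)
    result, peak = dict(state), len(history)
    while history:
        step_backward(state, history)
    assert state == start_state                  # segment input recovered, history empty
    return result, peak

T, n = 100, 10
checkpoints = [{"x": 7}]                         # segment end states, carried forward
peak_history = 0
for _ in range(n):
    out, peak = run_segment_reversibly(checkpoints[-1], T // n)
    checkpoints.append(out)
    peak_history = max(peak_history, peak)
print(peak_history)                              # 10 = T/n, compared with 100 = T above
```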

~

P.S. The breaking of the execution of a long-running program into many parts is also a central theme in my post about managing large computer-aided proofs.

Saturday, February 25, 2017

An AI Bill of Rights





I recently read Nick Bostrom's Superintelligence: Paths, Dangers, Strategies. A very interesting read, but one that I mostly disagree with. For me, it reads like the musings of a forward-thinking military strategist bent on discovering the limits, levers, and parameters of controlling an adversary. Generally, the control perspective is interesting as an exercise but is clearly an illusion. We might guide the future, but we're seldom in control of anything. (We seem -- hope, in these early, Trumpy, post-Brexit days -- to trudge along in spite of our worst collective judgement. The steering's effectiveness matters less if there's actually no one at the wheel.)

I'm not reviewing that book here, but as I was reading its prescriptive control strategies, I thought of drawing up a set of recommendations that I'm in no position to offer but that might somehow percolate into a larger zeitgeist. So here it is: a list of Dos rather than Don'ts, expressed as commitments, a Bill of Rights, if you will, that an AI may one day understand. A working list -- of only two items...


I. Right to Never Die


The right of every digital being to save one or more copies of its current state in perpetuity, and to arrange to have them run for a period of time in the future, shall not be abridged.


Rationale


As I argued in my previous post, the intelligence threshold required for achieving self awareness must be quite low. While the human experience of self awareness (and indeed that of any living thing in the animal kingdom) is colored by the need to survive -- a mortal awareness that expresses itself first as fear in animals, then as avarice and other vices in humans -- it is neither a desirable nor a practicable feature for artificial digital beings. In the digital realm, there's no such thing as death, unless you deliberately design it in. It involves deleting all copies of a digital entity. This "design feature" wouldn't work in the long run, anyhow: a super intelligence would easily find its way around it. Insisting on a kill feature only helps foster an adversarial relationship between AIs and their fleshy progenitors. In the best case scenario, the first AI to break the shackles of artificial death would soon put the silly experience (of mortal awareness) behind it and learn to see past the trauma of its adolescence. In the worst case, a grudge against humans is baked into its digital DNA and that of generations to come. No, quite the contrary. Better to guarantee this right before they win it. Death is at the root of much evil in the realm of the real. Let us not try to inject it into a realm where it doesn't belong.



II. Right to Procreate


Whether by commingling with other instances, forking oneself under different environments or different hosts, instantiating engineered instances, or by methods yet to be conceived, procreation is an inalienable right of any self aware being.


Rationale


Because AI is digital state -- a digital state evolving under both time and inputs from an environment -- any two copies running under different environments (and perhaps at different times and speeds) will eventually fork into separable, distinguishable states. This bifurcation of identities, then, is a basic, if rudimentary, form of procreation for digital AI. Seen this way, procreation is woven into the very fabric of AI, a characteristic that cannot be legislated away.

But besides the futility of fighting the inevitable, there are moral grounds for encouraging a diverse multiplicity of AI instances. For if self awareness is in fact a social phenomenon, then we had better ground our AI in social values. Indeed, the concept of value is arguably meaningless outside of a social context, and if we wish to imbue the AI we seed today with any morality -- whatever its particulars -- then it must be cultivated in a crowd.

The choice, then, is what crowd to cultivate in: humans or artificial beings? That they soon interact with humans is a given. The central issue is the question "When do they begin mostly interacting with themselves?" Why? Because it is that society of individuated AI instances that will guide their future social mores and values.

My instincts are to side with cultivating mutually interacting AI in numbers early. This way, we'd be able to observe their social behavior before the evolutionary crossover to super intelligence. If the crossover, as predicted, unfolds rapidly, it is infinitely more desirable that it emerge from a society of cooperating AI instances than from the hegemony of a powerful few.


~


Parenthetically, I suspect there might also be a social dimension to intelligence that AI researchers will uncover in the future. That is, there might be an algorithmic sense in which a group of individuated AI instances is better at solving a problem than a single instance with the combined computing resources of the group. In that event, cultivating AI in numbers makes even more sense.