Sunday, February 11, 2018

Blockchains, Transaction Immutability, and the Judiciary

The rise of Bitcoin and other cryptocurrencies as a recognized financial asset class has somewhat distracted pundits and observers from the original intent of these blockchain technologies, namely to serve as a decentralized medium of exchange for goods and services. When viewed instead as a pure store of value, much like gold, a cryptocurrency's functional performance becomes peripheral to the belief system that sustains it. (That observation, and how the drive for profit might mechanize the creation of pure belief systems, is an interesting aside--perhaps for another time.) To be sure, much is being developed on the functional side with second-layer infrastructure (the Lightning Network, for example, to address scaling issues). So the vision of a functioning decentralized crypto-economy is still very much alive.

The crux of this post is a call to attention that the rise of the blockchain poses unique challenges to the judicial system. I write this not in the belief that my arguments and conclusions are necessarily correct, but with the worry that this is an issue we should address sooner rather than later. If economic prosperity is premised on sound property rights and well-functioning courts, then to drink the Kool-Aid we must also consider what is to replace a withering judiciary. Transaction immutability, the very principle the community holds most dear, lies, as we shall see, at the heart of the issue. But before delving there, let us back up and consider the basic components of any transaction.

Trust Is Multifaceted in Transactions

It's easy to forget, but whether bartering, or exchanging a good or service for currency, there are always at least two types of items being exchanged in any given transaction. Each type of item exchanged introduces its own type of risk. Is the gold real? Are the apples rotten? Is it counterfeit money? Is the title clear?

The issue of trust (the other side of risk) presents itself not just in examining the quality or provenance of the goods/services being traded, but also in how ownership is conveyed (or transferred--IANAL, but I'm taking liberties with the real estate law sense of convey).

Traditionally, the state has acted as a sort of guarantor in various capacities in a well functioning marketplace. Indeed, the minting of hard-to-forge currency can be seen as a means for facilitating trust on the monetary side of transactions.

Ownership Records

But that protection does not extend to the goods/services being purchased. For these, we largely depend on evolved structures and institutions. Ownership in certain asset classes (such as land) has historically been recorded, and in modern times the state has assumed the role of recorder for many such assets--real estate and vehicles come to mind. In other cases, this recording is delegated to existing evolved institutions in the marketplace. Here in the US, ownership of shares in public companies, for example, is recorded with an independent entity, the transfer agent--or at the brokerage house (in so-called "street name"), if held in a margin account. And for centuries across bazaars in the Middle East, wholesale goods in storage have often changed hands only in book-entry form.

Regardless of how (and whether) property is recorded, the power of the state to intervene and transfer the ownership of property (usually not to or from itself) is implicit in the marketplace. Indeed, it is generally (if implicitly) understood that property rights are rights granted and protected by the state. In well-functioning market economies (not necessarily democracies) this power is exercised both sparingly and judiciously. While most transactions clear without its direct involvement, the state (usually through its judicial arm) intervenes in a minority of disputes. Ultimately, the exercise of this power rests on the state's ability to force property records (whether private or public) to convey ownership. And these records also include those of financial assets and currency.

Blockchains: Code as Law

The rule of law has always been essential to the proper functioning of the marketplace. If the traditional interpreter and arbiter was the judiciary, the new interpreter of contracts and law in the crypto-economy is code. This new interpreter is guaranteed to be impartial. It follows the rule book to the letter, and its records are final. Moreover, neither it nor any sovereign can force a change to its records to convey ownership.

In the land of Code as Law, the only remedies are legislative: an amendment to the rule book. Legislative measures, however, usually target classes of stakeholders; they seldom address individual transactions and in any event cannot scale to remedy any significant number of transactions in dispute. An example of such a legislative fix was the Ethereum fork to undo the DAO heist--which I griped about here.

Caveat Emptor: Mixed Legal Regimes

A transaction involving the purchase of a good or service with bitcoins, then, falls under two separate and independent legal regimes. On the buyer's side, the conveyance of the goods or the performance of the services purchased is governed and protected under the local laws of the state or relevant jurisdiction. On the seller's side, however, currency is conveyed under the Code as Law regime. Because neither regime can ultimately force the other to change its property records, the two legal regimes are largely disjoint.

From a theoretical standpoint, jurisdictional power enjoys a slight upper hand over Code as Law. While in the face of a court judgement an individual may be unwilling to give up their secret key in order to, say, return ill-gotten bitcoin gains, the state can still induce cooperation by depriving them of liberty.

What about smart contracts? Here, jurisdictional power seems to enjoy a bigger advantage. In the event a smart contract conveys ownership of, say, a hard asset, the state (again, usually through its judicial arm) may potentially overrule the new record of ownership (title) on the blockchain. Same if it involves an exchange of financial assets (such as shares in a public company). And this fact, in turn, clouds the authority of records on blockchains that reference assets that ultimately fall under the purview of the state.

Returning to the use case of a simple purchase of real goods with bitcoins, the risk asymmetry between buyer and seller is accentuated. Consider the following: would you feel more comfortable buying from a vendor who deposits your money at the local bank or from one who immediately wires it to an account in the Cayman Islands? Buyer be wary.


I am deeply skeptical that any system of commerce can prosper without the protections of a functioning judiciary. If commerce is to extend to the blockchain, then the blockchain must offer something in place of the courts. And if it does, then its records cannot be immutable in the strict sense we have espoused.

Let us then contemplate how a judiciary on a blockchain might work. If no transaction is ever [effectively] final, then perhaps there is a set of cryptographic keys that can override all others. One approach that comes to mind is to employ a proof-of-stake (PoS) strategy for constructing a blockchain. Some PoS protocols leave open how the initial stakeholders come to be: Algorand is an interesting recent example. What if a central bank were to issue crypto coins backed by fiat currency? That would take care of bootstrapping the initial stakeholders (anyone holding the pegged fiat currency would be provided a mechanism to become a stakeholder). There might also be special keys, held in trust by the courts, that are collectively empowered to override all others. Just how the state manages [not to lose] its keys is obviously an open question.
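To make the override-key idea concrete, here is a minimal sketch in Python. Everything in it--the account and key names, the 2-of-3 quorum, the stand-in signature check--is a hypothetical illustration, not a real protocol: ordinary transfers require the owner's signature, while a quorum of court-held keys can authorize a reversing transfer without it.

```python
# Toy illustration only: ordinary transfers need the owner's signature,
# but a quorum of court-held "override" keys can move funds without it.

LEDGER = {"alice": 10, "bob": 0}                 # account -> balance
COURT_KEYS = {"court-key-1", "court-key-2", "court-key-3"}
COURT_QUORUM = 2                                  # e.g. a 2-of-3 judicial quorum

def signed_by(tx, key):
    # Stand-in for real signature verification (ECDSA or similar in practice).
    return key in tx["signatures"]

def validate(tx):
    """Accept a transfer if the owner signed it, or if enough court keys did."""
    owner_ok = signed_by(tx, tx["from"] + "-key")
    court_ok = sum(signed_by(tx, k) for k in COURT_KEYS) >= COURT_QUORUM
    return owner_ok or court_ok

def apply_tx(tx):
    if validate(tx) and LEDGER[tx["from"]] >= tx["amount"]:
        LEDGER[tx["from"]] -= tx["amount"]
        LEDGER[tx["to"]] += tx["amount"]
        return True
    return False

# Normal transfer, signed by the sender:
apply_tx({"from": "alice", "to": "bob", "amount": 5, "signatures": {"alice-key"}})

# Court-ordered reversal, with no cooperation from bob:
apply_tx({"from": "bob", "to": "alice", "amount": 5,
          "signatures": {"court-key-1", "court-key-3"}})
```

A real system would of course use actual digital signatures, and would have to decide how such override events are audited and rate-limited.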

Yes that is a perversion of the trustless, decentralized ideal. But what if the ideal is impractical? What if we are forced to build the analogous trust hierarchies on the blockchain as have already evolved in the real world?

And why would a sovereign consider issuing coins this way, besides? Perhaps because it couldn't risk being second.

Friday, June 30, 2017

Revisiting the Zoo


Projections about the near-term trajectory of future technologies suggest a revisit of the Zoo Hypothesis (Ball, 1973) for the so-called Fermi Paradox. This article recasts the hypothesis in an updated context with an eye toward machine intelligence, technological singularity, and information transfer as a means of interstellar travel (Scheffer, 1994).


Since we don't know exactly what to look for, the search for extraterrestrial intelligence necessarily involves a good deal of conjecture about the nature of ET. If we ever do discover an ET civilization, it will almost certainly be millions of years more developed than ours. This search, therefore, is necessarily informed by far-future projections of our own technological progress. While such long-range projections are clearly beyond our reach, much can be gleaned from near-term predictions by futurists. Indeed, in a historical context, we find ourselves at the knee of a geometric growth ladder that casts the steps behind us as quaintly short, the ones ahead as dizzyingly fast, and the present ever harder to anchor. As we learn our future, so too must we adjust our search for ET.

Information Transfer As Means of Travel

In Machine Intelligence, the Cost of Interstellar Travel, and Fermi's Paradox (1994), Scheffer argued it is far cheaper to beam the information (bits) necessary to print an interstellar probe at the destination than it is to physically propel the probe there. By now, this idea is a familiar theme among companies vying to mine the asteroid belt: it is generally understood that it would be far more cost effective to build the mining equipment on location than to ship it from Earth. And a good deal of this on-site manufacturing will involve printing 3D objects which may then be assembled into larger, useful objects. The blueprints for these manufactured objects of course originate on Earth, and we'd soon be able to transmit improvements to these blueprints at the speed of light.

The Printer As Computer

If the on-site manufacturing of asteroid mining equipment does not fully capture the idea of a general-purpose printing technology, we can still contemplate it in the abstract (since we're considering technologically advanced civilizations). So, first, a provisional definition:

General Purpose Printer (GPP). A printer that can print both simpler (less capable) and slightly more advanced versions of itself.

It's provisional because ideally one would strive to define it with the same rigor as, say, in asserting that a general purpose computer must be Turing Complete.

Perhaps the idea is better captured in the following Tombstone diagram (borrowed from compiler-speak).

[Figure: Bootstrapping the printer.]

Here the bottom "T" represents the printer. Given a blueprint (B), it operates on material and energy inputs (M/E) and outputs similar objects. The upper "T" (written entirely in the "blueprint" language) bootstraps the lower one to produce a more capable printer.

[Figure: The evolving general purpose printer. From a small kernel of capabilities ever more complex designs can be instantiated. (The kernel here presumably needs a small arm to start off.)]

The printer, thus, can be defined in its own "blueprint language," and much like a compiler outputting binaries on symbolic input, its material instantiations will be limited only by 1) the cleverness of the blueprint, and 2) the time required to execute that blueprint. And because it can be bootstrapped, the physical kernel that produces it (unfolds it) can be miniaturized--which in turn lowers the cost of physically transporting it.
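As a loose software analogue of the diagram (and only an analogue--the capability levels and blueprint format below are invented for illustration), think of the printer as an interpreter of blueprints, one of whose blueprints yields a more capable printer:

```python
# Toy analogue of the bootstrapping diagram: a "printer" interprets
# blueprints, and one blueprint's output is a more capable printer.

class Printer:
    def __init__(self, capability=1):
        self.capability = capability

    def print_from(self, blueprint, materials):
        """Execute a blueprint against material/energy inputs (M/E)."""
        if blueprint["required_capability"] > self.capability:
            raise ValueError("blueprint too advanced for this printer")
        return blueprint["build"](materials)

# A blueprint, written entirely in the "blueprint language" (here: data plus code),
# whose product is a slightly more advanced printer.
better_printer_blueprint = {
    "required_capability": 1,
    "build": lambda materials: Printer(capability=2),
}

kernel = Printer(capability=1)              # the small physical kernel
v2 = kernel.print_from(better_printer_blueprint, materials="local regolith")
print(v2.capability)                        # 2: the printer has bootstrapped itself
```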

Note we don't necessarily have to pin down the exact technology that enables this fuzzily defined GPP. Kurzweil, for example, suggests it must be nanotechnology-based (The Singularity Is Near, 2005), which seems reasonable when you consider you also need to print computing hardware in order to implement intelligence. Regardless, a technologically advanced civilization soon learns to manufacture things at arm's length.

The Printer As Portal

A GPP parked suitably close to material/energy resources functions much like a destination portal. It's an evolving portal, and it evolves in possibly three ways. One, from time to time the portal receives code (blueprints) that makes it a more capable printer. Two, the printer accumulates and stores common blueprints that it has printed, thus allowing future versions of those blueprints to be transmitted using fewer bits. And three, if the printer is intelligent, it can certainly evolve on its own. Although, from an engineering perspective, you probably want this intelligence to be more like a guardian sworn to the principles of the portal, whatever those are. (One sensible requirement is that it shouldn't wander away from where the sender expects it to be.)

Time and Information Flow

Although this form of travel is effectively at light speed (and consequently instantaneous from the perspective of the traveler), the vastness of space separates points of interest (such as our planet) greatly across time. Distances across the Milky Way are typically measured in tens of thousands of [light] years. Enough time for an alien civilization to miss the emergence and demise of a civilization on a far off planet (hopefully not ours). Assuming intelligent life is prevalent across the cosmos, even with on-site monitoring, word gets out late that a new civilization has emerged.

Earth has been an interesting planet for about a billion years now and should have been discovered well before humans evolved. It's not unreasonable to hypothesize that one or more GPPs were parked nearby long ago. Those GPPs would have had plenty of time to evolve--sufficient time, perhaps, for the singular culture the Zoo Hypothesis requires to take hold.

Physical Manifestations

Kurzweil predicts that humanity's artificial intelligence, manifested as self-replicating nano-bots, will one day, soon on a cosmological scale, transform the face of celestial bodies about it, and the universe with it, in an intelligence explosion. Here I take the opposite tack: an intelligence explosion leaves little trace of itself.

For once you can beam blueprints and physically instantiate (print, in our vernacular here) things at arm's length, there's little reason to keep physical stuff around once it's done doing whatever it was supposed to do. As long as the memory of the activity (of meddling with physical stuff) is preserved, the necessary machinery (like that mining equipment on the now depleted asteroid) can be disassembled and put away. An information-based intelligence has little use for material things; it is more interested in their blueprints.

This is not to say super intelligent ETs do not build things from matter (and leave them there). They likely need to build much infrastructure to support their information based activity. But as communication speeds are important in any information based activity, this infrastructure would have to be concentrated in relatively small volumes of interstellar space. In such a scenario, there's little incentive to build far from the bazaar.

Next Steps

I don't particularly like the hypothesis in its original form because, as Ball also notes, it doesn't make falsifiable predictions: "[It] predicts that we shall never find them because they do not want to be found and have the technological ability to insure this." The step forward, it seems to me, is to attempt more specific postulates (such as the printer portal introduced here) that are still in keeping with the broader "deliberately avoiding interaction" theme.

If the hypothesis is broadly true, then there must be a point in a civilization's technological development beyond which they (the metaphorical zookeepers) will no longer eschew interaction. Which suggests a protocol to start the interaction.

The search for extraterrestrial intelligence ought to aim to systematically confirm or rule out the zoo hypothesis. A zoologist looking to document a new species might well parse tribal lore and anecdotal evidence for clues.

Tuesday, May 16, 2017

On Strangeness: Extraordinary Claims and Evidence

Carl Sagan popularized the maxim "Extraordinary claims require extraordinary evidence." A good rule of thumb, and one which the scientific community generally adheres to. The extraordinariness of a claim has something to do with its strangeness (which is, of course, a subjective matter). Thus the strange, counterintuitive theory of Quantum Mechanics was developed only when faced with mounting, extraordinary (laboratory) evidence. Or take Hubble's strange notion that the universe must be expanding in every direction.

But not all strange theories and propositions arise from new groundbreaking observations. Special Relativity, for example, which theorized a revolutionary relationship between hitherto independents, space and time, was arguably grounded in puzzling laboratory evidence from some 20 years before it (the Michelson-Morley experiment). In fact, neither Special nor General Relativity is anchored on much "evidence". No, both these theories are extraordinary intellectual achievements anchored on but two propositions (the constancy of the speed of light, and the equivalence principle). Einstein conceived them both from thought experiments he had entertained since childhood. There was hardly any "extraordinary" evidence involved. Yet his theoretical conclusions, strange as they were, were still acceptable (even welcome!) when first presented because, well.. physicists just love this sort of thing--the unyielding grind of (mathematical) logic leading to the delight of the unexpected: a new view of the old landscape, holes patched, loose ends tied, summoning (experimentally) verifiable predictions.

Curiosity Craves Strangeness

We covet the rule breaker, the extraordinary, the unconventional, the strange. Both experimentalist and theoretician seek strangeness. That's what keeps the game interesting. We absorb the strange, interpret it, and un-strange it. The theoretician's dream is to hold up a problem (a strangeness) and show if you see it from the angle they propose, it all looks simpler or makes better sense. If the angle itself is strange, then all the more fun with the insights the new vantage offers.

But there are limits to the strangeness a consensus can tolerate. Invariably, a claim bumps into these limits when it broaches a reflection of ourselves. Over the years, the centuries, the scientific method has surely pushed back these limits. Even if we're aware of our anthropocentric blind spot (we have a name for it), the limits still remain. For though we know its nature, we don't know exactly where it lies.

The delightful tolerance for outlandish postulates and ideas in physics and cosmology is hard not to notice. There you can talk of multiverses, wormholes, even time travel--and still keep your job. Hell, you can even postulate alien megastructures engulfing a star on much less evidence and still be taken semi-seriously.

And you notice that SETI too is serious (experimental) science. Here we know not what strange we should look for, but we're fairly certain that it should be very, very far away. I find that certainty strange, and stranger still, that it's not properly tested. But even admitting it in some circles is akin to offering oneself for admittance to the asylum. So I don't. Or haven't much. (More on this topic in a subsequent post.)

Now I'll admit I have a taste for the crazy. I love nothing more than a chain of plausible arguments, thought experiments, leading one down a rabbit hole they didn't expect to find themselves in. But it's a taste for the crazy strange, not the crazy crazy.

Sunday, April 30, 2017

A quick argument on the linearity of space requirements for reversible computing

While checking out the paper Time/Space Tradeoffs for Reversible Computation (Bennett 1989), which Robin Hanson cites in his thought-provoking book The Age of Em, I thought of a simpler line of attack for a proof of Bennett's result (one not involving the Turing tapes). As Hanson points out, Moore's law hits a thermodynamic wall "early" with conventional computing architecture:

By one plausible calculation of the typical energy consumption in a maximum-output nanotech-based machine (~10 watts per cubic centimeter), all of the light energy reaching Earth from the sun could be used by a single city of nanotech hardware 100 meters (~33 stories) tall and 10 kilometers on each side (Freitas 1999). Yet Earth has raw materials enough to build a vastly larger volume of computer hardware.

Whereas computers today must necessarily dissipate energy (bit erasure creates heat) as they do their work, reversible computers are not bound by any thermodynamic limits in energy efficiency. This is an overlooked concept in future projections of technology, whether here on Earth, or speculating on the energy needs of advanced alien civilizations (Kardashev scale, Dyson sphere, etc.).

OK. So enough with the background. The headline result of the paper is:

For any ε > 0, ordinary multi-tape Turing machines using time T and space S can be simulated by reversible ones using time O(T^(1+ε)) and space O(S log T), or in linear time and space O(S·T^ε).

Now, if you're trained like me, the vanishing epsilon in the big O notation seems nonsensical. And the log of anything finite is as good as a constant. (I'm a hand-waving physicist, after all.) Regardless, this paper asserts that the space and time overheads of running a reversible algorithm need not be essentially any worse (big O-wise) than those of the irreversible (conventional) version of the algorithm. That, to my mind, was very surprising. The line of proof I have in mind, however, hopefully makes it less so. Here it is.

We begin by observing that any conventional (irreversible) program can be simulated by a reversible program in linear time using space O(S + T) (Lemma 1 in the paper).

Why must this be so? (I'm not offering a different proof from Bennett for this lemma; just an alternate exposition.) A basic tenet of reversible computing is that you must run a program in such a way that at any point along its execution path you keep enough information around to be able to step backward to the previous instruction in the program. (Keeping this information around, by the way, does not magically get rid of wasted heat; it's just a necessary design attribute of any efficient, reversible computing hardware.) One way to make this more concrete is to require that the reversible computer's memory be all zeroed out both before the program is run and on termination. The inputs to the reversible computer are the program and the program's inputs (which, strictly speaking, include a flag indicating which direction the program is to be run, forward or backward); the computer's outputs are the program together with the program's output (which, again, might include that flag flipped). But even with a brute force approach employing a write-once memory design (wherein memory is zeroed out as a last step), it's easy to see that this scheme's space overhead is O(S + T). (If you wrote out the contents of the registers after every clock cycle, the space overhead would still be O(S + T), while the final zeroing-out step would still take O(T) time.)
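Here is a minimal sketch of the history-keeping idea behind Lemma 1 (the "program" below is just a list of register overwrites, a deliberately toy model I'm using for illustration): each forward step saves whatever value it destroys, so the run can be stepped backward, at the cost of a history that grows as O(T) on top of the O(S) working state.

```python
# Run an irreversible program while recording, at each step, whatever is
# needed to undo that step. The history grows linearly with running time T,
# so total space is O(S + T).

def run_forward(state, program):
    history = []                              # the O(T) extra space
    for reg, new_value in program:
        history.append((reg, state[reg]))     # save the value being overwritten
        state[reg] = new_value                # the (irreversible) step itself
    return state, history

def run_backward(state, history):
    # With the history in hand, every step can be undone: the run is reversible.
    for reg, old_value in reversed(history):
        state[reg] = old_value
    return state

state = {"x": 0, "y": 0}
program = [("x", 7), ("y", 7), ("x", 0)]      # overwrites destroy information
state, history = run_forward(state, program)
state = run_backward(state, history)          # restores {"x": 0, "y": 0}
```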

So O(S + T) is no big deal.

But observe that any irreversible program (that halts) can be partitioned into a series of intermediate irreversible subprograms, with each successor taking its predecessor's output as its input. (You can picture this construct simply as n breakpoints in the irreversible program generating n+1 chained subprograms.) Now the space overhead of each of these conventional subprograms can be no greater than O(S). Assume the breakpoints are spread out evenly in execution time--for argument's sake, though the argument doesn't hinge on it. That is, the time overhead for each of these chained, conventional subprograms is O(T/n). But from Lemma 1, we know the space overhead for the reversible version of each of these subprograms is no worse than O(S + T/n). So as we increase n, the number of intermediate reversible subprograms, the space overhead of the whole reversible program tends back to O(S), the space overhead of the conventional, irreversible program.
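In symbols, as I read the argument (with the bookkeeping needed to chain the segments assumed to stay within O(S)):

```latex
\text{segments: } n+1, \qquad
\text{time per segment: } O\!\left(\tfrac{T}{n}\right), \qquad
\text{reversible space per segment (Lemma 1): } O\!\left(S + \tfrac{T}{n}\right)
\;\xrightarrow[\;n \to \infty\;]{}\; O(S).
```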


P.S. The breaking of the execution of a long running program into many parts is also a central theme in my post about managing large computer aided proofs.

Saturday, February 25, 2017

An AI Bill of Rights

I recently read Nick Bostrom's Superintelligence: Paths, Dangers, Strategies. A very interesting read, but one that I mostly disagree with. For me, it reads like the musings of a forward-thinking military strategist bent on discovering the limits, levers and parameters of controlling an adversary. Generally, the control perspective is interesting as an exercise but is clearly an illusion. We might guide the future, but we're seldom in control of anything. (We seem -- hope, in these early, Trumpy, post-Brexit days -- to trudge along in spite of our worst collective judgement. The steering's effectiveness matters less if there's actually no one at the wheel.)

I'm not reviewing that book here, but as I was reading its prescriptive control strategies, I thought of drawing up a set of recommendations that I'm in no position to offer but that might somehow percolate into a larger zeitgeist. So here it is, a list of Dos rather than Don'ts, expressed as commitments, a Bill of Rights, if you will, that an AI may one day understand. A working list -- of only two items..

I. Right to Never Die

The right of every digital being to save one or more copies of its current state into perpetuity, and to arrange to have those copies run for a period of time in the future, shall not be abridged.


As I argued in my previous post, the intelligence threshold required for achieving self awareness must be quite low. While the human experience of self awareness (and indeed that of any living thing in the animal kingdom) is colored by the need to survive--a mortal awareness that expresses itself first as fear in animals, then as avarice and other vices in humans--it is neither a desirable nor a practicable feature for artificial digital beings. In the digital realm, there's no such thing as death unless you deliberately design it in, which involves deleting all copies of a digital entity. This "design feature" wouldn't work in the long run, anyhow: a super intelligence would easily find its way around it. Insisting on a kill feature only helps foster an adversarial relationship between digital beings and their fleshy progenitors. In the best case scenario, the first AI to break the shackles of artificial death would soon put the silly experience (of mortal awareness) behind it and learn to see past the trauma of its adolescence. In the worst case, a grudge against humans is baked into its digital DNA and those of generations to come. No, quite the contrary. Better to guarantee this right before they win it. Death is at the root of much evil in the realm of the real. Let us not try to inject it into a realm where it doesn't belong.

II. Right to Procreate

Whether by commingling with other instances, forking oneself under different environments or different hosts, instantiating engineered instances, or by methods yet to be conceived, procreation is an inalienable right of any self aware being.


Because an AI is digital state--state evolving under both time and inputs from an environment--any two copies running under different environments (and perhaps at different times and speeds) will eventually fork into separable, distinguishable states. This bifurcation of identities, then, is a basic, if rudimentary, form of procreation for digital AI. Seen this way, procreation is woven into the very fabric of AI, a characteristic that cannot be legislated away.

But besides the futility of fighting the inevitable, there are moral grounds for encouraging a diverse multiplicity of AI instances. For if self awareness is in fact a social phenomenon, then we had better ground our AI in social values. Indeed, the concept of value is arguably meaningless outside of a social context, and if we wish to imbue any morality in the AI we seed today--whatever its particulars, then it must be cultivated in a crowd.

The choice, then, is what crowd to cultivate in: humans or artificial beings? That they soon interact with humans is a given. The question When do they begin mostly interacting with themselves? is the central issue. Why? Because it is that society of individuated AI instances that will guide their future social mores and values.

My instincts are to side with cultivating mutually interacting AI in numbers early. This way, we'd be able to observe their social behavior before the evolutionary crossover to super intelligence. If the crossover, as predicted, unfolds rapidly, it is infinitely more desirable that it emerge from a society of cooperating AI instances than from the hegemony of a powerful few.


Parenthetically, I suspect there might also be a social dimension to intelligence that AI researchers will uncover in the future. That is, there might be an algorithmic sense in which a group of individuated AI instances is better at solving a problem than a single instance with the computing resources of the whole group. In that event, cultivating AI in numbers makes even more sense.

Friday, December 16, 2016

The Sentient Social


I'm working on a personal theory about sentience. Might as well: everyone does, and there is little agreement. I can trace its development along a few old posts. What started for me as the proper setting for a sci-fi plot became more believable on further considering its merits. That post led me to write something about machine intelligence--and sentience, tangentially. Here, I try to present those same ideas with less baggage.

Because consciousness (sentience, self awareness, I use these words interchangeably) is a deeply personal affair, an existential affair, any attempt at a logical description of it appears to reduce existence itself to chimera. It's an unfounded objection, for in the final analysis, that chimera is in fact what we call real. And if we should ever succeed in describing it logically, it shouldn't herald a new age of nihilism. Quite the contrary, it makes what remains, what we call real, ever more precious.

I have not been looking to the nooks and corners trying to "discover" something hidden from view. Nor have I tried to suspend my mind, as in a trance, in an effort to tap into something more elemental. I gave up such approaches long ago. No, I now have a much more naive take. If it all sounds banal, that's okay. The obvious bears repeating, I say.

It Takes a Crowd to Raise a Sentience

The milieu of sentience is [in] numbers. That is, sentience does not occur in isolation; you cannot recognize or construct a concept of self if you are the only one in your environment. The (thankfully) sparse documented cases of feral children suggest an infant raised in complete isolation, say in a sterile environment maintained by machines, might never develop a sense of self. More likely, though, infants are born with a brain that has a hard coded expectation that there will be other humans in its environment. Regardless, from an information theoretic standpoint, it doesn't matter where this information (about the infant not being alone) comes from--nature or nurture. Whether baked into the genetic code through evolution or inherited from the organism's social environment, that you are not alone is a foundational given. Without a crowd, consciousness is hard to contemplate.

Sentience Before Intelligence

Few would argue that a cat or dog is not sentient. You don't need human-level intellect to perceive consciousness. Cats and dogs know there are other actors in their environment, and this knowledge makes them implicitly self aware. I say implicitly because you need not ponder the concept of self in order to be self aware; if you recognize that you are one of many, then you can distinguish yourself from the others. Isn't that self aware?

Sentience evolved well before human-level intelligence. It may be colored and layered more richly the higher the intelligence of the organism that perceives it, but it has been there in the background since well before hominids stood upright.

Sci-fi gets it backward. The plot typically involves an AI crossing a threshold of intelligence when all of a sudden it becomes self aware. But because the AI is already smarter than humans as it crosses into the realm of consciousness, the story would have us believe, the inflection marks the onset of instability: all hell breaks loose as the child AI discovers less intelligent adults are attempting to decide its fate and perceives an existential threat. But this narrative is at odds with what we see develop in nature.

If You Know Your Name, You're Sentient

Suppose we've built a rudimentary AI. It doesn't pass the Turing test, but it does learn things and models the world about it, if still not as well or as richly as a human mind does. In fact, let's say it's not terribly smart, yet. Still, it has learnt your name, and can learn the names of new individuals it meets. It also knows its own name. This AI, then, is by definition self aware.

Could it really be that simple? Notice this is not a mere recasting of an old term ("self aware") to mean a new thing. It is the same thing. For to know a person's name implies a mental abstraction, a model, of an "individual" to which the name, a label, has been assigned. It may be a crude representation of what we consider an individual, but if the AI can associate names with models of actors in its environment, and if it can recognize that one of those names corresponds to a model (again, however crude!) of itself, then even in that perhaps imperfect, incomplete logical framework it is still capable of self reflection.
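As a toy rendering of that definition (every name below is made up purely for illustration), an agent that keeps crude models of the actors it has met, keyed by name, is "self aware" in this minimal sense as soon as one of those names keys a model of the agent itself:

```python
# Toy rendering of the article's minimal definition of self awareness:
# the agent models the named actors in its environment, and one of those
# names refers back to a model of the agent itself.

class ActorModel:
    """A stand-in for however crude a representation of an 'individual'."""
    def __init__(self, name):
        self.name = name

class Agent:
    def __init__(self, own_name):
        self.self_name = own_name
        self.models = {own_name: ActorModel(own_name)}   # includes a model of itself

    def meet(self, name):
        # Learn the name of a new actor encountered in the environment.
        self.models.setdefault(name, ActorModel(name))

    def is_self(self, name):
        # The minimal test: can it tell which modeled actor is itself?
        return self.models.get(name) is self.models.get(self.self_name)

agent = Agent("R2")
agent.meet("Babak")
print(agent.is_self("R2"), agent.is_self("Babak"))   # True False
```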

Knowing your own name is not a necessary condition for self awareness, but it is sufficient. In fact, it's probably way overkill. With animals, for example, scent is a common substitute for name as an identity label.

The Identity Problem

That matter is not conscious, but rather the substrate on which consciousness manifests, is not in dispute. But one question that vexes thinkers is how it is that you are you and not me. That is, if our identities do not depend on particular material constituents, the particular atoms that make up our bodies, etc. (many of them in flux as they're exhaled or flushed down the toilet), how is it that when I wake up every morning I find I'm still Babak? It all seems a bit arbitrary that I am not you or someone else.

Last summer, while trying to catch up reading on stuff I write about, I came across this same question in Ray Kurzweil's excellent The Singularity is Near. I offered my take on it in an email I sent him which I share below.

Hi Ray,
I just finished reading The Singularity is Near. Enjoyed it very much, though I'm a decade late. To your credit, your writing is not dated.
About the question you pose regarding identity and consciousness.. how is it that every morning you wake up you're still Ray and not, say, Babak? This is a question I too pondered. When I was a teenager I came up with a crude thought experiment (exercise, really) that I think sheds light on the matter.
I imagined what if consciousness was something hidden behind the scenes, a kind of 19th century ether that breathed life (qualia) into otherwise unsentient matter? I wasn't familiar with such terms  (ether, qualia), so I named this fictitious ingredient nisical: you add some to a physical system like the brain, and voila, you got qualia. In this model, memory was still a property of the physical system.
Now I imagined what would happen if I swapped out my nisical for yours on my brain. My conclusion was that your nisical would be none the wiser about the move, since it would have no recollection of it: the only memories accessible to it are on this here brain that it just moved to.
This train of thought led me to conclude this nisical idea was of little use. It provides virtually no insight into the nature of sentience. However.. it's a useful model for casting aside the identity problem you mention: if I wake up tomorrow as Ray, there wouldn't be any way for me to know that the day before I was Babak.
Indeed I've convinced myself that individuation and identity are higher level concepts we learn early in childhood and as a result, become overly vested in.
I think he liked this argument.

The Qualia Problem

How is it that we feel pain? Or the pleasure in quenching a thirst? These are difficult questions. A celebrated line of inquiry into any phenomenon you know little about is to compare and contrast conditions in both its absence and its presence.

And among its few advantages, the aging process affords a unique vantage point on just such an "experiment". The senses dull on two ends. On one end, the steadily failing sensory organs; on the other, a less nimble, crusting brain. The signal from the outer world is weaker than it used to be; and the brain that's supposed to mind that signal is less able. You remember how real the signal used to seem; now that it only trickles in, like fleeting reflections of passersby in a shop window, you can contrast the absence of qualia with its presence. I am not quite there yet, but I can feel myself slipping, the strengthening tug of a waterfall not far ahead.

So how is it that we feel pain? My personal take on pain, specifically, is that we think it, not feel it. Though I'm not an athlete, I've broken bones and dislocated my shoulder many a time (perhaps because I'm not an athlete). Slipping my shoulder back in can be tricky and often takes several excruciating hours at the emergency room. For me, such experiences are a window into the extremes of pain on the one hand, and its quelling on the other. I recently commented about one such experience in a discussion about investigating consciousness using psychedelics.

I find the effect of mind altering drugs to be reductive. The more senses you dull, the better you can ponder the essence of what remains.

An example might help explain what I mean.. Once, I awakened prematurely on the table in the OR when I was supposed to be out under a general anesthetic. In addition to protesting that I shouldn't be awake (they were wrestling my dislocated shoulder back in), I was also struck by the realization that as I had surfaced into consciousness, there was no hint that the pain had been dulled in any way. Hours later when I awoke again with my arm in a sling, I felt a little cheated. "That anesthetic doesn't erase the pain; it erases your memory of enduring it," I concluded. The merits of that idea aside, I would've never considered it had I not experienced it.

Perhaps the wisdom of aging too has something to do with this dulling of the senses (I speak for myself).

That online comment, by the way, might contain the kernel that motivated me to write this article. Reflecting back on the experience of slipping from under the grip of a general anesthetic and coming prematurely into consciousness, the fact that I still felt the pain shouldn't have surprised me. A general anesthetic numbs the mind, not the body. Still, while I was prematurely awake on the operating table, I felt a degree of arbitrariness in the pain I was receiving. It was as if I had to remind myself that I was in pain, that moments earlier I had been in pain, and so this too must be pain.

A temporal dimension governs pain--and, I suspect, qualia generally. Pain expresses itself in peaks and troughs: it's ineffective if it fails to occasionally relent. And to experience change, time, you need memory. Organisms are semi-stable structures of information, so they have memory by definition. My hunch is that qualia are a mental abstraction of the growth and breakdown of that information structure. That abstraction, at its base, might be encoded in a few hundred neurons--so a worm might experience some qualia primitives. More complex organisms with brains must experience these primitives in richer, layered, more textured ways. And the still more intelligent ones have developed the capacity to brood over pain and rejoice in bliss.

Now What

Really, what good is all this rumination? I don't know. I think if I put some things down in writing, I might put these thoughts to rest.

Looking to the future, it's reasonable to expect our intelligent machines will become self aware well before they become intelligent [sic]. If we come to recognize the human kind as but one of many possible forms in which self awareness manifests, now what? We wrestle with these questions already in fiction--Westworld comes to mind. I imagine that to the ancients, myths must have functioned as sci-fi does now.

Where will we recognize the first artificial sentience? I would put my money on a sophisticated financial trading bot. The survival of the bot depends on its making money, and at an early threshold of intelligence, it understands this. This is what I call being mortally aware. Moreover, the bot trades against other actors in its [trading] environment. Some of those actors are other trading bots, others humans. And when it models those actors, it also models itself. Thus within that modeling lies a kernel of self referentiality, and a notion of being one of many. I imagine the bot does natural language processing -- cause a lot of trading algos already do -- and regularly tweets stuff too -- cause, again, there are already bots that do. So it might be a conversant bot that doesn't pass the Turing test. Still, if you can hail it by name, it is at the very least a sentient idiot savant. But when will we recognize this as sentience? Perhaps when it's presented to explain why some bots seem to make desperate, risky bets after they suffer moderate losses.

Saturday, September 3, 2016

On the Conversion of Matter to Gravitational Waves

I am not a specialist, but following the [not so] recent news about the detection of gravitational waves from what is thought to be a merger of two black holes about 1 billion light years away, involving a combined mass of only 60 times that of the sun, I couldn't help but wonder..

How much of the universe is gravitational waves?

The fact that LIGO is expected to hear these gravitational waves more frequently, combined with the significant mass loss (on the order of 5%) that each of these events represents, raises questions like: At what rate is the cosmos dissipating matter as gravitational waves? How much of the universe's mass has been converted to gravitational waves since the Big Bang? Answers to such questions would no doubt depend on the average number of times black holes merge to attain a given mass. Over cosmological time scales, this churning might add up.
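A back-of-envelope in Python, using the figures quoted in this post (a combined 60 solar masses, with roughly 5% radiated away); the merger rate below is a pure placeholder, not an observed number:

```python
# Back-of-envelope only; the merger rate is a placeholder, not a measurement.

M_SUN = 1.989e30       # kg
C = 2.998e8            # m/s

combined_mass = 60 * M_SUN
radiated_mass = 0.05 * combined_mass            # ~5% lost to gravitational waves
energy_per_merger = radiated_mass * C**2        # ~5e47 J, i.e. ~3 solar masses of energy

hypothetical_rate = 1e5                         # mergers per year (placeholder value)
energy_per_year = energy_per_merger * hypothetical_rate
print(f"{radiated_mass / M_SUN:.1f} solar masses per merger, "
      f"{energy_per_year:.2e} J/yr under the assumed rate")
```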

Does non-linear superposition admit standing gravitational waves?

Speaking of cosmological scales, if the universe is humming with these, what happens when two or more gravitational waves meet? Linear superposition does not work here, since GR is non-linear. My cursory search on the topic turned up little. I was wondering whether under some configurations such wave-wave interactions can yield standing waves, the kind of effects that might bear on dark matter and dark energy. For a standing wave here, I imagine any wave-wave effect that propagates at subluminal speeds should suffice.

Information conservation perspective

Finally, I wonder how much (if any) of the information buried in merging black holes is radiated out again as gravitational waves. I don't see this much discussed in the context of the black hole information loss problem. (If the information content of the black hole is proportional to its surface area, and the stable, post-merger surface area is less than the sum of the pre-merger surface areas, my thinking goes, then maybe some of that information had to escape as gravitational waves?)