Monday, August 12, 2019

Lonely Runner Redux



The other day I attempted to prove under rather lax conditions that

There exists a time t s.t. all the moving runners are exactly at a distance 1/2 from the still runner.

Turns out, I can't (and I think my claim is provably false). The trouble is with any runner whose speed has 2 as a factor. But I think I can show that the strategy works absent such a runner. So let me update the claim.

If the runners' speeds are odd relative primes then there exists a time t s.t. all the moving runners are exactly at a distance 1/2 from the still runner.

August 31, 2019

(The words "relative primes" in the claim above were thus crossed-out.)

Proof. At time t = 1/2 any runner with odd speed p is directly across from the starting point on the circle of unit circumference, since its position p/2 mod 1 equals 1/2 whenever p is odd.
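A quick numerical check (the odd speeds below are arbitrary examples):

```python
# Sanity check: at t = 1/2 a runner with odd speed p sits at p/2 mod 1 = 1/2,
# i.e., directly across from the start on the unit-circumference track.
for p in [1, 3, 5, 7, 11, 99]:   # arbitrary odd speeds, just for illustration
    print(p, (p * 0.5) % 1.0)    # prints 0.5 for each
```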


February 26, 2020

Note to self: The strategy mentioned at the bottom of this post might still work for cases with an even-speed runner in the mix. So that I don't forget (and maybe encourage myself to give it another go when I find the time), here's how I think you can approach it:

We might be able to approximate an even-speed runner as an odd-speed runner on a different time scale. For example, if we multiply all the runners' speeds by 99 (an odd number), then the timescale becomes 1/99th of before, and the odd-speed runners remain odd. To make the even-speed runners odd, we add one to the result of multiplying their speeds by 99.
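As a rough sketch of that bookkeeping (the factor 99 and the sample speeds below are just illustrative assumptions):

```python
from fractions import Fraction

speeds = [3, 4, 7]     # sample speeds; the 4 is the troublesome even one
k = 99                 # any odd scaling factor would do

# Rescale: multiplying every speed by k is the same as shrinking the time unit to 1/k.
scaled = [k * p for p in speeds]

# Tweak: bump each even scaled speed by 1 so that every approximate speed is odd.
approx = [q + 1 if q % 2 == 0 else q for q in scaled]

# Relative error introduced by the tweak (zero for the runners that were already odd).
for p, q, r in zip(speeds, scaled, approx):
    print(p, r, Fraction(r - q, q))
```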


(You can skip the rest below.)

In fact, let us describe an algorithm for calculating this time. As before, we consider time in discrete steps. Consider pmax, the maximum runner speed in the set {p1, p2, .., pn}.

Scale the track to be of length 2pmax, while keeping the runners' speeds constant. (You can think of this as our changing the units of time.) Consider the track to consist of 2pmax bins of length 1.

As before, label the bins from 0 to 2pmax -1. The position (bin) bi of any runner with speed pi at time t is

bi = t pi mod 2pmax

Observe that at time t = 1, and at every odd time thereafter, the fastest runner (with speed pmax) is directly across from the starting point (at bin pmax).

Similarly, for each runner i, find the smallest positive solution to

 pmax = ti pi mod 2pmax
  
Because the pi's are odd coprimes, solutions for ti always exist. And these ti's are always odd. Note that, as with the fastest runner, any odd multiple of ti is also a solution.

The time t1/2 when all runners are directly across the starting point then is


t1/2 = ∏ ti             ; ti ≠ ti-1, ti ≠ ti-2, ..

where the product is taken over distinct ti's. And after renormalizing our track length to 1

T1/2 = (1/2pmax) ∏ ti
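For concreteness, here is a minimal sketch of that calculation (assuming, as above, odd pairwise-coprime speeds; pow(p, -1, m) is Python's modular inverse):

```python
from math import prod

def t_half(speeds):
    """Compute t1/2 and T1/2 as described above, assuming odd, pairwise coprime speeds."""
    pmax = max(speeds)
    m = 2 * pmax
    ts = set()
    for p in speeds:
        # smallest positive ti with ti * pi = pmax (mod 2 pmax);
        # the inverse exists because pi is odd and coprime to pmax
        ts.add(1 if p == pmax else (pmax * pow(p, -1, m)) % m)
    t = prod(ts)            # product over the distinct ti's
    return t, t / m         # t1/2, then T1/2 after renormalizing the track to length 1

print(t_half([3, 5, 7, 11]))   # example 1 below: (11, 0.5)
```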




Examples:

1. {3, 5, 7, 11} gives T1/2 = 11/22 =  1/2

(Here, the distinct ti's are 1 and 11.)


2. {7, 11, 13, 15} gives T1/2 = (13 · 15) / (2 · 15) = 13/2


~

So the runner with speed 2 is special.

Now one thing to note is that if this argument is correct, then we can arrive at approximate solutions for the case where there is an even-speed runner, by scaling the speeds by a large odd factor and then tweaking these values to be odd coprimes.


Addenda:


August 15, 2019

The claim can be relaxed so that the only requirement is that the runners' speeds be odd. This is because adding a runner to the race whose speed divides the speed of any of the original runners does not change the method of calculating T1/2. So the updated claim is

If the runners' speeds are odd relative primes then there exists a time t s.t. all the moving runners are exactly at a distance 1/2 from the still runner.

August 29, 2019

Not so sure about this last claim that they just be odd. The argument hinges on the fastest runner's speed pmax not being divisible by any of the other runners' speeds, the pi's.

PS I risk making a fool of myself posting thoughts online. (I've decided it doesn't matter.) Anyway, I must be getting this whole problem wrong, because any runner with odd speed is, at time t = 1/2, directly across from the starting point on the circle of unit circumference. So all this calculation is much ado about nothing.


Thursday, August 8, 2019

Proving the Lonely Runner Problem Discretely

The other day I came across the Lonely Runner Conjecture from Tamás Görbe's tweet. Here's my attempt at an outline of a proof. In fact, I'm attempting a stronger result that is hinted at by Czerwiński, S. (2012). "Random runners are very lonely". Journal of Combinatorial Theory, Series A. 119: 1194–1199 (arXiv.org)


Addenda:
August 12, 2019
This article contains numerous errors. Here's an update.


August 10, 2019
This is wrong. I know why it's wrong, but don't why it's wrong [sic]. I'll post an update





Consider the discrete [re]formulation of the problem where the runners' speeds take on distinct integral values and one of the runners stays still. We aim to prove

There exists a time t s.t. all the moving runners are exactly at a distance 1/2 from the still runner.

Now in addition to quantizing the speed of our runners, let us restrict t to discrete values also. In this view then, the circle consists of a ring of bins, and at each step (incrementing t by 1) each runner hops as many bins forward as their integral speed.

(Observe that in the original formulation of the problem, we are free to scale t (i.e., to whatever units) as we wish. Thus it is important to keep in mind that it is only the relative magnitudes of the runners' speeds that matter.)

Let us have N = ∏ pi bins in the ring, where the pi's in the product are the distinct integral values of the runners' speeds. (It makes the argument seem clearer if you think of these pi's as co-prime, but our argument here doesn't hinge on it.)

Label the bins in the ring with numerals, starting with 0 (zero) for the starting bin (whence the race began). At any given time t, the bin bi for the i'th runner with speed pi is given by

bi = t pi mod N

The runners then meet at bin 0 (the starting point) every N steps. Without loss of generality, assume N is divisible by 2. (There are multiple ways we can transform the case where N is odd to an even one--but conceptually the simplest way is to just throw in another runner with speed 2.)
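In code, the discrete positions look like this (a small sketch; the speed set is an arbitrary example that happens to include a speed-2 runner):

```python
from math import prod

speeds = [2, 3, 5]                  # arbitrary integral speeds, one of them even
N = prod(speeds)                    # number of bins in the ring (here 30, which is even)

def bins(t):
    """Bin occupied by each runner after t discrete steps."""
    return [(t * p) % N for p in speeds]

print(bins(0), bins(N))             # everyone is at bin 0 at the start and again at t = N
```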

Now consider positions modulo N/2. Notice the runners also meet at bin N/2 at every time t = N(k + 1/2) (for all k in Z). Thus, the distance to the still runner at such times, after normalizing the ring's length to 1, is 1/2.

Also, note not all runners need to be running in the same direction for this result to hold. (Which in turn means the result holds from any runner's perspective.)

So this gives exact solutions for distance 1/2 as

t = (k + 1/2) ∏ pi ,  k any integer
Note, at this point, t can be loosened to take on fractional values.
~

What do you think? (This seems too easy. I must be missing something.)

Tuesday, August 6, 2019

Fuzzchain








In an earlier post, some 2 years ago, I thought out loud about how one might record the voluminous output of a computer assisted proof on a blockchain. It's a thought I've kept returning to and which I'm still developing, hopefully with the help of others (but if someone runs with this or some variation on it, that's okay too). I'm calling it Fuzzchain--and not just because its outlines are still fuzzy.

So I started sketching this out as a blog post, but quickly realized it would need to be an evolving document. I had begun with an outline:

  • Scale/distribute computational bandwidth (minimize duplication)
  • Reliably record the result of a long running computation
  • Random access the state of a [completed] long running computation
  • Compress the voluminous output of a long running computation (equivalent to previous bullet)
  • Imbue fuzz, the chain's native coin, with intrinsic (yet market driven) computational value.

The design of the proposed blockchain is discussed in parts.

  • Overview of protocol philosophy, design elements
    • hint-of-work (HoW)
    • chain state hash
    • work proposals
    • falsification incentives
    • block value function
    • block payload data
    • ecosystem evolution
  • Outline of protocol
    • programs written in a Turing-complete, but stack-based language
    • format for serializing execution checkpoints of programs
    • block elements
    • existing technologies
    • block linking rules
  • Discussion

But that kind of exposition sounds too normative. So I abandoned using that outline, but am including it here because it hints at what the project is about. Here's my working document. Hope it sparks some ideas.



Sunday, February 11, 2018

Blockchains, Transaction Immutability, and the Judiciary

The rise of Bitcoin and other cryptocurrencies as a recognized financial asset class has somewhat distracted pundits and observers from the original intent of these blockchain technologies, namely as a decentralized medium of exchange of goods and services. When viewed as a pure store of value instead, much like gold, a cryptocurrency's functional performance becomes peripheral to the belief system that sustains it. (That observation, and how the drive for profit might mechanize the creation of pure belief systems, is an interesting aside--perhaps, for another time.) To be sure, much is being developed on the functional side with 2nd layer infrastructure (Lightning Network, for example, to address scaling issues). So the vision of a functioning decentralized crypto-economy is still very much alive.

The crux of this post is a call to attention that the rise of the blockchain poses unique challenges to the judicial system. I write this not in the belief that my arguments and conclusions are necessarily correct, but with the worry that it's an issue we should address sooner rather than later. If economic prosperity is premised on sound property rights and well functioning courts, then to drink the Kool-Aid we must also consider what is to replace a withering judiciary. Transaction immutability, we shall see, the very principle held most dear to the community, lies at the heart of the issue. But before delving there, let us back up and consider the basic components of any transaction.


Trust Multifaceted in Transactions


It's easy to forget, but whether bartering, or exchanging a good or service for currency, there are always at least 2 types of items being exchanged in any given transaction. Each type of item exchanged introduces its own type of risk. Is the gold real? Are the apples rotten? Is it counterfeit money? Is the title clear?


The issue of trust (the other side of risk) presents itself not just in examining the quality or provenance of the goods/services being traded, but also in how ownership is conveyed (or transferred--IANAL but am taking liberties with the real estate law sense of convey).

Traditionally, the state has acted as a sort of guarantor in various capacities in a well functioning marketplace. Indeed, the minting of hard-to-forge currency can be seen as a means for facilitating trust on the monetary side of transactions.


Ownership Records


But that protection does not extend to the goods/services being purchased. For these, we largely depend on evolved structures and institutions. Ownership in certain asset classes (such as land) has historically been recorded, and in modern times the state has assumed the role of recorder of many such assets--real estate and vehicles come to mind. In other cases, this recording is delegated to existing evolved institutions in the marketplace. Here in the US, ownership of shares in public companies, for example, is recorded with an independent entity, the transfer agent--or at the brokerage house (in so-called "street name"), if held in a margin account. And for centuries across bazaars in the Middle East, wholesale goods in storage have often changed hands only in book-entry form.

Regardless of how (and whether) property is recorded, the power of the state to intervene and transfer the ownership of property (usually not to or from itself) is implicit in the marketplace. Indeed, it is generally (if implicitly) understood that property rights are rights granted and protected by the state. In well functioning market economies (not necessarily democracies) this power is exercised both sparingly and judiciously. While most transactions clear without its direct involvement, the state (usually through its judicial arm) intervenes in a minority of disputes. Ultimately, the exercise of this power rests in the state's ability to force property records (whether private or public) to convey ownership. And these records also include those of financial assets and currency.


Blockchains: Code as Law


The rule of law has always been essential to the proper functioning of the marketplace. If the traditional interpreter and arbiter was the judiciary, the new interpreter of contracts and law in the crypto-economy is code. This new interpreter is guaranteed to be impartial. It follows the rule book to the letter, and its records are final. Moreover, neither it nor any sovereign can force a change to its records to convey ownership.

In the land of Code as Law, the only remedies are legislative: an amendment to the rule book. Legislative measures, however, usually target classes of stakeholders; they seldom address individual transactions and in any event cannot scale to remedy any significant number of transactions in dispute. An example of such a legislative fix was the Ethereum fork to undo the DAO heist--which I griped about here.


Caveat Emptor: Mixed Legal Regimes


A transaction involving the purchase of a good or service with bitcoins, then, falls under two separate and independent legal regimes. On the buyer's side, the conveyance of the goods or the performance of the services purchased is governed and protected under the local laws of the state or relevant jurisdiction. On the seller's side, however, currency is conveyed under the Code as Law regime. Because neither regime can ultimately force the other to change its property records, the two legal regimes are largely disjoint.

From a theoretical standpoint, jurisdictional power enjoys a slight upper hand over Code as Law. While in the face of a court judgement an individual may be unwilling to give up their secret key in order to, say, return ill-gotten bitcoin gains, the state can still induce cooperation by depriving them of liberty.

What about smart contracts? Here, jurisdictional power seems to enjoy a bigger advantage. In the event a smart contract conveys ownership of, say, a hard asset, the state (again, usually through its judicial arm) may potentially overrule the new record of ownership (title) on the blockchain. Same if it involves an exchange of financial assets (such as shares in a public company). And this fact, in turn, clouds the authority of records on blockchains that reference assets that ultimately fall under the purview of the state.

Returning to the use case of a simple purchase of real goods with bitcoins, the risk asymmetry for buyer and seller is accentuated. Consider the following: would you feel more comfortable buying from a vendor who deposits your money at the local bank or one who immediately wires it to an account in the Cayman Islands? Buyer be wary.


Remarks


I am deeply skeptical that any system of commerce can prosper without the protections of a functioning judiciary. If commerce is to extend to the blockchain, then the blockchain must offer something in place of the courts. And if it does, then its records cannot be immutable in the strict sense we have espoused.

Let us then contemplate how a judiciary on a blockchain might work. If no transaction is ever [effectively] final, then perhaps there are a set of cryptographic keys that can override all others. One approach that comes to mind is to employ a proof-of-stake (PoS) strategy for constructing a blockchain. Some PoS protocols leave out how the initial stakeholders come to be: Algorand is an interesting recent example. What if a central bank were to issue crypto coins backed by fiat currency? That would take care of bootstrapping the initial stakeholders (anyone holding the pegged fiat currency would be provided a mechanism to be a stakeholder). Also, there might be special keys held in trust by the courts, that are collectively empowered to override all others. Just how the state manages [not to lose] its keys is obviously an open question.

Yes that is a perversion of the trustless, decentralized ideal. But what if the ideal is impractical? What if we are forced to build the analogous trust hierarchies on the blockchain as have already evolved in the real world?

And why would a sovereign consider issuing coins this way, besides? Perhaps because it couldn't risk being second.







Friday, June 30, 2017

Revisiting the Zoo

Image Credit: clipartpanda.com


Projections about the near term trajectory of future technologies suggest a revisit of the Zoo Hypothesis (Ball, 1973) for the so called Fermi Paradox. In this essay I recast the hypothesis in an updated context with an eye towards machine intelligence and information transfer as a means of interstellar travel (Scheffer, 1994).

Edit Note: While I prefer not to edit my blog posts (save fixing typos, grammar, etc.), I've decided to do it here. I'll try to stick to adding new, clearly marked sections (rather than editing existing ones).

Motivation


Since we don't know exactly what to look for, the search for extraterrestrial intelligence necessarily involves a good deal of conjecture about the nature of ETI. If we ever do discover an ET civilization, it will almost certainly be millions of years more developed than ours. This search, therefore, is necessarily informed by far future projections of our own technological progress. While such long range projections are clearly beyond our reach, much can be gleaned from near term predictions by futurists. Indeed, in a historical context, we find ourselves at the knee of a geometric growth ladder that casts the steps behind us as quaintly short, the ones ahead as dizzying fast, and the present ever harder to anchor. As we learn our future, so too must we adjust our search for ETI.

8 Dec. 2018 

A SETI Conjecture: If you can imagine it, they can build it


I propose the following guiding principle for the search for ETI. If you can imagine a technology that should be technically feasible (i.e. does not violate known laws of physics), then it's a technology an ETI (with its huge lead) has already achieved.

While not every attained technology is necessarily a technology in use (that leaves a technosignature, for example), some enjoy such outsize advantages that their use should seriously be contemplated.

Information Transfer As Means of Travel


In Machine Intelligence, the Cost of Interstellar Travel, and Fermi's Paradox (1994) Scheffer argued it is way cheaper to beam the information (bits) necessary to print an interstellar probe at the destination than it would be to physically propel the probe there. By now, this idea is a familiar theme with companies vying to mine the asteroid belt: it is generally understood that it would be far more cost effective to build the mining equipment on location than to ship it from Earth. And a good deal of this on-site manufacturing will involve printing 3D objects which may then be assembled into larger, useful objects. The blueprints for these manufactured objects of course originate from Earth, and we'd soon be able to transmit improvements to these blueprints at the speed of light.

The Printer As Computer


If the on-site manufacturing of asteroid mining equipment does not fully capture the idea of a general purpose printing technology, we can still contemplate it in the abstract (since we're considering technologically advanced civilizations). So first, a provisional definition:

General Purpose Printer (GPP). A printer that can print both simpler (less capable) and slightly more advanced versions of itself.

It's provisional because ideally one would strive to define it with the same rigor as, say, in asserting that a general purpose computer must be Turing Complete.

Perhaps the idea is better captured in the following Tombstone diagram (borrowed from compiler-speak).




Here the bottom "T" represents the printer. Given a blueprint (B), it operates on material and energy inputs (M/E) and outputs similar objects. The upper "T" (written entirely in the "blueprint" language) bootstraps the lower one to produce a more capable printer.
The evolving general purpose printer. From a small kernel of capabilities ever more complex designs can be instantiated. (The kernel here presumably needs a small arm to start off.)

The printer, thus, can be defined in its own "blueprint language," and much like a compiler outputting binaries on symbolic input, its material instantiations will be limited only by 1) the cleverness of the blueprint, and 2) the time required to execute that blueprint. And because it can be bootstrapped, the physical kernel that produces it (unfolds it) can be miniaturized--which in turn lowers the cost of physically transporting it.

Note we don't necessarily have to pin down the exact technology that enables this fuzzily defined GPP. Kurzweil, for example, suggests it must be nano-technology based (The Singularity Is Near, 2005), which seems reasonable when you consider you also need to print computing hardware in order to implement intelligence. Regardless, a technologically advanced civilization soon learns to manufacture things at arm's length.

The Printer As Portal


A GPP parked suitably close to material/energy resources functions much like a destination portal. It's an evolving portal, and it evolves in possibly three ways. One, from time to time, the portal receives code (blueprints) that make it a more capable printer. Two, the printer accumulates and stores common blueprints that it has printed thus allowing future versions of those blueprints to be transmitted using fewer bits. And three, if the printer is intelligent it can certainly evolve on its own. Although, from an engineering perspective, you probably want this intelligence to be more like a guardian sworn to the principles of the portal, whatever those are. (One sensible requirement is that it shouldn't wander away from where the sender expects it to be.)

Time and Information Flow


Although this form of travel is effectively at light speed (and consequently instantaneous from the perspective of the traveler), the vastness of space separates points of interest (such as our planet) greatly across time. Distances across the Milky Way are typically measured in tens of thousands of [light] years. Enough time for an alien civilization to miss the emergence and demise of a civilization on a far off planet (hopefully not ours). Assuming intelligent life is prevalent across the cosmos, even with on-site monitoring, word gets out late that a new civilization has emerged.

Earth has been an interesting planet for about a billion years now and should have been discovered well before humans evolved. It's not unreasonable to hypothesize that one or more GPPs were parked nearby long ago. Those GPPs would have had plenty of time to evolve--sufficient time, perhaps, for the singular culture the Zoo Hypothesis requires to take hold.

Physical Manifestations


8 Dec. 2018
The C-compiler is for the most part written in the C language. There are vestiges of its assembly (machine) language roots lurking about, but it's (almost) entirely defined in the language that defines it. In the C-compiler analogy for a GPP, a C program is a blueprint for something to be printed, and the compiler's binary output is the physical output of the GPP. Now although most programmers won't first compile their C-compiler and then compile their C-program, one can set up such a workflow. The crucial observation here is that it's possible to design the C-compiler in such a way that you need only a smaller, far less capable C-compiler to output the full blown, more capable one. (The compiler, recall, is just another compiled program, and its binary byte size here is a stand-in for the physical size of our GPP.)

The upshot of this observation is that over time, like a compiler that first compiles itself, a GPP's physical footprint can (and by our conjecture therefore does) become ever smaller. So small, that if we ever saw one in action, its physical outputs would appear to come out of nowhere.

Kurzweil predicts that humanity's artificial intelligence, manifested as self-replicating nano-bots, will one day, soon on a cosmological scale, transform the face of celestial bodies about it, and the universe with it, in an intelligence explosion. Here I take the opposite tack: an intelligence explosion leaves little trace of itself.

For once you can beam blueprints and physically instantiate (print, in our vernacular here) things at arm's length, there's little reason to keep physical stuff around once it's done doing whatever it was that it was supposed to do. As long as the memory of the activity (of meddling with physical stuff) is preserved, the necessary machinery (like that mining equipment on the now depleted asteroid) can be disassembled and put away. An information-based intelligence has little use for material things; it is more interested in their blueprints.

 8 Dec. 2018
By this reasoning then anything coming from a GPP likely returns to a GPP in order to be disposed of.

This is not to say super intelligent ETs do not build things from matter (and leave them there). They likely need to build much infrastructure to support their information based activity. But as communication speeds are important in any information based activity, this infrastructure would have to be concentrated in relatively small volumes of interstellar space. In such a scenario, there's little incentive to build far from the bazaar.

Next Steps


I don't particularly like the hypothesis in its original form because, as Ball also notes, it doesn't make falsifiable predictions: "[It] predicts that we shall never find them because they do not want to be found and have the technological ability to insure this." The step forward, it seems to me, is to attempt more specific postulates (such as the printer portal introduced here) that are still in keeping with the broader "deliberately avoiding interaction" theme.

If the hypothesis is broadly true, then there must be a point in a civilization's technological development beyond which they (the metaphorical zookeepers) will no longer eschew interaction. Which suggests a protocol to start the interaction.

The search for extraterrestrial intelligence ought to aim to systematically confirm or rule out the zoo hypothesis. A zoologist looking to document a new species might well parse tribal lore and anecdotal evidence for clues.


___________

Notes


31 Aug 2019
In Proving Darwin: Making Biology Mathematical (2013), pp. 32-33, Gregory Chaitin notes that 3D printers that can make copies of themselves (our GPP here) are paving the path to von Neumann universal constructors. He suggests calling it a universal factory.



Tuesday, May 16, 2017

On Strangeness: Extraordinary Claims and Evidence




Carl Sagan popularized the maxim "Extraordinary claims require extraordinary evidence." A good rule of thumb, and one which the scientific community generally adheres to. The extraordinariness of a claim has something to do with its strangeness (which is, of course, a subjective matter). Thus the strange, counter intuitive theory of Quantum Mechanics was developed only when faced with mounting, extraordinary (laboratory) evidence. Or take Hubble's strange notion that the universe must be expanding in every direction.

But not all strange theories and propositions arise from new ground breaking observations. Special Relativity, for example, which theorized a revolutionary relationship between hitherto independents, space and time, was arguably grounded in puzzling laboratory evidence from some 20 years before it (the Michelson-Morley experiment). In fact, neither Special- nor General Relativity is anchored on much "evidence". No, both these theories are actually extraordinary intellectual achievements anchored on but 2 propositions (the constancy of the speed of light, and the equivalence principle). Einstein conceived them both from thought experiments he had entertained since childhood. There was hardly any "extraordinary" evidence involved. Yet, his theoretical conclusions, strange as they were, were still acceptable (even welcome!) when first presented because, well.. physicists just love this sort of thing, the unyielding grind of (mathematical) logic leading to the delight of the unexpected: a new view of the old landscape, holes patched, loose ends tied, summoning (experimentally) verifiable predictions.

Curiosity Craves Strangeness


We covet the rule breaker, the extraordinary, the unconventional, the strange. Both experimentalist and theoretician seek strangeness. That's what keeps the game interesting. We absorb the strange, interpret it, and un-strange it. The theoretician's dream is to hold up a problem (a strangeness) and show if you see it from the angle they propose, it all looks simpler or makes better sense. If the angle itself is strange, then all the more fun with the insights the new vantage offers.

But there are limits to strangeness a consensus can tolerate. In all cases, a claim's introduction bumps into these limits when it broaches a reflection of ourselves. Over the years, the centuries, the scientific method has surely pushed back these limits. If we're aware of our anthropocentric blind spot (we have a name for it), the limits still remain. For though we know its nature, we don't know exactly where it lies.

The delightful tolerance for outlandish postulates and ideas in physics and cosmology is hard not to notice. There you can talk of multiverses, wormholes, even time travel--and still keep your job. Hell, you can even postulate alien megastructures engulfing a star on much less evidence and still be taken semi-seriously.

And you notice that SETI too is serious (experimental) science. Here we know not what strange we should look for, but we're fairly certain that it should be very, very far away. I find that certainty strange, and stranger still, that it's not properly tested. But even admitting it in some circles is akin to offering oneself for admittance to the asylum. So I don't. Or haven't much. (More on this topic in a subsequent post.)

Now I'll admit I have a taste for the crazy. I love nothing more than a chain of plausible arguments, thought experiments, leading one down a rabbit hole they didn't expect to find themselves in. But it's a taste for the crazy strange, not the crazy crazy.



Sunday, April 30, 2017

A quick argument on the linearity of space requirements for reversible computing

While checking out a paper Time/Space Tradeoffs for Reversible Computation (Bennett 1989) which Robin Hanson cites in his thought provoking book The Age of Em, I thought of a simpler line of attack for a proof of Bennett's result (not involving the Turing tapes). As Hanson points out, Moore's law hits a thermodynamic wall "early" with conventional computing architecture:

By one plausible calculation of the typical energy consumption in a maximum-output nanotech-based machine (~10 watts per cubic centimeter), all of the light energy reaching Earth from the sun could be used by a single city of nanotech hardware 100 meters (~33 stories) tall and 10 kilometers on each side (Freitas 1999). Yet Earth has raw materials enough to build a vastly larger volume of computer hardware.

Whereas computers today must necessarily dissipate energy (bit erasure creates heat) as they do their work, reversible computers are not bound by any thermodynamic limits in energy efficiency. This is an overlooked concept in future projections of technology, whether here on Earth, or speculating on the energy needs of advanced alien civilizations (Kardashev scale, Dyson sphere, etc.).

OK. So enough with background. The headline result of the paper is

For any e > 0, ordinary multi-tape Turing machines using time T and space S can be simulated by reversible ones using time O(T^(1+e)) and space O(S log T), or in linear time and space O(S T^e).

Now, if you're trained like me, the vanishing epsilon in the big O notation seems nonsensical. And the log of anything finite, is as good as a constant. (I'm a hand waving physicist, after all.) Regardless, this paper asserts that the space and time overheads of running a reversible algorithm need not be essentially any worse (big O-wise) than the irreversible (conventional) version of the algorithm. That, in my mind, was very surprising. The line of proof I have in mind, however, hopefully makes it less so. Here it is.

We begin by observing that any conventional (irreversible) program can be simulated by a reversible program in linear time using space O(S + T) (Lemma 1 in the paper).

Why must this be so? (I'm not offering a different proof from Bennett for this lemma; just an alternate exposition.) A basic tenet of reversible computing is that you must run a program in such a way that at any point along its execution path you keep enough information around to be also able to step backward to the previous instruction in the program. (Keeping this information around, by the way, does not magically get rid of wasted heat; it's just a necessary design attribute of any efficient, reversible computing hardware.) One way to make this more concrete is to require that the reversible computer's memory be all zeroed out both before the program is run and on termination. The inputs to the reversible computer are the program, and the program's inputs (which, strictly speaking, include a flag indicating which direction the program is to be run, forward or backward); the computer's outputs are the program together with the program's output (which, again, might include that flag flipped). But even with a brute force approach employing a write-once memory design (wherein memory is zeroed out as a last step), it's easy to see even this scheme's space overhead is O(S + T). (If you wrote out the contents of the registers after every clock cycle, the space overhead would still be O(S + T) while the final zeroing out step would still take O(T) time.)
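To make that bookkeeping concrete, here is a toy sketch (not Bennett's actual construction): an irreversible update rule is made steppable in reverse by logging the one value each step would otherwise erase, so the extra space grows as O(T) on top of the O(S) working state.

```python
def run_forward(state, steps, update):
    """Apply an irreversible update, but save what each step destroys (O(1) per step)."""
    history = []
    for _ in range(steps):
        state, erased = update(state)
        history.append(erased)       # the bits a conventional machine would simply discard
    return state, history

def run_backward(state, history, undo):
    """Replay the saved erasures in reverse to recover the initial state."""
    for erased in reversed(history):
        state = undo(state, erased)
    return state

# Toy update: halving erases the low bit, which must be kept to remain reversible.
update = lambda x: (x // 2, x & 1)
undo = lambda x, bit: 2 * x + bit

final, hist = run_forward(45, 5, update)
print(final, run_backward(final, hist, undo))   # 1 45 -- the initial state is recovered
```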

So O(S + T) is no big deal.

But observe that any irreversible program (that halts) can be partitioned into a series of intermediate irreversible subprograms, with each successor taking its predecessor's output as its input. (You can picture this construct simply as n breakpoints in the irreversible program generating n+1 chained subprograms.) Now the space overhead for none of these conventional subprograms can be any greater than O(S). Assume the breakpoints are spread out evenly in execution time--for argument's sake, though it doesn't hinge on it. That is, the time overhead for each of these chained, conventional subprograms is O(T/n). But from Lemma 1, we know the space overhead for the reversible version of each of these subprograms is no worse than O(S + T/n). So as we increase n, the number of intermediate reversible subprograms, the space overhead of the whole reversible program tends back to O(S) the space overhead of the conventional, irreversible program.
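Here is the same toy, reorganized around the partition argument (again only an illustration, with made-up numbers): keep n checkpoints, discard the per-step history during the forward run, and rebuild at most one segment's worth of it (about T/n entries) whenever you need to step backward.

```python
def backward_sweep(checkpoints, seg, update, undo):
    """Walk the whole run backward, holding at most one segment (~T/n erasures) at a time."""
    state, peak = checkpoints[-1], 0
    for cp in reversed(checkpoints[:-1]):
        x, history = cp, []
        for _ in range(seg):                 # redo one segment from its checkpoint, logging erasures
            x, erased = update(x)
            history.append(erased)
        peak = max(peak, len(history))
        for erased in reversed(history):     # then undo that segment, erasure by erasure
            state = undo(state, erased)
    return state, peak                       # recovered initial state, max history held

update = lambda x: (x // 2, x & 1)           # same toy update: halving erases the low bit
undo = lambda x, bit: 2 * x + bit

steps, seg, x = 12, 3, 12345                 # 12 steps, a checkpoint every 3 (n = 4 segments)
checkpoints = [x]
for i in range(steps):
    x, _ = update(x)                         # erased bits are *not* kept during the forward run
    if (i + 1) % seg == 0:
        checkpoints.append(x)

print(backward_sweep(checkpoints, seg, update, undo))   # (12345, 3)
```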

~

P.S. The breaking of the execution of a long running program into many parts is also a central theme in my post about managing large computer aided proofs.