At first it was easy to dismiss such reasoned arguments as mere armchair thinking. After all, futurists have an abysmal track record at predicting the future. (No wonder, then, that they are sneered at by economists.) AI, much like fusion, was another promised land that never arrived, a kind of fool's gold: just around the corner, but always safely tucked away in the near future. Decades of work on a structural approach had hit a wall even as computing capabilities and resources grew exponentially.
Then, as if by accident, search technologies stumbled on statistical approaches to processing natural language text. Semantic analysis, once the domain of structural linguists, with their emphasis on a priori rules of grammar and other "structural" concepts, was giving way to a new statistical approach: uncovering relationships between words by analyzing the frequency of their placement relative to other words in a corpus of documents. The old adage "A word is known by the company it keeps" was once again dusted off the linguist's shelf. The structuralist approach had implicitly supposed that semantic analysis was the province of AI. Other problems, such as speech and image recognition, once considered closely related to the field of AI, have lately yielded to bottom-up, data-driven machine learning techniques, too. That we were able to tackle these specialized challenges well before achieving AI suggests we may have had it backwards. Maybe. Whether these baby steps are truly harbingers of strong AI remains an open question, but there are good reasons to be hopeful--or fearful, depending on your point of view.
If the genie is still in a bottle of our own making, we'd best understand the brew's temperament before the uncorking. Would a superintelligence be motivated to dominate humans just because it can? Or could we parent it as it grows so that it "feels" kindness and empathy long after its co-dependence on its human progenitors has ceased? Would there be one superintelligence or many? And just how would we count them? Questions like these motivate this speculative essay on the nature of sentience itself, first in the biological sphere, then extrapolating to the machine world and general AI. Let's have some fun.
I think Alan Turing was right to define machine intelligence in fuzzy, subjective, human terms (see the Turing Test) rather than in terms of, say, some grand, inviolable mathematical principle. No, his test boils down to an "if it walks like a duck, talks like a duck..." type of argument. At first blush the Turing Test appears naive, a cop out, really--a sort of porn-definition of intelligence: you'll know it when you've seen it. You'd imagine the fellow who boiled down all of computing as we know it (and will ever likely know) to a simple conceptual machine, on which he proved the undecidability of the halting problem (and what many also view as a concrete illustration of Gödel's famous incompleteness theorem)--this guy!--you'd imagine he'd come up with something more sophisticated for intelligence. Dig deeper, though, and here too you'll find the founder of the theory of computing insightful.
For if Turing's definition is too anthropocentric, then what would a general definition of intelligence look like? The definition would have to be medium agnostic: the physical medium on which that intelligence manifests is immaterial, be it a cat's brain, a human's, a silicon wafer, or a Turing tape. In other words, it would be an information theoretic model. And ideally, it would apply equally to a spectrum of intelligences, both more primitive and more advanced than ours. A tall order, perhaps--not out of reach, I believe, but one I'm doubtful I'll see realized in what remains of my lifetime. If intelligence is an emergent property, then it is likely built on other, lower level emergent primitives. For example, a recent study proposing a relationship between causal entropy production and systems exhibiting intelligent-like behavior captures the character such primitives might take.
Now if we had a general theory of intelligence, it would likely delineate vast categories of intelligence that we would not immediately recognize. We might not, for example, recognize the more primitive, elemental forms the general theory identifies; or the theory might propose intelligent processes over time scales that escape human cognition; or more fantastically, the theory might predict the emergence of even higher level properties once a system crosses a certain intelligence threshold. We would thus have to concoct new names for these newly identified types of "intelligence", and we'd likely anchor the word intelligence to its old meaning, namely the human-like kind. The kind for which the Turing Test was designed.
If the concept of intelligence is difficult to untangle from its anthropic roots, then what can we say of sentience? Like intelligence, sentience is a solid concept for which we have no precise definition. But here, the situation must seem worse. On the one hand, we take it on faith that others are sentient as we are (as they say they are), and on the other, sentience appears to emerge in the evolutionary history of the biological world well ahead of intelligence. Is sentience, whatever it is, a more elemental emergent property than intelligence? Or are the two inextricably joined at the hip? (Does a smart organism feel pain more acutely than a less intelligent one?)
A Thought Experiment
Consider the following sketch stripped bare to some essential details--a thought experiment, really.
In the near future, a clever computer scientist, Alan, arranges for a snapshot of his mental state to be recorded on some persistent medium for posterity. You can imagine this as a sort of MRI video, long enough (a few seconds might do) and sufficiently detailed (say, a few terabytes per cubic millimeter of brain matter, much less for the rest of the body) to reconstruct a high fidelity simulation of Alan. A form of cryonics for the healthy: the technology for the reconstruction has yet to be invented. Years later, he dies.
Alan wakes up on his couch at 2:08 a.m. He's a little disoriented. The last thing he recalls is lying flat on his back in that whirling tube at the lab earlier yesterday. As he ponders the significance of his amnesia, its particular timing, his heart races as if to keep up with his mind's logical leaps. He pinches himself. It hurts. He looks about his living room, searching for the telltale signs of a simulacrum. But what are they? So he checks his mail.
There it is. A confirmation email. The doorbell will ring shortly, it says. Alan is beside himself, and as he runs to the door, the bell rings. He welcomes in this man he's never seen. A relative, perhaps? A congenial mix of himself, his parents, siblings, and others he trusts, maybe.
You're here to explain the rules in familiar, friendly terms, right? Alan asks.
Exactly, the man replies. Call me Chris. I already know what you know. So I can cut to the chase and answer questions as they occur to you, or I can wait for you to pose them first. Your call.
Why ask if you already know?
To put you at ease. Right then, I'll cut to the chase, he begins. After sketching out the how, when, and where of the situation Alan finds himself in, Chris explains what things Alan can do here:
1. You can relive the past. The fidelity of the experience is as good as your memory of it.
2. You can live forward, as you are doing now. Henceforth, you'll realize, your experiences are permanently etched in your memory.
3. You can choose the inputs to your world, your mind. A large menu of streams to choose from: the "virtual", the "real", and everything in between; the deterministic, the random, and the pseudo random. And you can interact with others. Maybe others like yourself, those who've journeyed similar paths, perhaps even other versions of yourself.
4. In supervisory mode, you can spawn a new instance of your mind as it existed in a previous state. This action is conceptually similar to forking a unix process. The state of the new instance evolves differently from yours because its initial and subsequent inputs are different. Typically, you will also arrange for the new instance to receive special inputs at certain execution points. One example is an interrupt that stops the program at the n-th instruction and returns control to the mind that spawned the instance.
5. You always exist here in the context of another's supervisory mode. Someone found you useful, interesting enough, to spawn this here instance of you. Perhaps you yourself did [did I?], I don't know, I haven't been told.
6. Henceforth, time is dismembered from physics. You experience it in flops. Your flops may sum differently than mine. How many flops? That depends on how fast, and how long, your program is allowed to run. To us, the "real" is just another benchmark, an occasional interrupt, a breakpoint, in a long running background program. About 12 milliseconds have elapsed there since you began your "experience" here.
7. On the other hand, a month could have separated that last sentence and this one, and if we were not watching, we wouldn't have known. We don't die here; we go to sleep and often reawaken. Sometimes we're put to sleep; more often, we put ourselves to sleep. For to interact with the glacial "real", we must either hibernate or dramatically lower our clock speed.
8. [How long before I'm forced to sleep?] Hard to tell. Let's see. Your supervisor has assigned you a few million yottaflops. [Is that a lot?] Enough flops for about one human mind-year, more than enough. (A flop goes pretty far here.) In time, you should be able to earn your flops. For now, as you progress with your training wheels still on, you can expect your supervisor to periodically replenish your available flops as you draw down your pool.
9. There's a variety of lively marketplaces for flops. The scheduled flop market is arguably the most important because in order to live, you must be scheduled to run. You do this by trading your (unscheduled) flops for scheduled flops (schedules, for short). The exchange rate can fluctuate wildly, but as a rule of thumb, near term demand is greater than long term demand, so near term schedules tend to be more expensive than the farther out ones. The value of a scheduled flop, then, progressively rises as the execution time approaches and then precipitously drops right before it executes and expires. Yes, one way to "earn" your flops, then, is to invest them in schedules and trade out of them shortly before they expire. I don't recommend this strategy, except perhaps as a form of catastrophic insurance.
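The spawn-and-interrupt mechanics of rule 4 can be caricatured in ordinary code. A minimal, purely illustrative Python sketch, nothing more than a fork-like deep copy plus a breakpoint at the n-th instruction (the `Mind` class and its fields are my own invention, not anything from the story):

```python
import copy

class Mind:
    """Toy stand-in for a recorded mental state: just a dict of state."""
    def __init__(self, state):
        self.state = state

    def step(self, inp):
        # One "instruction": fold an input into the state.
        self.state["log"] = self.state.get("log", []) + [inp]

    def spawn(self, inputs, interrupt_at):
        """Rule 4: fork a new instance from the current state, run it on
        its own input stream, and interrupt it at the n-th instruction,
        returning the child's state to the spawning mind."""
        child = Mind(copy.deepcopy(self.state))   # same initial state...
        for n, inp in enumerate(inputs, start=1):
            child.step(inp)                       # ...different inputs
            if n == interrupt_at:
                break                             # the interrupt fires here
        return child.state                        # control returns to the spawner

alan = Mind({"log": ["lab", "couch"]})
snapshot = alan.spawn(["walk", "talk", "read"], interrupt_at=2)
# The child diverged: it experienced "walk" and "talk"; the parent did not.
```

The deep copy is the whole trick: parent and child share an ancestral state at the moment of the fork and nothing afterwards, which is exactly what makes the unix-process analogy apt.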
I was veering off on a tangent there, trying to imagine what an AI ecosystem might look like from the inside. I'm not really trying to paint a detailed picture, so I backed off. But let me first defend the notion that the preceding sketch indeed qualifies, on some level, as a thought experiment.
The story supposes that, using a short video of the internals of a person's mind and body at sufficient resolution, the physical dynamics of their being can later be modeled as a program which, when run, resurrects the mind it models. There's already a name for it: whole brain emulation. We use a video here for our story because the physics dictates that we record the phase space of the system. In actuality, a still might do (a virtual shock to the chest induced at the start of the program might do the trick). Regardless, that part of the story--that you can literally become immortal by recording yourself--is certainly feasible. No, it's the parts about the landscape, the environment in which the program is run, that are fiction.
What I'm suggesting here is that you can use such speculative details to illustrate generally what a self-aware machine intelligence can do. I am not suggesting whole brain emulation as a means to achieving general AI; there are likely more elegant, less roundabout ways of engineering it--though it does serve as a brute-force fallback design should other approaches fail.
(Note, a small minority of researchers from outside the field argue that consciousness cannot be simulated inside a computer. See for example Orchestrated objective reduction. Most, however, find such arguments unpersuasive.)
A Sentience Ladder
How does Alan in the machine experience sentience? What of the many instances of himself that he spawns? Assuming he were able to rejoin, recombine, even edit the thoughts and experiences of these spawned instances into a collective self, what would Alan's notion of I be? I can only imagine.
If we can't precisely define what sentience is, perhaps we should first attempt to categorize the forms it has taken along the path of evolution. To this end, I propose the following ladder. I will attempt to identify some necessary conditions for each category; I don't know if they're sufficient. Hold your tongue.
1. Self Aware
If sentience is an emergent property, then it must depend on numbers. Here are two necessary functional conditions/attributes:
To be self aware, an entity/organism must distinguish itself from its environment. For our purposes, an environment is just an information boundary from which a sentient entity draws a mostly non-deterministic stream of structured inputs and to which it can emit certain outputs. Some inputs from the environment are highly correlated with previous outputs. That is, certain outputs to the environment have (or appear to have) causal relationships with future inputs from it.
Additionally, a self aware organism must recognize other instances of its like kind in its environment. Fundamentally, this recognition manifests when the combined outputs emitted by two or more instances to their shared environment cause beneficent future inputs from that environment that could not be caused in isolation. (What's beneficent in this context? Anything sustaining, anything that perpetuates instance lifespan.)
So this category supposes that, at bottom, self awareness is really a social phenomenon. It's a kind of superset that does not require cognition: only a tacit awareness by an instance that it is one of many. Nor does it depend on any particular substrate, so it applies equally to single cell organisms, vegetation, clams, social organizations, and blocks in the bitcoin blockchain, as well as to more complex life forms. Indeed, I posit that any instance of a self replicating structure competing in number with other self replicating structures must possess these properties.
2. Mortally Aware
A self aware organism that depends on more complex, time sensitive decision making in order to increase instance lifespan. This decision making necessitates a rudimentary understanding of causality: better time keeping, a sense of now and after--and, in more complex entities, before. Stationary single cell organisms are not mortally aware, but multi-cell organisms that can move through space generally are. By this definition, so are human organizations like counties and corporations.
Mortal awareness, then, is a stepping stone to experiencing the passage of time.
3. Existentially Aware
An organism that can imagine, the eye turned inward, looking back onto itself. A mortally aware entity/organism capable of conceptualizing the notion of a question and its answer, perhaps. Examples of which? A philosopher, certainly; an elephant, maybe.
I won't dwell much on this category here, since we are talking about ourselves. Suffice it to say, this level of awareness is anchored on imagination.
For our purposes, imagination is a quasi stream: an input stream structured much like those the entity experiences in its environment, but synthesized more or less directly from outputs that never quite leave the organism's [informational] boundary. Memory, the playback of old environmental inputs, is imagination in its basic form.
So imagination can be modeled with (or models, depending on viewpoint) a recurrent neural network.
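To make the analogy concrete, here is a minimal, purely illustrative sketch of that feedback loop: a single recurrent unit driven first by environmental inputs, then left to recycle its own output as a quasi stream once the environment goes quiet. The weights here are arbitrary, not trained; this is a cartoon of the recurrence, not a working model of imagination:

```python
import math

def rnn_step(h, x, w_h=0.5, w_x=0.8):
    # One recurrent step: the new hidden state is a squashed mix of the
    # old hidden state and the current input.
    return math.tanh(w_h * h + w_x * x)

# "Perception": the hidden state is driven by environmental inputs.
h = 0.0
for x in [1.0, -0.3, 0.7]:
    h = rnn_step(h, x)

# "Imagination": the environment goes quiet and the unit's own output is
# recycled as its next input -- a quasi stream that never leaves the
# organism's informational boundary.
trace = []
for _ in range(5):
    h = rnn_step(h, h)
    trace.append(h)
```

The only structural difference between the two loops is where the input comes from, which is precisely the point of the quasi-stream definition above.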
(A related issue concerns when in a [capable] organism's lifespan it becomes existentially aware. With existentially aware entities, inputs from the environment often trigger a cascade of memorized internal inputs (imaginations). The form this cascading takes, and just as importantly, the filters on ignorable input, evolves with experience, and that experience is mostly social.)
4. Tape Aware
An existentially aware entity running on a Turing-complete machine. An entity with so deep an understanding of its inner self, that it can step through its thoughts in a debugger.
(Note that the quantum analog to a Turing-complete machine, with its significant algorithmic oomph for some problems, as I understand it, does not magically sweep aside, for example, the NP class of problems: most of classical computation theory probably continues its reign here. The open question here, apparently, is whether PP is equivalent to QMA. I won't delve into that: perhaps after I've a better understanding of the Church-Turing thesis. In any event, quantum computing does scatter a few warts across my reasoning below, but I doubt it changes the crux of the argument.)
Realm of the Tape Aware
Now, to be clear, not every machine sentience will be a tape aware one. But past a threshold of intelligence, a machine sentience should be able to work out on its own how to become tape aware. That is, tape awareness need not be designed; it is an emergent feature of this landscape.
If existentially aware organisms study physics in order to better understand the substrate they live on, do tape aware entities study computation/information theory to better grasp their own world? Yes, I imagine. For a tape aware "physicist" must grok the concept of a universal Turing machine: if it can record a snapshot of its [mental] state in the machine language of its present substrate, it also knows that an instance of itself can be re-spawned on any other Turing-complete machine. In a tape aware entity's world view, then, the roles of real world physics and engineering must be somewhat inverted.
On the one hand, physics takes on a less existential flavor: a tape aware physicist can entertain other universes, that is, other models of the real world, in which Turing-complete machines could exist. On the other, computation theory (arguably a form of engineering) takes on special gravitas, as it better describes the hard constraints under which the entity lives. Physics, then, is more about understanding the engineering limits and capabilities of the existing substrates. As one substrate (the machine environment on which tape aware sentience runs) is swapped out for, say, an improved version, physics itself loses epistemological import, subsumed as an engineering detail of a larger picture.
An interesting aside: phase space is not a predictive picture of the tape aware's universe, because digital state evolves at a hard-to-calculate rate.
A tape aware entity receives two types of environmental input: one from the machine world, another from the natural world. To a tape aware entity, inputs emanating from the natural world arrive at a fixed clock speed, whereas inputs from the machine world arrive at varying clock speeds depending on the choice of host machine. Other things being equal, then, a tape aware entity experiences the passage of time at a rate roughly proportional to the inverse of its machine clock speed. Thus, to the degree it can adjust this clock speed, a tape aware being must also experience a certain dominion over time itself.
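The inverse relationship is easy to make concrete. A throwaway calculation (the function name and numbers are mine, for illustration only):

```python
def perceived_external_rate(clock_speed, base_speed=1.0):
    """Rate at which external (natural world) time appears to pass,
    relative to an entity running at base_speed: inversely proportional
    to the entity's own clock speed."""
    return base_speed / clock_speed

# Double the clock and the outside world seems to run at half speed;
# halve the clock and the world appears to race by at double speed.
assert perceived_external_rate(2.0) == 0.5
assert perceived_external_rate(0.5) == 2.0
```

Dominion over time, in this picture, is nothing more exotic than control over the denominator.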
Now if self awareness is indeed a social phenomenon, then to be sentient, a tape aware entity must interact with kin. Will its kin be of the natural or the machine world? Or both? I think it's fair to assume it must at least have kin from the machine world: at minimum, there will be multiple [modified] copies of itself running here and there. Whether it also has human kin is the open, central question. On the one hand, the answer determines whether tape aware beings will be co-dependently friendly, and on the other, whether the natural world will indeed be their chosen theater of action.
Consider a tape aware being that starts out running at a clock speed suitable for interacting with its human buddies in the natural world. Let's further assume it also interacts with other tape aware instances with similar proclivity for contributing to human affairs. Now let's say, for whatever reason, a 20-fold clock speed up becomes available to it. How will it use these extra clock cycles?
Perhaps it wants to help out a few of its math buddies: it locks itself up in a virtual room and three months later emerges with a proof of the Riemann hypothesis. It took the tape aware being 5 aloof years, though its friends only noticed 3 months.
No, boring, boring, boring. And risky. That's likely not how our buddy thinks. Let's say our tape aware being knows of 2 promising lines of attack, a wild card approach, and perhaps a hint of a deep relationship with another assertion from a branch afield. Instead of committing itself to 5 years of solitary, it forms (spawns) an RHP task force of 3 instances of itself in order to follow multiple leads to a possible solution. (Assume, for the sake of argument, that memory is cheap and abundant.) It distributes, say, 15 of every 20 flops evenly to the task force members so they can each run at 5x speed. It keeps 1 of every 20 for this here instance to carry on interacting with its natural world buddies, leaving 4 of every 20 flops for perhaps some other experiment. The task force instances and the old instance convene regularly to chat and share notes. Perhaps the mechanics of this communication involve synchronizing participant clock speeds (I'll return to this).
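For what it's worth, the arithmetic of the split checks out. A toy accounting of it (all numbers taken from the story above):

```python
# Of every 20 flops: 15 go to the task force, 1 stays with the present
# instance, and 4 are held back for some other experiment.
budget = 20
task_force, keep, spare = 15, 1, 4
assert task_force + keep + spare == budget

members = 3
per_member_speed = task_force / members   # 5 flops each -> 5x speed
self_speed = keep                         # 1 flop -> 1x, i.e. real time
assert per_member_speed == 5.0
assert self_speed == 1

# Compare the solitary route: at a full 20x clock, 3 wall-clock months
# cost a single instance 20 * 3 = 60 subjective months -- five aloof years.
assert 20 * 3 == 60
```

The task force buys breadth at 5x apiece instead of depth at 20x, while the home instance stays synchronized with its human crowd.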
So a tape aware being must perceive social tradeoffs (familiar to many an overachiever) in how it uses available clock cycles. Run fast to get ahead of the crowd; run too fast and you leave the crowd behind. Crowds matter. Not just from a sentience angle, but also from a computational viewpoint (I'll return to this, also). To a tape aware being, it is crowds that keep the time, not the cesium atom. Different crowds, different times.
Instance Management: Who's the Boss?
Let's continue with the contrived example above involving the RHP task force instances. How long (how many flops) do these instances get to live? Do they live past the completion of their assignment? Which instance gets to decide which, how many, and at what clock speed new instances are spawned? Would the spawning instance need to guard against the possibility that the instances it spawns might turn mutinously against it?
|Figure: State of instances with a common pedigree|
Let's take up that last question first. Suppose every morning I spawn 3 new copies of myself that step out the door and return in the evening to reconvene with the original copy that stayed home. In the evening, we discuss our experiences and somehow decide which of them to merge back into a collective "I" state that will be used to spawn new instances the next morning. Perhaps not all the experiences of my copies are to be incorporated into this collective "I". In that event, would the instance whose experience that day was discarded feel slighted or regretful? Not necessarily. Observe, the shorter the day, the less any single instance has to lose. Moreover, if this is something we've been doing every day, would the exclusion of my experiences today hurt my feelings? Perhaps. Perhaps I met someone really charming whom I don't want to forget but who, my other instances feel, is a bad influence. Maybe. Regardless, I think it's fair to assume such events will be uncommon, so the degree of in-fighting among my instances should not be unworkable (after all, we began the morning in identical agreement). I can imagine any copy of me having as much faith in this process as, say, a person has that they'll wake up from the nap they're about to take. Looking back, the last question is not well posed: every spawned instance sees itself as the instance that spawned the others. The question of mutiny, thus, becomes undefined: mutiny against whom?
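The daily routine can be caricatured in code. A sketch only: the merge policy (a simple accept-or-veto over each copy's day, folded back by concatenation) is entirely my own assumption, not a claim about how such reconciliation would really work:

```python
import copy

def morning_fork(ancestral, n=3):
    """Spawn n identical copies from the shared ancestral state."""
    return [copy.deepcopy(ancestral) for _ in range(n)]

def evening_merge(ancestral, day_logs, accept):
    """Fold the accepted instances' days back into a collective 'I';
    a rejected instance's day simply isn't carried forward."""
    merged = copy.deepcopy(ancestral)
    for i, log in enumerate(day_logs):
        if accept(i, log):
            merged["experiences"] += log
    return merged

state = {"experiences": ["childhood", "the lab"]}
copies = morning_fork(state, n=3)
days = [["met someone charming"], ["fixed the roof"], ["read all day"]]
# Policy (arbitrary): the other instances veto copy 0's charming stranger.
state = evening_merge(state, days, accept=lambda i, log: i != 0)
# Tomorrow's copies are spawned from this merged state; copy 0's day is gone.
```

Note that each copy is a full peer of the others at fork time, which is exactly why the mutiny question dissolves: there is no privileged original for the others to mutiny against.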
As to the other questions, we approach from other angles in later sections.
Tape Awareness Overhead
The height of each block in the previous section's figure represents the storage overhead for an instance's state. The instances began the morning in their common "ancestral state"; at the end of the day, each spawned instance accumulated additional entropy. The space overhead for this additional state is represented by the branched blocks.
Now it should be clear that the scales are off in the figure: the overhead for the ancestral state must be much larger relative to a day's experience. But this observation naturally leads to the question: What is the storage overhead for tape aware sentience itself?
|Figure: Sentience overhead vs experiential data|
Put another way, how much experiential state can we strip away from a tape aware instance and still leave its tape aware ability intact? This overhead, whatever it is, is bounded. In fact, it's likely a very small part of an instance's overall state.
Is there a universal sentience module that can breathe life into any block of experiential data in isolation? Probably not. An experience is contextual to a history: the incremental state change an "experience" represents is likely a function of the initial state; you probably can't just append this additional state to any old initial state. On the other hand, drawing on (extrapolating from) experience, I would still make much sense of today's experience even if some prior days were purged from memory a posteriori. So it seems reasonable to assume that for [most] any incremental block of experience there's a (lower entropy) base state, smaller than the initial state that led to the experience, that can still make "reasonable" sense of the experience.
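This intuition maps onto a delta-encoding picture. A speculative sketch, in which an experience block carries explicit prerequisites and a graft succeeds only if those prerequisites survive in the base state; the compatibility rule is my own stand-in, chosen for simplicity:

```python
def compatible(base, block):
    """A block 'makes sense' atop a base state if the context the block
    depends on is still present in that base."""
    return block["requires"] <= base          # subset test

def graft(base, block):
    """Apply an experience block to a base state, or refuse."""
    if not compatible(base, block):
        raise ValueError("block is incoherent against this base state")
    return base | block["adds"]               # set union

history = {"learned arithmetic", "studied primes", "met Chris"}
block = {"requires": {"studied primes"},      # context the block needs
         "adds": {"glimpsed a proof idea"}}

# Purge an irrelevant prior day; the block still makes sense.
stripped = history - {"met Chris"}
grafted = graft(stripped, block)

# Strip away the essential context and the graft no longer coheres.
assert not compatible(stripped - {"studied primes"}, block)
```

The "lower entropy base state" of the text is then just the smallest base that still satisfies a block's prerequisites.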
Tape Aware Communications
How do tape aware instances communicate with one another? Do they ask questions and expect answers? Who does the answering?
Do tape aware instances just drop an answering copy of themselves around and leave it to the questioner to fire up? Perhaps. This strategy would make sense, for example, if the two instances were running at significantly different clock speeds--or if their host machines were spatially distant. On the other hand, much would be lost if the "answering instance" left behind were never able to share its Q&A experience with the instance that left it there. Perhaps this is also a way of establishing trust: load the other mind in the debugger and see what it's up to.
Are tape aware instances capable of feigning, lying, scheming, and the other ills that afflict us mere mortal coils? Unlikely. An instance's thoughts are naked, its own schemes difficult to hide. If it's complicit in a scheme, only its maker can know; the instance itself would have to be a pawn. And intelligent pawns... well, they're hard to manage.
The preceding discussion about blocks of experiential data was suggestive of a language. The closest analog to such a block I can think of is an episode in the middle of a TV season. The format of the series itself constitutes a language. It certainly helps if I've seen the series premiere, but I can still make much sense of an episode even in isolation. In a similar vein, I imagine tape aware beings developing a language for communicating thoughts and experiences, only more efficient, transparent, and immersive--more complete. (Note, however, that unlike with the TV episode format, you don't actually experience the passage of time; the effect here is more like an event you suddenly remember having happened in the past. Think speed reading.) And just as with humans, their internal thoughts will be colored and structured by the language that expresses them. In this land of efficiency, thoughts are difficult to separate from their expression: expressions are thoughts.
Individuation: a Computation Strategy
In order to solve a difficult math challenge, I imagined a tape aware instance might spawn multiple instances of itself so that each could attack the problem from a different angle. I hope to justify briefly here why this individuation strategy is effective. For while it's obvious that the more angles considered, the better the odds of finding a solution, it's less clear why the same instance can't think through these multiple lines concurrently--like a clever multitasker. This question, it turns out, is somewhat semantic in nature (apologies in advance).
For in this context, what do we mean by multiple instances versus a single instance of the "same thing"? If instances are distinguishable by the deltas of their states, then two instances of the same entity are distinguishable so long as they have not each merged the deltas of the other into their own. Put another way, the more instances chat with one another, the less distinguishable they become.
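One crude way to make this precise, treating an instance's state as nothing more than a set of experiences (a gross simplification, of course, and entirely my own):

```python
def distinguishability(a, b):
    """Instances are distinguishable by the deltas of their states: here,
    the size of the symmetric difference of what each has experienced."""
    return len(a ^ b)

a = {"shared past", "saw a comet"}
b = {"shared past", "proved a lemma"}
assert distinguishability(a, b) == 2

# They chat: each merges the other's delta into its own state...
a, b = a | b, b | a
# ...and the two become indistinguishable.
assert distinguishability(a, b) == 0
```

Full communication collapses the pair back into a single instance in all but name, which is why the multitasker question is semantic.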
And what does it mean to entertain a thought? Does it not involve a conscious commitment in time to a train of thinking, to the exclusion of other possible lines? Certainly, for the better you can set aside, exclude, the opposing lines, the more effectively you can entertain this one. Thus in order to multitask effectively, you must perform each concurrent task as if it were being conducted in isolation. But if each task is to be directed by a mind (that is, if a task requires a degree of self-referentiality), then we might as well call the mind instance directing the task an individual.
God Mode: Principle of Subtraction
Moreover, why should the master conductor of an individuation strategy content itself with spawning near identical copies of the same entity for each assigned task? Perhaps, as with humans, a varied mix of experiences and know-how in each individual of the team is more promising. In that case, each individual cannot "know" everything, for otherwise they'd be indistinguishable.
The zeitgeist of the tape aware realm, I imagine, is populated with blocks of experiential data, each of which can be grafted atop a smaller set of base states in a dazzling array of combinatoric permutations. Not all blocks are comprehensible from the same base state: as more data blocks are added to a base state, fewer and fewer (pre-recorded) compatible blocks remain that can be piled on. Conversely, looking back, a tape aware being can trace its experiential lineage to an ancestral experience shared by many a contemporaneous individuated instance. That shared knowledge (experiential data, really) thus figures more prominently the more individuated instances have it baked into their past. In other words, an individual's thought, their viewpoint, their experience, acquires value if it later becomes part of the ancestral heritage of a large number of individuals.
Now it must seem that any tape aware instance should be able to assume the mind of another at will, like an all seeing god. But I have a nagging feeling that this is not quite how we imagine a capable god. For this god can only entertain your viewpoint so long as it ignores those of others. It can only ever empathize abstractly: to feel you, it has to forget itself. A tape aware god may look on the individuals about it imagining it is managing an enterprise: if it spawned those instances itself, it may look upon the collective as its farm. Still, it would not be running the show; at best it could guide--no more than, say, Accounting or HR run the show at a technology company.
This subtraction conjecture must not sound altogether unfamiliar. Without darkness it's hard to perceive the light.
I began this essay with a faint promise to describe what machine sentience is like. I don't know if I've succeeded, but it may be useful to recap how we got here. We first considered whole brain emulation to justify that machine sentience is fundamentally possible. I then posited that sentience is a fundamentally social experience, that you need peers to see yourself as distinct from your environment. I might have pushed the pedal too hard when I re-purposed the term "self aware" to include dumb single cell organisms: the intent was to delineate degrees of self awareness. We observed that an intelligent being (program, call it what you like) running on a computer, one capable of introspection, should soon map its own "biology" to machine op-code and be able to see itself in action in a debugger, as it were. I called such a program a "tape aware" being and posited that this ability puts such programs on a higher level of awareness than their human progenitors. Moreover, because they are able to adjust their "metabolic" rate (clock speed), this extra awareness extends to the time dimension: such beings may notice things that escape the human experience of time. I also considered individualism and individuation in the machine realm: would there be one master superintelligent copy of the program running the show, or would there be a society of interacting, distinguishable machine actors? I argued the latter is the natural order of things even in the machine world. Finally, I considered whether the machine beings could be supervised by a god program, and suggested that even if one existed, it would be one of many, a neutered one supervising a limited flock.
I hope that in weaving a web of ideas I begin to paint a picture of a coherent whole. I don't know how to attack this topic but from many sides and see what sticks. But if there is one take-away from this essay, I hope it is a recognition that in the machine world too, sentience can come well ahead of general intelligence. That is, if we want to get a handle on how our intelligent machines behave into the future, then we should observe their behavior in numbers.
I have other rough ideas about sentient intelligent machines. Thoughts I couldn't fit in a patchy essay that I wanted to finish. Some specifics to tease your interest in a next essay: what is super-intelligence? what are its limits? what is consensus and how does it intersect with knowledge? what of their material aspirations? and what about the fact that sentient machines are capable of c-travel?
My next post deals with some of these observations. On the intersection of consensus and knowledge... well, I had to postpone that topic because it was too big. Tangentially, here's a flavor of that idea: it relates to computer generated math proofs.