Wednesday, September 15, 2010

Toronymous

[A short story in progress..]

The twenty teens were destructive. Constructive, really, as the saying goes, but the construction came only in its ruinous wake. For just as the web's ecological structure, its business model, as they liked to say in the day, was beginning to solidify, just when it seemed the Wild West had finally been tamed, just when click-through rates were hailed as the most accurate barometer of economic activity, the rug was about to be pulled from underneath it.

The dark web, as it was later called (and then forgotten), like the web before it, developed mostly in the shadows, and by the time the captains of the new [old] order could see it coming, it was already too late. It represented not just new technology but also a movement. It was embodied in a word, an electronic device, really: the Toronymous.

In 2014, the small Taiwanese router manufacturer NextHop first introduced the device. Technically, it was hardly groundbreaking: a mashup of off-the-shelf hardware and open source software that many a hobbyist could have built, now packaged in a smooth, reassuring, charcoal-black enclosure bearing an orange lizard logo--available at Walmart.

To be sure, there would have been many similar devices on the store shelf at the time. Bundling home and business routers with extra smarts and storage capacity was already a booming growth category. These new smart routers (recall that the vernacular "smart", connoting snooty, comes a few years later) not only serviced their owners inside the network, but also served users outside the network (the public): the router, in other words, was also one or more websites. They were touted to do many things: a thermostat manufacturer, for example, provided a simple plugin that allowed the temperature to be set remotely.

But more significantly, following a number of high-profile divorce suits in which Facebook data were subpoenaed, people had begun to see a need to take physical possession of their digital contributions to the web. These routers now allowed their owners to host their own blogs, blurbs, and albums on a device they physically owned and could always unplug.

A number of geeky developments had set the stage. From small beginnings, W3C work on a secure, web-based, push/pull information exchange protocol had yielded a set of basic building blocks--collectively called DOSN (pronounced "Dawson")--for constructing (among other things) distributed, implementation-agnostic social networks. This simple, Spartan "standard" had attracted a good deal of mindshare in the community: developing DOSN-based, social-networky apps was considered sexy. What was cool about doing apps this way was that different implementations now had a way to talk to one another. And these applications had now found a new home in those shiny routers sitting on the store shelf.

The movement had caught on. A dark web had emerged. From the inside, it looked very much like the ordinary web outside. Only, who could see what was now determined by you and your friends, not some central clearing house. In the dark web, you would trust certain people with certain information. To be sure, your friends could leak the information you shared with them--but that is how it had always been and would be. Now, however, using steganographic tools, it was usually possible to determine who had leaked the information by examining the version of the leaked artifact.
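
(An aside for the technically curious: what I have in mind here is per-recipient watermarking--each friend receives a subtly different copy of the shared artifact, so a leaked copy names its source. Below is a minimal sketch in Python of what such a tool might do; every name in it, embed_tag and recover_tag included, is hypothetical.)

# Tag each recipient's copy by hiding that recipient's ID in the
# least-significant bits of the artifact's payload bytes.
def embed_tag(payload: bytes, tag: int, tag_bits: int = 32) -> bytes:
    """Return a copy of `payload` with `tag` hidden in the LSBs of the first `tag_bits` bytes."""
    out = bytearray(payload)
    for i in range(tag_bits):
        out[i] = (out[i] & 0xFE) | ((tag >> i) & 1)  # overwrite the LSB with one tag bit
    return bytes(out)

def recover_tag(payload: bytes, tag_bits: int = 32) -> int:
    """Read the hidden tag back out of a (possibly leaked) copy."""
    tag = 0
    for i in range(tag_bits):
        tag |= (payload[i] & 1) << i
    return tag

# Each friend gets a copy tagged with their user ID; the leaked copy names the leaker.
original = bytes(range(256)) * 16  # stand-in for image/audio data
copies = {uid: embed_tag(original, uid) for uid in (1001, 1002, 1003)}
leaked = copies[1002]
assert recover_tag(leaked) == 1002

(A real tool would spread the tag across the whole artifact and harden it against recompression and cropping; the naive least-significant-bit scheme above only illustrates the idea.)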

Facebook page views, meanwhile, for the first time in the company's history, started trending lower, and the company's stock price sank following two consecutive quarters of declining growth. The market had been caught off guard, and there was now no shortage of pundits predicting the next business model headed for the dustbin.

The growth of the ad-free, dark web, however, had thus far not come at Google's expense. Indeed, the benevolent giant was making forays into the smart router market with its own Linux-based Droid Route (DR) operating system. Google had seen no decline in overall traffic as the dark web had emerged. To the surprise of many, it turned out a great many darkies, as they liked to call themselves, were not so private after all. They still shared a great deal of information about themselves publicly--which the search engines were only too happy to index.

Google's advertising model had evolved. Broadly, its ad placements were determined by two inputs: the content the user was viewing, and "anonymized" information about that user. The content side of this equation was safe. The Personally Unidentifiable Identity (PUI, pronounced "pew-ee") end of the business, however, was increasingly under attack. Privacy groups had long bemoaned the lack of oversight in this burgeoning industry, and time and again, security experts had demonstrated how to de-anonymize supposedly anonymized information. Google, it was said, knew more about you than any other government or commercial entity on the planet. This concentration of informational power worried many, and some were even contemplating legislative remedies that would define what personal information could be harvested, and how and when.

But the browser makers had already begun chipping away at the PUI outfits' ability to harvest personal information about users. Better cookie and persona management, HTTP request header sanitization (e.g. user-agent and referrer), ad-block mode, and a slew of other out-of-the-box improvements had made life for the PUIs more difficult. A cat-and-mouse game had begun--with the cat casting an ever wider net as the mouse got better at evading it.
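
(Another aside: by "header sanitization" I mean something like the sketch below--identifying request headers are replaced with generic values, or dropped, before a request leaves the browser. The names and whitelist are mine, purely for illustration.)

# Replace or drop identifying request headers before a request leaves the browser.
GENERIC_VALUES = {
    "user-agent": "Mozilla/5.0 (generic)",  # one user-agent string for everyone
    "referer": "",                          # empty means: drop the header entirely
}

def sanitize(headers: dict) -> dict:
    """Return a copy of `headers` with identifying fields genericized or removed."""
    out = {}
    for name, value in headers.items():
        generic = GENERIC_VALUES.get(name.lower())
        if generic is None:
            out[name] = value    # not an identifying header: pass through
        elif generic:
            out[name] = generic  # identifying: replace with the generic value
        # (identifying with an empty generic value: drop the header)
    return out

# sanitize({"User-Agent": "...", "Referer": "http://...", "Accept": "text/html"})
# keeps Accept, genericizes User-Agent, and drops Referer.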

Still, the PUIs' ace in the hole was the user's IP address. At the end of the day, whether dynamically or statically assigned, a user's IP address was an anchor from which much information could be gleaned, cross-correlated against databases of user browsing habits, and assimilated into the existing "anonymized" user dossiers the PUIs maintained.

Others, however, saw a giant industry standing on its last legs. Take away the IP address, they argued, and the PUIs had nothing. Already a growing number of hobbyists and technically savvy users were modding their smart routers to do just that, by installing Tor/Privoxy gateways.

What distinguished NextHop from its peers, however, was that it was the first to offer this mod out of the box. Toronymous was a fantastic, if short-lived, marketing success. And the story of how Mr. Lang managed to engineer on-demand manufacturing capacity, of course, is still a subject of study for students of business. For example, instead of scaling manufacturing capacity by making more of an existing model, he would craft a new model suited to the manufacturing location at hand. (And so it was that NextHop next introduced the Toronymous X series; and, as if the pun needed explaining, this branding pattern was followed by the Toronymous Rex, and then simply the Toronymous Rx series.)

Mr. Lang was right to milk this brand as fast as he could, for he never even owned it. Tor, the open source project responsible for a key software component used in the device, had sent the company a cease-and-desist over its use of the name Toronymous. NextHop at first rebuffed the claim, but when Lang learned his trademark applications at the Patent and Trademark Office were going nowhere, he approached the group hoping to license the mark. It was not to be, but Lang somehow managed to keep the license negotiations going, all the while Toronymous sales continued. A Chinese manufacturer, meanwhile, having caught on to NextHop's branding game, introduced the T-Rex. More copycats followed with other variations on the name.

More interesting than its etymology, however, is the movement Toronymous later came to represent. The big, established home/business router manufacturers were the last to embrace the game-changing trend towards anonymous browsing. Much of the establishment in America thought anonymous browsing should be illegal anyway. The public, however, demanded it, and so great was the flood of email citizens sent their representatives that a grand coalition of liberals and conservatives of many stripes in Congress aligned against any legislative measure that would make Toronymous-like devices illegal. That left the fate of Toronymous in the safe hands of the glacial court system.

Toronymity was making the transition from grassroots to mainstream. Or rather, it was the other way around. The early devices had a button which, when pressed, lit up an orange icon depicting three overlapping stick figures: "community mode". In this mode, the device was also a Tor relay. Users were advised to run their devices with the icon glowing. The basis of anonymity, the online help page explained, was safety in numbers, and running the router this way helped increase the online privacy of both the owner and the community at large. For some, running in community mode was a way to thumb one's nose at power; for others it felt more like pledging money to public television--sharing communal burdens, only now a lot more cheaply. Either way, pressing that button had a feel-good effect for most anyone who had bought the device. It turned consumers into activists.

The world was changing. The router was getting fatter by the day, and more and more storage and computing power was drifting to the edges, to the end user. As Facebook had demonstrated before, people were spending ever more time on social networks than on the web at large. This dark web had emerged as a tier-accessed, individuated, grassroots social network. It lacked an all-seeing eye. In fact, no one could see more than a small part of it.

Now, to be sure, the web itself was not going dark. Far from it. The public web was still growing as if there had never been a dark web. But the dark web was expanding even faster, filled with photos and videos of family and friends, and other information shared discriminately across smaller circles. And as it grew, some private information, whether by intention or accident, whether leaked or released, would make the transition to the public realm. By the time the dark web was an order of magnitude larger than the public web, this constant unidirectional leakage had caused the growth rates of the two webs to converge to the same number. The dark web, in other words, was where the vast majority of web content originated.
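
(A toy model, for the quantitatively inclined, of why the two growth rates converge. Suppose, purely for illustration, that dark content D grows exponentially at rate g, and that a fixed fraction \lambda of it leaks to the public web P per unit time:

\[
\frac{dD}{dt} = g\,D \;\Rightarrow\; D(t) = D_0\,e^{gt},
\qquad
\frac{dP}{dt} = \lambda\,D \;\Rightarrow\; P(t) = \frac{\lambda}{g}\,D_0\,e^{gt} + C.
\]

For large t the constant C washes out, so P \approx (\lambda/g)\,D: the public web ends up growing at the dark web's own rate g, and stays an order of magnitude smaller whenever \lambda/g \approx 1/10.)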

Business wasn't quite sure what to make of this new medium: not even the porn industry had come up with a scalable business exploit for it. There were two seemingly insurmountable problems, from a business perspective, with this dark web. One, it was one-to-one: it was relationship-based, and relationships take much too long to develop. Two, it required an authentic human voice, which in turn made working these dark networks labor-intensive.

Meanwhile, a precipitous drop in the Nasdaq PUI Index signaled the coming collapse of a once legitimate industry based on trading, packaging, and selling dossiers of clandestinely gathered personal information. A commentator on a popular financial network lamented, "I don't imagine people quite realize how much this toronymity is costing them. The slide in the PUI [index] alone marks a half-trillion dollars of wealth destroyed."

Maybe. But in a sense, that informational wealth had been returned to its rightful owners. A new world order was in the making. Or rather, an old world was in the remaking. For the dark web heralded the return of the individual, the guild, and the community at the expense of that historically younger institution, the corporation.





Tuesday, January 5, 2010

Avatar: what to make of 3D projection?


My son and I went to see Avatar on the big screen. In 3D. I don't usually go to the theater (I'm rather attached to the "rewind" button on my remote), but this was an experience we couldn't replicate at home. I was very impressed with how far the technology had come. Now, looking back, I'm wondering how soon this technology will make it into every living room. Not very soon, I'd venture.

Update 1/11/10: Perhaps I was obtuse when I wrote this. I never considered how this technology could be used in gaming and VR programs in which the scene responds to user input. That could turn out to be very interesting, indeed. But regarding its use in more passive applications like movies and pre-recorded video, I still think this new medium has little to offer.

The main obstacle to fast adoption is that you need special glasses to view such a display; without the glasses, the viewing is horrible. That's one downside of this stereoscopic 3D display technology. So what's on the upside? What's the great value-add that would make putting up with the glasses worthwhile?

Immersion. The 3D experience feels more real than the 2D one. It takes the viewer a half-step further into the screen. A step closer to a virtual reality, a simulacrum. But is it [closer]?

While watching Avatar, I was surprised at how often I would mentally step back (unconsciously) from the screen and watch the walls of the theater instead. It was as if my mind preferred to frame the experience inside the cinema, instead of inside the movie itself. What was going on? I later mused.

To experience the 3D immersion, you must surrender your eyes to the movie. Surrender, in the sense that once you have mentally stepped into the screen, your eyes must follow the action on the screen; they cannot wander about in the simulacrum. You must place yourself and your eyes at the mercy of the camera. It is as if the camera were one of those birds in Avatar you're riding, with your head and eyes fixed in a brace: you have only a narrow field of view ahead. (And unlike the characters in the movie, you cannot control the bird.) You must resist the expectation of freedom the mind is so accustomed to in a three-dimensional world; for as soon as you try to exercise that freedom, you are awakened from the illusion, and perhaps find your eyes wandering off to the walls of the movie house.

Not only must you keep your eyes on the screen once immersed a half-step into it, you must also keep them from trying to focus on projected objects that are meant to be out of focus. For example, a petal descending inches before the "camera lens" may have been intentionally left out of focus so that it obstructs less of the background. But if curiosity gets the better of you and you try to focus on the petal, I speculate one of two things might happen: (a) your failure to focus breaks the illusion, the suspension of disbelief that maintains the psychological immersion, or (b) you maintain the immersion but blame your tired eyes for not being able to focus.

So it would appear current 3D projection technology requires of the viewer some of the same mental rigor that is necessary to ride a bird in Pandora.

And glasses aside, do we give up anything when we switch to this 3D medium? I wonder. Quite a lot, I imagine. For the traditional motion picture is less a technology than a language, an art form, cultivated over generations. Much of that language is a play on the medium's limitations. The composition of the picture (think of golden ratios, for example) is realized only against the bounds defined by the edges of the screen. Moreover, as our minds have become more introspective, more self-reflective, we have developed a more self-aware narrative: the camera behind the camera, the eye that sees the eye that's seeing. A meta-language that describes itself and sees its own reflection. A way of thought that cherishes its ability to step back and see itself--in a sense, an ability to step out of an immersing experience, the opposite of immersion. (It's this cultivated mental ability that makes the sports bar possible.) This new 3D medium, on the other hand, is like a mirror that breaks the moment it catches its own reflection.



In summary, I'm not particularly fond of the 3D technology on offer, for two reasons. One, there is little extra information to be gleaned from it that was not already present in the 2D version (I doubt any detail would have been lost on a viewer watching the flat version of Avatar). And if it is not about the information delivered, then it must be about how it's delivered. Which leads us to two: the experience itself, impressive as it is, adds little value once the novelty has worn off. That's because we already know how to immerse ourselves in so many mediums: the novel, the play, the radio, not to mention the 2D motion picture. The technology offers little that the viewer's mind cannot already synthesize from the "flatter" 2D version.

As for Avatar itself, aye.. the story line presents an artful play on the stereoscopic medium's own limitations. How fitting that the viewer is made to identify with a paraplegic protagonist! And even more fitting that the plot itself involves the very concept of immersion: the medium's inability to visually frame itself is compensated for by the eye-behind-the-eye theme of the narrative. A beautiful production. But to draw a line from here to the everyday use of this new medium, I think, is overreaching. It will be more like a difficult brush: few will master how and when to use it.