There are two articles in major US magazines currently invoking the image of HAL, the computer in 2001: A Space Odyssey, one of my favorite novels/films. It ain't letting the interpretive cat out of the computer core to say that, Dave's final evolution aside, the most human character in 2001 is HAL: he's the one with passions and paranoia and pathos. The humans, meanwhile, are the cogs in the machines: they barely seem conscious as they speak to their children, make mindless chit-chat, and communicate instructions to one another using shallow humor and toothless threats. It's a vision of a future in which mankind is drained of blood and computerkind begins to thrive. At the start of the story, chimps see the value of being something more than themselves (the comforts of civilized life, living without fear of being eaten by big cats, etc.), and they want it. At the end, humans see the value of being something more than themselves (of exploring the stars as spaceships, instead of in spaceships, etc.) -- of becoming, in effect, more like HAL. So the final kitty to crawl out is this: the AI becomes you, my dear.

The New Yorker has an article this week by John Seabrook titled "Hello, HAL," which is about the struggle to create computers that can use language. It's a project that has lasted as long as computers have been with us, the domain of engineers and linguists both, one that has led to the realization that language, something most humans are pretty decent at and don't think about too much, is so difficult that supercomputers capable of computing paths to Saturn or generating the imagery in Finding Nemo (clearly the two most impressive things a computer can do) stumble over sentences that toddlers can master. There has been a lot of progress, but people and computers still don't get along: when we call a company and speak to a machine, it angers us (we'd rather push buttons than be forced to say "yes" and "no" and "operator"). Researchers are working on computers that can understand and return emotional cues within generated speech, and while we're not there yet, the article describes some vocal on-the-spot translation devices that are truly impressive.

Seabrook tells us about:
IBM's Multilingual Automatic Speech-to-Speech Translator, or MASTOR: an English speaker made a comment ("We are here to provide humanitarian assistance for your town") to an Iraqi. The machine repeated his sentence in English to make sure it was understood. The MASTOR then translated the sentence into Arabic and said it out loud. The Iraqi answered in Arabic; the machine repeated the sentence in Arabic and then delivered it in English. [Anyone else reminded of the aliens in Mars Attacks with their translators shouting "We come in peace!" as they blow everyone away -- or is that just me? -A] The entire exchange took about five seconds, and combined state-of-the-art speech recognition, voice synthesis, and machine translation. Granted, the conversation was limited to what you might discuss at a checkpoint in Iraq. Still, for what they are, these translators are triumphs of the statistics-based approach.
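The exchange Seabrook describes is a three-stage pipeline: recognize the speech, echo it back for confirmation, translate it, then speak the result. Here's a toy sketch of that loop in Python -- the function names, the phrase table, and the "audio" strings are all hypothetical stand-ins I've invented for illustration; the real MASTOR uses statistical models trained on large corpora, not a lookup table.

```python
# Illustrative sketch only; not IBM's implementation. recognize(),
# translate(), and synthesize() are hypothetical stand-ins backed by a
# toy phrase table rather than real speech or statistical models.

PHRASE_TABLE = {
    ("en", "ar"): {
        "we are here to provide humanitarian assistance for your town":
            "نحن هنا لتقديم المساعدة الإنسانية لبلدتكم",
    },
}

def recognize(audio: str, lang: str) -> str:
    """Stand-in for speech recognition: treat the 'audio' as text."""
    return audio.lower().strip()

def translate(text: str, src: str, dst: str) -> str:
    """Stand-in for statistical machine translation: a phrase-table lookup."""
    return PHRASE_TABLE[(src, dst)][text]

def synthesize(text: str, lang: str) -> str:
    """Stand-in for speech synthesis: report what would be spoken aloud."""
    return f"[{lang} audio] {text}"

def speech_to_speech(audio: str, src: str, dst: str) -> tuple[str, str]:
    """MASTOR-style loop: recognize, echo back to confirm, translate, speak."""
    heard = recognize(audio, src)
    confirmation = synthesize(heard, src)   # repeated in the source language
    translated = translate(heard, src, dst)
    spoken = synthesize(translated, dst)    # delivered in the target language
    return confirmation, spoken

confirmation, spoken = speech_to_speech(
    "We are here to provide humanitarian assistance for your town", "en", "ar")
print(confirmation)
print(spoken)
```

The confirmation step is the interesting design choice: by echoing the recognized sentence back in the speaker's own language before translating, the system lets the speaker catch recognition errors before they get shipped across the language barrier.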
The Atlantic Monthly article isn't quite about translators. Nicholas Carr's piece is titled "Is Google Making Us Stupid?" and, all that fuss about talking computers aside, gets to the real deal when it comes to evolving intelligence and machine-human pairings. As the title indicates, it is not an optimistic view. The author gives anecdotal evidence of formerly attentive readers reduced to an inability to focus after exposure to the internet, then tells us...
...we still await the long-term neurological and psychological experiments that will provide a definitive picture of how Internet use affects cognition. But a recently published study of online research habits, conducted by scholars from University College London, suggests that we may well be in the midst of a sea change in the way we read and think. As part of the five-year research program, the scholars examined computer logs documenting the behavior of visitors to two popular research sites, one operated by the British Library and one by a U.K. educational consortium, that provide access to journal articles, e-books, and other sources of written information. They found that people using the sites exhibited “a form of skimming activity,” hopping from one source to another and rarely returning to any source they’d already visited. They typically read no more than one or two pages of an article or book before they would “bounce” out to another site. Sometimes they’d save a long article, but there’s no evidence that they ever went back and actually read it. The authors of the study report: "It is clear that users are not reading online in the traditional sense; indeed there are signs that new forms of “reading” are emerging as users “power browse” horizontally through titles, contents pages and abstracts going for quick wins. It almost seems that they go online to avoid reading in the traditional sense."
Carr's is sort of a sprawling essay, with a story about Nietzsche taking up the typewriter (and how it changed his thinking and his writing) as a highlight I would recommend seeking out. But I want to draw your attention to this, a passage in which his paranoia seems to manifest, as does the promise of internet intelligence:
More than a hundred years after the invention of the steam engine, the Industrial Revolution had at last found its philosophy and its philosopher. Taylor’s tight industrial choreography—his “system,” as he liked to call it—was embraced by manufacturers throughout the country and, in time, around the world. Seeking maximum speed, maximum efficiency, and maximum output, factory owners used time-and-motion studies to organize their work and configure the jobs of their workers. The goal, as Taylor defined it in his celebrated 1911 treatise, The Principles of Scientific Management, was to identify and adopt, for every job, the “one best method” of work and thereby to effect “the gradual substitution of science for rule of thumb throughout the mechanic arts.” Once his system was applied to all acts of manual labor, Taylor assured his followers, it would bring about a restructuring not only of industry but of society, creating a utopia of perfect efficiency. “In the past the man has been first,” he declared; “in the future the system must be first.”
This is the future of Stanley Kubrick and Arthur C. Clarke in 2001. The system. Humans reduced at the individual level, but expanded by connection itself into a group intelligence, a giant global (and eventually supra-global) brain. As Howard Bloom has pointed out, an individual chimp is smarter than an individual baboon. But chimps are endangered and baboons are ubiquitous, considered "pests" in much of Africa -- that's how differently "successful" they've been (at surviving). And why does the advantage go to the stupid monkey, instead of the supposedly-great ape? Because baboons make a better group brain. Humans are smart individually, sure, but more importantly, we've also got the major, serious, stupendous social skills. We not only serve "the system," but doing so becomes an essential part of our identity, one we're willing to die for (e.g., soldiers are willing to die for their group). This is a kind of AI that's not A. It's better to call it "man-made intelligence": it's hooking us all up to a hive mind of our own making.

Of course, this is unsettling:
The idea that our minds should operate as high-speed data-processing machines is not only built into the workings of the Internet, it is the network’s reigning business model as well. The faster we surf across the Web—the more links we click and pages we view—the more opportunities Google and other companies gain to collect information about us and to feed us advertisements. Most of the proprietors of the commercial Internet have a financial stake in collecting the crumbs of data we leave behind as we flit from link to link—the more crumbs, the better. The last thing these companies want is to encourage leisurely reading or slow, concentrated thought. It’s in their economic interest to drive us to distraction.
But Carr, against his own misgivings, tells us this:
Socrates bemoaned the development of writing. He feared that, as people came to rely on the written word as a substitute for the knowledge they used to carry inside their heads, they would, in the words of one of the dialogue’s characters, “cease to exercise their memory and become forgetful.” And because they would be able to “receive a quantity of information without proper instruction,” they would “be thought very knowledgeable when they are for the most part quite ignorant.” They would be “filled with the conceit of wisdom instead of real wisdom.” Socrates wasn’t wrong—the new technology did often have the effects he feared—but he was shortsighted. He couldn’t foresee the many ways that writing and reading would serve to spread information, spur fresh ideas, and expand human knowledge (if not wisdom).

The arrival of Gutenberg’s printing press, in the 15th century, set off another round of teeth gnashing. The Italian humanist Hieronimo Squarciafico worried that the easy availability of books would lead to intellectual laziness, making men “less studious” and weakening their minds. Others argued that cheaply printed books and broadsheets would undermine religious authority, demean the work of scholars and scribes, and spread sedition and debauchery. As New York University professor Clay Shirky notes, “Most of the arguments made against the printing press were correct, even prescient.” But, again, the doomsayers were unable to imagine the myriad blessings that the printed word would deliver.
Are we about to become the Dave-like cogs in a machine run by a HAL? Maybe. But what's on the other end of that transformation? What world-changing advantages might it deliver? What step might this step lead to? If the people who were suspicious of the printing press were still alive, we wouldn't get very far, which is why, I suppose, it's good that we die after our time. Future people will snicker at Carr's hesitation -- and at all of us. Assuming they think about us at all, for more than a split second.
