Letter from Palo Alto: Overselling Our Techno-Futures

[Image: Google Glass]

Spend time in a place like Palo Alto and you’ll have a front-row seat watching young people brimming with technical talent as they write tomorrow’s killer apps, talk about the latest tech news (everyone is in the know), and proclaim their visions for the technology of the future. Walk down University Ave. and take it all in; it doesn’t matter much which bistro or restaurant you wander into: you’ll hear the same excited future talk about the next "New New Thing," as Michael Lewis put it in his book about Silicon Valley life.

The ethos of Palo Alto is understandable, as hundreds of millions in venture capital flow into start-ups each year, making millionaires of kids barely out of school, and changing the nature of business and everyday life for the rest of us. It’s an exciting place. Yet for all the benefits and sheer exhilaration of innovation, if you stick around long enough, you’ll catch some oddly serious discussions about seemingly silly topics. While skeptics and agnostics remain, the Technorati in the Valley seem obsessed with a science-fiction version of our future. And some of them, for whatever reason, seem to think they can predict it.

What’s in our future "big picture"? Ask Google’s founders, to take a notable example. In a 2004 Newsweek interview, Sergey Brin ruminated:

"I think we’re pretty far along compared to 10 years ago," he says. "At the same time, where can you go? Certainly if you had all the world’s information directly attached to your brain, or an artificial brain that was smarter than your brain, you’d be better off. Between that and today, there’s plenty of space to cover."

And it’s not just Brin. Google technology director Craig Silverstein chimed in (in the same article): "The ultimate goal is to have a computer that has the kind of semantic knowledge that a reference librarian has."

Google’s vision is mainstream in Silicon Valley. Indeed, all over the world, it’s the engineers, computer scientists, and entrepreneurs who seem obsessed with the idea of reverse engineering our brains to create artificial versions. Big companies like Google are selling "progress," of course.

The explosion of Web technology probably deserves the most credit (or blame) for the latest over-enthusiasm about a sci-fi picture of what the future holds for humanity. Here are some notes on that future.

The Isms

There are three main flavors of the expectation of superintelligent, artificial beings. First, we have Singularitarianism (no, this isn’t misspelled). Entrepreneurs like Ray Kurzweil have popularized the neologism in books like The Age of Spiritual Machines (1999), The Singularity Is Near (2005), and most recently How to Create a Mind: The Secrets of Human Thought Revealed (2012). The "singularity," as the name suggests, is the future point at which human (biological) and machine (non-biological) intelligence merge, creating a superintelligence that is no longer constrained by the limits of our physical bodies.

At "the singularity," we can download our brains onto better hardware and create a future world where we never have to grow old and die, or get injured (we can have titanium bodies). Plus, we’ll be super smart, just like Brin suggests. When we need information about something, we’ll just, well, "think," and the information will come to our computer-enhanced brains.

If this sounds incredible to you, you’re not alone. But Singularitarians insist that the intelligence of computers is increasing exponentially, and that the laws of exponential growth make the singularity not only plausible but imminent. In his earlier works, Kurzweil famously predicted that the "s-spot" — the singularity, where machines outstrip the intelligence of humans — would occur by 2029; by 2005 he had revised this to 2045. Right up ahead. (His predictions are predictably precise; understandably, they also tend to get revised to more distant futures as reality marches on.)

Carnegie Mellon robotics expert Hans Moravec agrees. Citing Moore’s Law — the generally accepted observation that computing capacity on integrated circuits doubles roughly every 18 months — he predicts that a coming "mind fire" will replace human intelligence with a "superintelligence" vastly outstripping mere mortals. Moravec’s timeline? Eerily on par with Kurzweil’s. In his 1998 Robot: Mere Machine to Transcendent Mind, Moravec sees machines achieving human levels of intelligence by 2040, and surpassing our biologically flawed hardware and software by 2050.
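
It’s worth pausing on the arithmetic that powers these forecasts. Below is a minimal back-of-the-envelope sketch in Python, assuming the 18-month doubling figure cited above holds indefinitely (the futurists’ key premise); the 2005 baseline and the milestone years are illustrative choices keyed to the predictions discussed here, not anything Kurzweil or Moravec published as code.

    # Back-of-the-envelope Moore's Law extrapolation: a sketch, not a model.
    # Assumption: the "roughly every 18 months" doubling cited above holds
    # indefinitely, which is precisely the premise the futurists lean on.
    DOUBLING_PERIOD_YEARS = 1.5
    BASELINE_YEAR = 2005  # arbitrary reference point (illustrative)

    def capacity_multiplier(years_elapsed: float) -> float:
        """Growth factor over the baseline after years_elapsed years."""
        return 2 ** (years_elapsed / DOUBLING_PERIOD_YEARS)

    # Milestone dates taken from the predictions quoted in this article.
    for year in (2029, 2040, 2045, 2050):
        factor = capacity_multiplier(year - BASELINE_YEAR)
        print(f"{year}: ~{factor:,.0f}x {BASELINE_YEAR} computing capacity")

Run the sketch and the multipliers are staggering: roughly a thousandfold every 15 years, and about a billionfold by 2050. The arithmetic itself is unimpeachable; whether raw capacity multipliers have anything to do with intelligence is exactly the question the neuroscientists below press.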

Are you creeped out yet? Not to worry: there are tamer visions of the future, relatively speaking, on offer from the geek squad. For example, transhumanism. Transhumanists (many of whom share the millennial raptures of Singularitarians) seek an extension of our current cognitive powers through the fusion of machine and human intelligence. Smarter human brains, enhanced by smart drugs, artificial brain implants for memory or other cognitive functions, and even "nanobots" — microscopic robots let loose in our brains to map out and enhance our neural activities — promise to evolve our species from the boring, latte-drinking Humans 1.0 to the machine-fused Humans 2.0, who, as Brin suggests, can have "all the world’s information directly attached" to their brains. (Sweet!)

Enter True AI

Singularitarians. Transhumanists. They’re all bearish on mere humanity, it seems. But there’s another common thread apart from the disdain for flesh and blood, which makes the distinction among futurist "isms" one without a substantive difference. That’s because whether your transhuman future includes a singularity or a mere perpetual, incremental enhancement (which, arguably, we’ve been pursuing with our technology since prehistory), you’re into Artificial Intelligence, AI, smart robots.

After all, who would want to fuse with a shovel, or a toaster? It’s the promise of AI that infuses techno-futurists’ prognostications with hope for tomorrow. And while the history of AI suggests deeper and thornier issues with the engineering of truly intelligent machines, the exponential explosion of computing power and speed, along with the miniaturization of nearly everything, makes the world of smart robots seem plausible, at least to the "isms" crowd. As Wired magazine co-founder and techno-futurist Kevin Kelly remarks in his 2010 What Technology Wants, we are witnessing the "intelligenization" of nearly everything. Everywhere we look, "smart" technologies enhance our lives: we navigate with GPS, find what we want, shop, bank, socialize, you name it. Computers are embedded in our clothing now, and in our eyewear: you can wear a prototype of the computer-embedded Google Glass (pictured above), if you’re one of the select few chosen. Intelligenization, everywhere.

Or, not. Computers are getting faster and more useful, no doubt, but are they really getting smarter, like humans? That’s a question for neuroscience.

The Verdict from Neuroscience? Don’t Ask

One peculiarity of the current theorizing among the Technorati, focused as they are on the possibilities of unlocking the neural "software" in our brains to use as blueprints for machine smarts, is the rather lackluster, even hostile, reception their ideas receive from the people ostensibly most in the know about "intelligence" and its prospects or challenges: the brain scientists. Take Gerald Edelman, Nobel laureate and director of the Neurosciences Institute in San Diego. The late Dr. Edelman (who passed away just this past May) was notably skeptical, almost sarcastic, when asked about the prospects of reverse engineering the brain in software systems. "This is a wonderful project — that we’re going to have a spiritual bar mitzvah in some galaxy," Edelman remarked of the singularity. "But it’s a very unlikely idea." Bummer.

Nor was Edelman alone in voicing skepticism about what sci-fi writer Ken MacLeod calls the "rapture for nerds." The "brain types" pour cold water on the "machine types" by the gallon.

Wolf Singer of the Max Planck Institute for Brain Research in Frankfurt, Germany, is best known for his "oscillations" proposal: the theory that synchronized patterns in the firing of neurons are linked, perhaps, to cognition. Singer’s research inspired no less than Francis Crick, co-discoverer of the structure of DNA, and Caltech neuroscience star Christof Koch to propose that "40 Hz oscillations" play a central role in forming our conscious experiences. Yet Singer is notably unimpressed by the futurists’ prognostications about artificial minds. As former Scientific American writer John Horgan notes in an IEEE Spectrum article, "The Consciousness Conundrum": "Given our ignorance about the brain, Singer calls the idea of an imminent singularity [achieving true AI] ‘science fiction.’" Koch agrees. Comparing Crick’s achievement — decoding DNA — to the project of understanding the "neural code" for purposes of engineering a mind, he muses: "It is very unlikely that the neural code will be anything as simple and as universal as the genetic code."

As always, the business of predicting the future is uncertain. One thing seems probable, however. The core mysteries of life, like conscious experience and intelligence with all their complexity and beauty, will continue to beguile and humble us. Undoubtedly, what have been called "Level 1" or "shop floor" technologies, the kind we employ to achieve specific goals — like quickly traveling from point A to point B (an airplane), or searching millions of electronic web pages (a search engine) — will continue to grow in power. Much less predictable is whether all these enhancements to our own power will really unlock anything special, beyond the digitization of our everyday experiences by means of countless gadgets and tools. Whether gadgets really are getting "smarter" or just faster, smaller, and more ubiquitous is an open question. As Braden Allenby and Daniel Sarewitz note in their 2011 critique of transhumanism, The Techno-Human Condition, as the real world gets more and more complicated, it gets proportionately more difficult to predict where the future will take us. In this respect, technology makes things murkier and harder, not clearer or easier.

Back in Silicon Valley, the future, as always, is almost certain to bring more and better, with superior outcomes for all. Well, perhaps the founders of Google and their legions of programmers have earned the right to prognosticate. The rest of us humans can do what humans have always done, smile and shrug, and wait and see.

Founder and CEO of a software company in Austin, Texas, Erik Larson has been a Research Scientist Associate at the IC2 Institute, University of Texas at Austin, where in 2009 he also received his PhD focusing on computational linguistics, computer science, and analytic philosophy. He now resides in Seattle.

Photo source: Wikipedia.

