Neuroscience & Mind

Egnor: Democracy, Tyranny, and Technology

David Klinghoffer


Writing for The Atlantic, Yuval Noah Harari warns of the totalitarian tendencies inherent in artificial intelligence and related technological “advances.” Michael Egnor goes him one (or two) better, though. In a brilliant post over at our sister site Mind Matters, Egnor identifies the relationship between democracy, tyranny, and technology.

AI poses its threat through the obscurity of its processes and through the contagion it helps to spread. Excerpt:

It is the obscurity of AI that most impairs liberty. We do not know what is being done to us or even what is being done by us. What algorithms does Google use when we search on political topics? We don’t know. It is inevitable that such searches are biased, perhaps deliberately, perhaps not. But the bias is unknown to us, and perhaps unknown even to Google, and the obscurity grows by the year. Google searches may (and likely do) tend to favor certain political views.

It is not far-fetched to imagine self-driving cars “choosing” routes that go past merchants who “advertise” surreptitiously, using the autonomous vehicles. How much would McDonald’s pay to route the cars and slow them down when they pass the Golden Arches? How much would a political party pay to skew a Google search on their candidates? It is likely that searches are skewed in ways that are not deliberate as well. The unfathomable layers of complexity in contemporary AI make objectivity and balance nearly impossible to ensure and enforce.

The most dangerous aspect of AI to our liberty is the obscurity inherent to it. AI blinds us to motives and processes.

The second danger of AI, which follows on the first and enormously magnifies it, is contagion. AI provides boundless instantaneity and dissemination of ideas. I can type a sentence and (in principle) have it read on every computer on earth in less than a second. I can praise my heroes and denounce my enemies instantly and without geographical bounds. Flash mobs are inherent to AI, and there is no practical limit to their immediacy, size, or fervor. Social media provides us with virtual mobs, and not infrequently physical ones, within seconds or minutes.

The economic and social impacts of technology, as Harari points out, are profound and of great interest. But the primal danger that AI poses to humanity is deeper and more insidious than economics. AI changes our psyche, individually and collectively.

In other words, it is both the power AI wields and the passivity it encourages that should be freaking us out. Mind Matters is published by the newly launched Walter Bradley Center for Natural and Artificial Intelligence, a program of Discovery Institute. Read the rest over there.

Photo credit: Robert Scoble, via Flickr (cropped).