
Artificial General Intelligence: AI’s Temptation to Theft Over Honest Toil

Photo credit: Vlad Tchompalov via Unsplash.

Artificial intelligence, which I am considering in a series here at Evolution News, poses a challenge to human work, promising to overtake many human jobs in coming years. Yet a related concern, often ignored but needing to be addressed, is whether this challenge will come from AI in fact being able to match and exceed human capabilities in environments where humans currently exercise those capabilities, or whether it will come from AI engineers manipulating our environments so that machines thrive where otherwise they could not. Such a sanitization of environments to ease AI along is a temptation for AI's software engineers. It in effect substitutes theft for honest toil. 

In Walter Isaacson’s 2023 biography of Elon Musk, the theme of self-driving cars comes up frequently as one of the main challenges facing Musk’s Tesla engineers. At one point in Isaacson’s biography, a frustrated Musk is trying to understand what it will take to get a Tesla automobile to drive itself successfully through a difficult roadway in the Los Angeles area. Tesla engineers had repeatedly tried, without success, to improve the car’s software so that it could navigate that problem roadway. But in the end, they took a different tack: Tesla engineers arranged to have lane markers painted on the problem roadway. Those markers, when absent, confused the self-driving software but, when present, allowed it to succeed. The self-driving success here, however, was not to AI’s credit. It was due, rather, to manipulating the environment as a workaround to AI’s failure.

AI Never Operates in a Vacuum

Rather, it operates in an environment in which humans are already successfully operating. We often think that AI will leave an environment untouched and simply supersede human capability as it enters and engages that environment. But what if the success of AI in given circumstances depends not so much on being able to rival human capabilities but rather on “changing the game” so that AI has an easier job of it? Rather than raise the bar so that machines do better than humans at given tasks, this approach lowers the bar for machines, helping them to succeed by giving them preferential treatment at the expense of humans. 

The mathematician George Polya used to quip that if you can’t solve a problem, find an easier problem and solve it. Might AI in the end not so much supersede humans as instead impoverish the environments in which humans find themselves so that machines can thrive at their expense? Consider again self-driving vehicles. What if, guided by Polya’s dictum about transforming hard problems into easier problems, we follow Musk’s example of simply changing the driving environment if it gets too dicey for his self-driving software? 

AI engineers tasked with developing automated driving but finding it intractable on the roads currently driven by humans might then resolve their dilemma as follows: just reconfigure the driving environment so that dicey situations in which human drivers are needed never arise! Indeed, just set up roads with uniformly spaced lanes, perfectly positioned lane markers, utterly predictable access, completely up-to-date GPS, and densely distributed electronic roadway sensors that give real-time vehicular feedback and monitor for mishaps.

My friend and colleague Robert Marks refers to such a reconfiguration of the environment as a “virtual railroad.” His metaphor fits. Without such a virtual railroad, fully automated vehicles to date face too many unpredictable dangers and are apt to “go off the rails.” Marks, who hails from West Virginia, especially appreciates the dangers. Indeed, the West Virginia back roads are particularly treacherous and give no indication of ever submitting to automated driving.

Or consider what fully automated driving would look like in Moldova. A U.S. acquaintance who visited that country was surprised at how Moldovan drivers avoid mishaps on the road despite a lack of clear signals and rules about right of way. When he asked his Moldovan guide how the drivers managed to avoid accidents in such right-of-way situations, the guide answered with two words: “eye contact.” Apparently, the drivers could see in each other’s eyes who was willing to hold back and who was ready to move forward. This example presents an interesting prospect for fully automated driving. Perhaps we need “level 6” automation (level 5 is currently the highest), in which AI systems have learned to read the eyes of drivers to determine whether they are going to restrain themselves or make that left turn into oncoming traffic. 

Just to be clear: I’m not wishing for fully automated self-driving to fail. As with all automation in the past, fully automated self-driving would entail the disruption of some jobs and the emergence of others. It would be very interesting, as an advance of AI, if driving — in fully human environments — could be fully automated. My worry, however, is that what will happen instead is that AI engineers will, with political approval, reconfigure our driving environments, making them so much simpler and more machine-friendly that full automation of driving happens, but with little resemblance to human driving capability. Just as a train on a rail requires minimal, or indeed no, human intervention, so cars driving on virtual railroads might readily dispense with the human element.

But at What Cost? 

Certainly, virtual railroads would require considerable expenditures in modifying the environments where AI operates — in the present example, the roads where fully automated driving takes place. But would it not also come at the cost of impoverishing our driving environment, especially if human drivers are prohibited from roads that have been reconfigured as virtual railroads to accommodate fully automated vehicles? And what about those West Virginia back roads? Would they be off limits to driving, period, because we no longer trust human drivers, but fully automated drivers are unable to handle them?

In his Introduction to Mathematical Philosophy, Bertrand Russell described how in mathematics one can introduce axioms that shift the burden of what needs to be proven, thereby garnering “the advantages of theft over honest toil.” In response, he rightly exhorted, “Let us leave them [i.e., the advantages] to others and proceed with our honest toil.” The AI community faces a similar exhortation: If you are intent on inventing a technology that promises to match or exceed human capability, then do it in a way that doesn’t at the same time impoverish the environment in which that capability is currently exercised by humans.

It is a success for AI when machines are placed into existing human environments and perform better than humans. Chess playing programs are a case in point. However, the worry is — and it’s a legitimate worry — that our environments will increasingly be altered to accommodate AI. The machines, in consequence, do better than us, but they are no longer on our playing field playing our game. It goes without saying who here is going to get the short end of the stick, and it won’t be the machines.

Next, “Artificial General Intelligence: Digital vs. Traditional Immortality.”

Editor’s note: This article appeared originally at