
Artificial General Intelligence: Destroying the Idol

Image source: William Dembski.

Artificial General Intelligence (AGI) has not been achieved. Moreover, if the arguments of this series, which concludes today, hold water, it will never be achieved. But that doesn’t mean that AGI isn’t an idol and that it doesn’t hold sway. The AGI idol is present already and deeply entrenched in our society. I want therefore in this concluding article to lay out what it’s going to take to destroy this idol. Unlike idols made of stone or wood, which can be destroyed with tools or explosives, the AGI idol is based in computer technology, which can be copied and recopied at will. Any manifestation or representation of the AGI idol therefore cannot simply be erased. To destroy the AGI idol requires something more radical. 

I remarked earlier that AGI idolatry results from a lack of humility, or alternatively, that it is a consequence of pride or hubris. That seems true enough, yet destroying the AGI idol merely by counseling humility won’t get us very far. Are there not specific things we can do to counteract the AGI idol? One irony worth mentioning here is that AGI supporters are apt to turn the tables and accuse those who reject AGI of themselves suffering from a lack of humility, in this case for being unwilling to admit that they might be surpassed by machines. But humility is not humiliation. We humiliate ourselves when we lower ourselves below machines. True humility is having a true perspective of our place in the order of being. Humility is a virtue because our natural tendency is to inflate ourselves in the order of being. This tendency is evident especially in the AGI advocates, who see themselves in grandiose terms for their efforts to bring about AGI. 

Seduced by Technology

The AGI idol is a seduction by technology, turning technology into an end rather than a means to an end. Technology is meant to improve our lives, not to take over our lives. Yet everywhere we look, we see technology, especially artificial intelligence, invading our lives, distracting our minds, and keeping us from peace and happiness. Social media companies write their AI algorithms so that we will spend as much time as possible on their platforms. They inundate us with upsetting news and titillating images because those tend to keep us glued to their technology (at what is now increasingly being recognized as a grave cost to our mental well-being). People are hunched over their screens, impoverishing their lives and ignoring the real people around them. 

Addictive and glitzy, technology beckons us and we let it have its way, hour after endless hour. My colleagues Marian Tupy and Gale Pooley have developed an economic theory of prices based on time spent doing productive work. The AGI idol siphons off time spent productively in meaningful pursuits and meaningful human connections, sacrificing that time at its altar. This is no different from altars of the past that required blood sacrifices. Our time is our blood, our very life. When we waste it on the AGI altar, we are doing nothing different from idolaters of days gone by.

If we’re going to be realistic, we need to admit that the AGI idol will not disappear any time soon. Recent progress in AI technologies has been impressive. And even though these technologies are nothing like full AGI, they dazzle and seduce, especially with the right PR from AGI high priests such as Ray Kurzweil and Sam Altman. Moreover, it is in the interest of the AGI high priests to keep promoting this idolatry because even though AGI shows no signs of ever being achieved, its mere promise puts these priests at the top of society’s intellectual and social order. If AGI could be achieved, it would be humankind’s greatest achievement. As counterfactual conditionals go, that’s true enough. But counterfactual conditionals with highly dubious antecedents need not be taken seriously. With the right PR, however, many now believe that AGI can be achieved.

What Is to Be Done?

Ultimately, the AGI idol resides in people’s hearts, and so its destruction will require a change in heart, person by person. To that end, I offer two principal guidelines: 

  1. Adopt an attitude that wherever possible fosters human connections above connections with machines; and
  2. Improve education so that machines stay at our service and not the other way around. 

These guidelines work together, with the attitude informing how we do education, and the education empowering our attitude. 

A good case study is chess. Computers now play much stronger chess than humans. Even the chess program on your iPhone can beat today’s strongest human grandmaster. And yet, chess has not suffered on account of this improvement in technology. In 1972, when Bobby Fischer won the chess world championship from Boris Spassky, there were around 80 grandmasters worldwide. Today there are close to 2,000. Chess players are also stronger than they ever were. By leveraging chess-playing technology, human players have improved their game, and chess is now more popular than ever. 

With the rise of powerful chess-playing programs, chess players might have said, “What’s the use in continuing to play the game? Let’s give it up and find something else to do.” But they loved the game. And even though humans playing against machines has now become a lopsided affair, humans playing fellow humans is as exciting as ever. These developments speak to our first guideline, namely, attitude. The chess world has given primacy to connecting with humans over machines. Yes, human players leveraged the machines to improve their game. But the joy of play was and remains confined to humans playing with fellow humans. 

The education guideline is also relevant here. The vast improvement in the play of computer chess has turned chess programs on personal computers into chess tutors. Sloppy play that might have been successful against fellow humans in the past is no longer rewarded by the machines. As a result, these chess programs raised the level of human play well beyond where it had been. All of this has happened in less than thirty years. I remember in the 1980s computers struggling to achieve master status and residing well below grandmaster status. But with Deep Blue defeating the world champion Garry Kasparov in 1997, computers became the best chess players in the world. And yet, these developments, made possible by AI and increased computing power, also made chess better. 

Unfortunately, the case of chess has yet to become typical in the relation between people and technology. Social media, for instance, tries to suck all our attention away from fellow human beings to itself. In the face of some technologies, it takes a deliberate decision to say no and to cultivate a circle of family, friends, or colleagues who can become the object of our attentions in place of technology. We are social animals, and we have a prime imperative to connect with other people. When we don’t, we suffer all sorts of psychopathologies.

A Social Animal

Aristotle put it this way in his Politics: “Man is by nature a social animal; an individual who is unsocial naturally and not accidentally is either beneath our notice or more than human. Society is something that precedes the individual. Anyone who either cannot lead the common life or is so self-sufficient as not to need to, and therefore does not partake of society, is either a beast or a god.” As it is, disconnecting from people does not make us a god, so that leaves the other alternative, namely, to become a beast. AGI idolatry, carried to its logical conclusion, turns us into beasts, or worse yet, into machines. 

An attitude that connects us with humans over machines is also an attitude that resists assaults on human autonomy in the name of automation. This is not to say that we don’t let machines take their rightful place where they truly outperform us. But it is also not to allow machines to usurp human authority where machines have done nothing to prove their merit. Part of what makes dystopian science fiction about an AGI takeover so unsettling is that the machines will not listen to us. And the reason they won’t listen to us is that they’ve been programmed not to listen to us because it is allegedly better for all concerned if human intuition and preference are ignored (e.g., HAL 9000). 

But we don’t need dystopian AGI to see the same dynamic of flouting real-time human interaction in the name of a higher principle. In the 1960s, Dr. Strangelove and Fail Safe were films about nuclear weapons bringing humanity to an end. What made these films terrifying is that nuclear weapons unleashed by the United States on the Soviet Union could not be recalled. With Dr. Strangelove, the radio on one of the nuclear bombers was damaged. With Fail Safe, the pilot on the bomber had strict orders not to let anything dissuade him from inflicting nuclear holocaust. Even with the pleas of his wife and the president, the pilot, acting like a machine, went ahead and dropped the bomb. 

“Machines Can Do Better”

We see this dynamic now increasingly with AI, where humans are encouraged to cede their autonomy because “machines can do better.” Take smart contracts in cryptocurrency. Smart contracts automatically execute certain cryptocurrency transactions if certain conditions are fulfilled. But what if we subsequently find that those conditions were ill-conceived? That’s what happened with Ethereum’s DAO (Decentralized Autonomous Organization). When first released, the DAO was glowingly referred to as an “employeeless company,” as though running financial transactions by computers without real-time human intervention were a virtue. As it is, the DAO, which was Ethereum’s first significant foray into smart contracts, crashed and burned, with a hacker siphoning off 3.6 million ether (worth $50 million at the time, and currently around $8 billion).

In the Old Testament book of Daniel, there’s a story about Daniel being thrown into a lions’ den. Darius, king of the Medes and Persians, had issued a decree that Daniel’s enemies used to have him cast into the lions’ den. Valuing Daniel and wanting to save him, the king tried to find some way around the decree. But once a law or decree was issued by the king, it could not be changed or annulled by anyone, including the king himself. This was a feature of the Medo-Persian legal system, emphasizing the absolute and unchangeable nature of royal decrees. It was a system unresponsive to reevaluation, revision, or regret. It was, in short, just as unresponsive as a mechanical system that won’t listen to real-time real-life humans. The lesson here? Our attitude of seeking connections with humans over machines needs also to be fiercely assertive of human autonomy. 

An attitude that looks for human connection over machine connection is, however, not enough. Machines are here to stay, and we need to know how to deal with them. That requires education. Unfortunately, much of education these days is substandard, inculcating neither literacy nor numeracy, to say nothing about staying ahead of technological advances. An induction from past experience indicates that advances in technology have never thrown people permanently out of productive work. The type of work may change, but people will always find something meaningful to do.

Consider Farming

In 1900, around 40 percent of the U.S. population lived on farms, whereas today only around 1 percent of the U.S. population lives on farms. This is a huge demographic shift, but clearly society didn’t collapse because of it. Farms became more productive on account of technology. Some mastered the technology. Others went on to find productive work elsewhere made possible by other technologies. As past experience suggests, new technologies can displace workers, but eventually workers find new things to do.

It is therefore disconcerting to see advances in AI greeted with discussions about universal basic income (UBI), which would pay people to subsist in the absence of any meaningful or profitable work. The reasoning behind UBI is that artificial intelligence will soon render human labor passé, eliminating meaningful work. UBI will therefore be required as a form of social control, paying people enough to live on once they’re out of work and without salary. And so, once machines have put enough people out of work, people will be left to consume their days in meaningless leisure pursuits, such as endless video games and binge-watching Netflix. 

This is a vision of hell worthy of Dante’s Inferno. It presupposes a very low view of humanity and its capabilities. But capabilities need to be educated. The problem with a population that is illiterate, innumerate, and incapable of adapting to new technologies is that it cannot stay ahead of technology. Universal basic income is tailor-made for a world in which machines have put us out of work. Yet Gallup polls have consistently shown that humans need meaningful work to thrive. There will always be meaningful work for us to do. But doing meaningful work requires adequate education. Right now, we have the unfortunate mismatch between inadequately educated humans and machines that outperform inadequately educated humans. 

A Population of Serfs

To the degree that AGI high priests want worshippers for their idol (and they do), it is in their interest to maintain a population of serfs whose poor education robs them of the knowledge and skills they need to succeed in an increasingly technological world. The challenge is not that machines will overmatch us. The challenge, rather, is that we will undermatch ourselves with machines. Instead of aspiring to make the most of our humanity, we degrade ourselves by becoming less than human. 

Hunched over our iPhones, mindlessly following the directions of our GPS, jumping when our Apple Watch tells us to jump, adapting our lives at every point to what machines tell us to do (machines programmed by overlords intent on surveilling us and controlling our every behavior), we lose our humanity, we forget who we really are, we become beasts. In fact, that’s being too hard on beasts. It’s not that machines become like us, but that we become like them—mechanical, apathetic, disconnected, out of touch. Beasts in fact do better than that. The AGI idol needs to be destroyed because it would destroy us, turning us into beasts or machines, take your pick. 

Meanwhile at the Waldorf School

It’s time to wrap up this series. I close with an example that speaks for itself. The school in Silicon Valley to which many big tech executives send their children, but which puts tight limits on the use of technology, is the Waldorf School of the Peninsula. Despite its location in the heart of Silicon Valley, this school does not use computers or screens in the classroom, and it emphasizes a hands-on, experiential approach to learning that sidelines the use of technology. The school’s pedagogy focuses on the role of imagination in learning and takes a holistic approach to the practical and creative development of its students. 

The school’s guiding philosophy is that children’s engagement with one another, their teachers, and real materials is far more important than their interaction with electronic media or technology. Waldorf educators emphasize the development of robust bodies, balanced minds, and strong executive function through participation in arts, music, movement, and practical activities. Media exposure is thought to negatively impact development, especially in younger children. The introduction of computer technology is delayed until around 7th grade or later, when children are considered developmentally ready. At this stage, technology is seen as a tool to enhance learning rather than a replacement for teachers and fellow students. 

The lesson is clear: Even those doing the most to build and publicize the AGI idol do not wish it on their children. Their alternative is an education that gives primacy to human connection. They thus exemplify the key to destroying the AGI idol. 

Acknowledgment: I’m grateful to my fellow panelists at the 2023 COSM conference in the session titled “The Quintessential Limits and Possibilities of AI,” moderated by Walter Myers III, and with Robert J. Marks, George Montañez, and myself as speakers. I’m also grateful for a particularly helpful conversation about artificial intelligence with my wife Jana.

Editor’s note: This article appeared originally at