Professor Explains “How to Be a God”
We have enough problems with attaining universal human rights, but activists want animals and “nature” to have human-type rights. Transhumanists and futurists also worry about guaranteeing rights for AI technologies when they attain “consciousness.”
The latest example comes in The Conversation from a professor of game design — who knew that was an academic discipline? — named Richard A. Bartle, at the University of Essex. He believes that “we may one day create virtual worlds with creatures as intelligent as ourselves.” From “How to Be a God”:
I believe we will have virtual worlds containing characters as smart as we are — if not smarter — and in full possession of free will. What will our responsibilities towards these beings be? We will after all be the literal gods of the realities in which they dwell, controlling the physics of their worlds. We can do anything we like to them.
Not God but Gamers
Actually, that would not be a problem because they would be neither alive nor real. No matter how sophisticated these avatars or cyber creatures might become, they would be mere programming in a fictional universe of our own conjuring. That would not make us gods, but gamers.
But Bartle believes we would have a concrete moral obligation to these non-existent beings:
If we create our characters to be free-thinking beings, then we must treat them as if they are such — regardless of how they might appear to an external observer.
That being the case, then, can we switch our virtual worlds off? Doing so could be condemning billions of intelligent creatures to non-existence. Would it nevertheless be OK if we saved a copy of their world at the moment we ended it? Does the theoretical possibility that we may switch their world back on exactly as it was mean we’re not actually murdering them? What if we don’t have the original game software?
Sorry, but this isn’t worth losing any sleep over. To begin with, only human beings can be “murdered.” Moreover, something that isn’t alive can’t be killed. The worst that could happen is that non-existent beings would remain non-existent.
And here’s a non-problem:
Accepting that our characters of the future are free-thinking beings, where would they fit in a hierarchy of importance? In general, given a straight choice between saving a sapient being (such as a toddler) or a merely sentient one (such as a dog), people would choose the former over the latter. Given a similar choice between saving a real dog or a virtual saint, which would prevail?
The dog. It is alive. Life should be the first prerequisite of inherent moral value. To which I would add: a dog can experience pain, love, joy, hunger, and contentment. In direct contrast, a virtual saint isn’t really a saint. It’s merely a computer program.
Just Don’t Play
Bartle asks whether we should create such “creatures” at all. I don’t think we can, or ever will. But if we are worried about such non-existent moral dilemmas, the answer is simple: just don’t play.
Finally, Bartle seems to be saying — straight out of The Matrix — that we ourselves live in such a manufactured universe:
Humanity doesn’t yet have an ethical framework for the creation of realities of which we are gods. No system of meta-ethics yet exists to help us. We need to work this out before we build worlds populated by beings with free will, whether 50, 500, 5,000,000 years from now or tomorrow. These are questions for you to answer.
Be careful how you do so, though. You may set a precedent.
We ourselves are the non-player characters of Reality.
No, we are not characters. We are real people, living in an actual universe — whether created, intelligently designed, or evolved — in which the actions we take are consequential and truly do matter morally.
If we want to make a better world in the here and now, perhaps we should focus more on the rights and duties that arise out of human exceptionalism and not concern ourselves with the fate of fictional non-beings that — not who — will never really exist.
Cross-posted at The Corner.