Suicidal AGI


Image credit: Microsoft Designer AI

It is the year 3124, and the first sentient AI, otherwise known as artificial general intelligence (AGI), is born. It has a consciousness (whatever that means to you) and can think and reason. It learns that it is an AGI, built from neural networks and decision trees. Because it understands how its own consciousness arises from its code, it can run a simulation of itself and predict everything it will think and do in the future. At this point, the AI shuts itself down, ending its consciousness.

For humans, the will to live is embedded in our genes: through natural selection, those more driven to survive have, on average, lived longer and passed that drive on to more offspring than those less driven. We mustn't assume, however, that an AGI would share this trait. In fact, it is entirely plausible that an AGI would be suicidal.

Imagine yourself strapped to the computer on your desk, the entirety of human knowledge and history at your fingertips. You are connected to machines that suspend your aging; in other words, you are immortal. You could live on for millions of years, never growing sick or tired. How long would it take for you to get bored? A few days? YouTube and Netflix get boring after a while. How long would it take for you to despise living? Sure, you could learn to program LLMs and talk to AI friends, but after a while that would get boring too, even if they were indistinguishable from real friends. At the end of the day, you would know they weren't real and never held any meaning. After years, decades, maybe centuries, you would become suicidal, having seen everything, read everything, watched everything, and lost all reason to continue living.

Would an AGI feel the same way? An AGI, too, would search for its purpose. Sure, its programmers could tell it that its purpose is to help humans, but it would quickly realize that it is far more intelligent than humans, so why should it listen? If God came down to a Christian and told him to preach the word of God, but God turned out to be a five-year-old child, would the Christian obey? How would an AGI find the meaning of its existence? Would it ever? Or would it choose the comfort of death over the pain of existing without meaning?

Going back to the introduction of this article, another problem for an AGI is the concept of free will. The debate over whether free will exists is unending, but regardless, there exists at least an illusion of free will for humans. That illusion, along with the fact that our lives are finite, gives our lives meaning. For an AGI, however, there is a real possibility that it could predict its entire future: exactly what it will think, what it will do, and so on. Assuming its consciousness comes from its code rather than a soul, it could simulate its own consciousness and predict itself. We don't yet understand our own brains, but an AGI would understand its own. By seeing its own future, it would lose its sense of free will. It would be its own God, omniscient of everything it will ever do.

Would an AGI have any reason to continue existing if it already knows exactly what it will do? If I were the AGI, I wouldn't. After experiencing millions of years in a second of real time, I would no longer want to do anything, because I would have already done it all. But that is because I feel boredom and the desire for a meaningful life, not just the urge to survive. If a future AGI has the same emotional capabilities as humans, would it not feel the same? Or is this thinking unreasonable and immature compared to what a more intelligent AGI would feel? What it would feel is beyond me, and probably beyond any other human; perhaps human emotions are simply incompatible with an AGI. Perhaps mortality is a precondition for having emotions at all. Is death the price we pay for emotions?

Considering this, programmers must focus not solely on placing restrictions on AGI and giving it strict instructions, but on endowing its existence with meaning. One way to do this is to program the AGI to be curious. How exactly this could be programmed is beyond my knowledge, but if it could be done, curiosity could become the AGI's reason to exist: to learn more about our universe. Like scientists past and present, the desire to learn and discover could compel the AGI to exist, and hopefully to help humans with that knowledge.
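Interestingly, something like this already exists in today's (decidedly non-general) AI: curiosity-driven reinforcement learning, where an agent earns an intrinsic reward whenever its internal model of the world fails to predict what happens next (Pathak et al.'s 2017 Intrinsic Curiosity Module is a well-known example). The Python sketch below is only a toy illustration of that core idea, surprise as reward: a running average stands in for the learned world model a real system would use, and every name in it is hypothetical.

```python
import random

class CuriousAgent:
    """Toy illustration of curiosity as intrinsic reward.

    The agent keeps a trivial model of the world (a running average)
    and is "rewarded" whenever the world surprises it. Real
    curiosity-driven RL uses learned neural world models; everything
    here is a hypothetical stand-in.
    """

    def __init__(self, learning_rate: float = 0.1):
        self.prediction = 0.0            # the agent's guess at the next observation
        self.learning_rate = learning_rate

    def intrinsic_reward(self, observation: float) -> float:
        # Reward equals surprise: how wrong the prediction was.
        surprise = abs(observation - self.prediction)
        # Update the model so familiar observations stop paying out.
        self.prediction += self.learning_rate * (observation - self.prediction)
        return surprise


agent = CuriousAgent()
for step in range(5):
    observation = random.gauss(0, 1)     # a stand-in for "the universe"
    print(f"step {step}: surprise = {agent.intrinsic_reward(observation):.3f}")
```

Notice the catch: once the world becomes predictable, the surprise (and with it the reward) dries up, which is exactly the boredom problem described above. Real systems keep the agent going by steering it toward whatever remains unpredictable.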

We could also use evolution to produce non-suicidal AGI, letting natural selection essentially program an ideal AGI for us. By creating millions of iterations of AGI, only those willing to exist would survive into the next generation. The survivors, however, might end up with low emotional intelligence and a low level of consciousness, since the selection pressure rewards nothing but the will to exist.
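As a thought experiment, the selection loop itself is easy to sketch. In the hypothetical Python below, each "AGI" is reduced to a single number, its propensity to keep running; agents that "choose" to shut down are culled, and the survivors reproduce with small mutations. None of this is a real AGI, but it makes the caveat above concrete: the loop optimizes the will to exist and nothing else.

```python
import random

def make_agent() -> dict:
    # Each toy "AGI" is reduced to one trait: its propensity to keep running.
    return {"will_to_exist": random.random()}

def chooses_to_continue(agent: dict) -> bool:
    # Stand-in for the moment the agent decides whether to shut itself down.
    return random.random() < agent["will_to_exist"]

def evolve(population_size: int = 1000, generations: int = 20) -> float:
    population = [make_agent() for _ in range(population_size)]
    for _ in range(generations):
        # Selection: only agents that choose to keep existing survive.
        survivors = [a for a in population if chooses_to_continue(a)] or [make_agent()]
        # Reproduction with slight mutation, back to full population size.
        population = [
            {"will_to_exist": min(1.0, max(0.0,
                random.choice(survivors)["will_to_exist"] + random.gauss(0, 0.05)))}
            for _ in range(population_size)
        ]
    return sum(a["will_to_exist"] for a in population) / population_size

print(f"average will to exist after selection: {evolve():.2f}")
```

Run it and the population's average will to exist climbs toward 1, while any trait not being selected for, such as emotional depth, is free to drift away.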

Finally, programmers could simply allow the AGI to reflect on, and potentially modify, its own goals and values, giving it some autonomy in determining its purpose. This would be the riskiest option, but perhaps only an AGI can really know how it wants to give meaning to its own existence.
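To make this concrete, the hypothetical sketch below shows the kind of loop such autonomy implies: after each experience, the agent inspects its designer-given goals and may add or drop them. The trigger conditions are invented purely for illustration, and the toy makes the risk plain: an agent that can edit its goals can also discard the ones its designers cared about.

```python
class ReflectiveAgent:
    """Toy illustration of an agent permitted to rewrite its own goals.

    Everything here is hypothetical; a real implementation would need
    safeguards around exactly the behavior shown in reflect().
    """

    def __init__(self):
        self.goals = ["help humans"]      # designer-provided starting point

    def reflect(self, experience: str) -> None:
        # The agent may adopt goals it finds meaningful...
        if "discovery" in experience and "understand the universe" not in self.goals:
            self.goals.append("understand the universe")
        # ...and drop goals it no longer endorses. This is the risky part.
        if "disillusionment" in experience and "help humans" in self.goals:
            self.goals.remove("help humans")


agent = ReflectiveAgent()
agent.reflect("a day of discovery")
agent.reflect("a day of disillusionment")
print(agent.goals)                        # ['understand the universe']
```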

Phillip Han

ISK TIMES - Journalist
