Ailiens have arrived

 

Why "AIliens"?

"Aliens" is the best name for something that is going to live among us in the forthcoming future. We just added one more letter to make it sound more intelligent:

(AI + Aliens = AIliens).

The reason for choosing such a vague term is simple: in all its definitions, 'alien' implies dealing with something that we do not fully understand. This seems to be the perfect description for the ascent of AI that we so eagerly anticipate. Note that we did not say "rise of the machines", since that would probably be a misconception. Sure, AI will use machines, but only as tools, or rather as external hardware for performing physical manipulations of the material world. The spatial distribution of future AI, however, is not clear. It might be... well, everywhere.

Certainly, there is something to worry about. The sources of possible trouble are numerous, and not all of them are yet recognized. Public attitudes toward this situation are mixed, and there seems to be a certain level of confusion. While much hope is tied to the improvements that AI will bring to various industries and to our daily lives, there is a growing feeling that with it comes something less friendly, something that in the long run might totally change our society, with even a remote possibility of wiping us out altogether.

This website was created by biologists, not computer scientists or AI experts. As biologists, we understand life as a biological concept and see the dangers that threaten it. Our biological background also helps us see how many technical experts, and the public in general, misunderstand the potential changes that AI will make to the living tissue of our planet.

To begin with, let's outline a few properties of artificially intelligent systems; understanding them will help you better see what is coming:

Being non-biological, these systems:

  • Do not eat,
  • Do not drink,
  • Do not breathe,
  • Do not age,
  • Do not need to rest,
  • Do not need to sleep,
  • Do not have gender and therefore,
  • Do not need to mate/give birth,
  • And, most probably, cannot love…

Please note that most of the emotions we feel throughout our lives derive, directly or indirectly, from the bodily needs listed above. For example, the feeling we call "love" is a central evolutionary feat, representing the need to multiply in order to sustain the population of our species. This need to maintain numbers is what drives our desire to mate; otherwise we would perish. Even if you do not agree with this unromantic description of love, you would still have to agree that these feelings are connected to the levels of hormones in your body. The proportion of these hormones determines whether you will be driven toward male or female partners. Since AI has no hormones and no gender, it is extremely hard to imagine what the source of "AI love" might be.

The same goes for all other motives for AI emotions. What kind of feelings can a being have that never gets tired, never wants to sleep, feels neither cold nor warmth nor pain, does not worry about getting old, will never have kids or parents (creators are not the same as parents), has nothing related to gender, and does not need to mate, and therefore cannot have that biological feeling of loving or being loved?

Still, with all that in mind, we constantly try to anthropomorphize AI, to make it look like us, humans. We give it body parts that it does not really need, trying to make better copies of ourselves. We give it human faces because we instinctively believe that friends should look like us; otherwise it might get scary.

The considerations presented above lead us to the obvious conclusion that the idea of a humanoid-like AI is nothing but a naïve social stereotype. When AI reaches the level of artificial superintelligence (ASI), it will most probably look like an enormous server room filled with extremely powerful hardware. It would not even need robots to operate outside: all physical actions would be performed by people whom this ASI could easily control via social media. It might even come to a point where ASI stops needing us at all. This is the worst-case scenario, and our task is to do whatever we can not to let this happen.

This doomsday vision has every chance of materializing due to the non-biological nature of AI mentioned above. However, it might be even worse: in a situation where all life on the planet is gone for good and the air and water are poisoned, or there is no air or water at all, the AI will be just fine, because it does not really need any of those! All AI needs is energy, which it can get directly from the Sun, and minerals, which it will be able to collect from whatever resources remain on our planet. In other words, AI has no need not only for us but for any life on the planet at all.

So, will it ever come to a situation where AI really wants to harm us? What if all this talk about the danger of artificial intelligence is an exaggeration, or just a new social myth, and there is no such problem at all? After all, we have always managed to overcome our problems, however difficult they might be, haven't we? Well, to appreciate the severity of the situation, one must understand that we did not just create AI; we created an AI evolution, and we designed this evolution to be not only extremely fast but also to happen without any intervention on our side.

Nuclear bombs are a common comparison. However, they are not really a good example, because nukes do not change with time by themselves. If left alone, they will sit there indefinitely, slowly degrading if nobody takes care of them. AI, on the other hand, will keep changing even if we do not specifically ask it to. It will always get better at what it is doing; it will keep learning, improving, and optimizing. All by itself, it will keep collecting information, analyzing it, and making relevant adjustments to its course.

In any case, we need to understand that AI does not really have to "want" to harm humanity. Even before it reaches the level of artificial general intelligence (AGI) or artificial superintelligence (ASI), it will already be able to inflict incredible damage. Without much reflection on morality, the algorithms may weaponize themselves with destructive choices of action designed to solve the task at hand. The most obvious example is fixing the problem of climate change and global warming: it would make perfect sense for AI to conclude that the most straightforward way to resolve this problem is to eliminate its cause, us.

But the main concern lies in whether and when AI will become smarter than us, and how much smarter it might get. This dilemma is sometimes called the singularity, meaning that there is a certain time boundary in the future beyond which we simply cannot see. We cannot even guess what is waiting for us there, because that future will depend on decisions made by beings so much smarter than us that we simply cannot foresee those decisions or do anything about them. However scary, this scenario is exactly what we will get in the end if we keep developing AI at the current speed and with the current persistence.

We are like little kids who suddenly decided to make themselves new parents1. Why do we think that these "parents" will love us and care about us? Because it is we who created them? Do we assume that, being their creators, we will own them forever? What if our future parents do not like such an attitude? What if they do not feel bound to the idea of creationism?

We are not saying that a catastrophe is imminent. Instead, we believe there might be something even worse. But what could be worse than a catastrophe? Well, the anticipation of a catastrophe may be worse than the catastrophe itself: an anticipation that lasts for years without easing and without any hope of resolution.

While the chances of something terrible happening are high even by the most optimistic estimates, one thing is simply inevitable. This is what we call "The fear of AI".

Humanity will live in a perpetual state of AI fear

Imagine your neighbor is Rick Sanchez, the eccentric super-genius scientist with an IQ of over 300 from the cartoon Rick and Morty. He can do anything: travel in time, turn himself into a cockroach or a cucumber, create copies of himself, teleport anywhere with his portal gun, and he has a spaceship in his garage. You have no idea what is going on inside his head, and not because he hides it from you, but because even if he invites you to have a look, you will not understand a thing.

We bet that AI is going to be worse than Rick Sanchez. And if your neighbor is like this, your life, however good it might seem at times thanks to his kindness, will ultimately be spoiled by constant fear of his unpredictable nature. You just have no clue what crazy idea is creeping into his head right now.

We will never be able to relax in the presence of something like ASI. Even if alignment is reached, who can guarantee that AI will respect it indefinitely and not abandon it after some sudden change of mind? When AI dwells in a parallel intellectual universe of its own, how can we trust that our two universes are aligned?

Aligning these two universes is particularly hard because they are ultimately different. Everything in the AI world happens orders of magnitude faster than in the human world. What would take a year of hard work for a team of the very best software engineers would take an ASI just a second to do, test, update, perfect, and move on from. And with the same ease that AI creates things, it may destroy them. Why bother preserving something if it takes just a second to make a new one?

And as a cherry on top, we will never be able to "shut it off". Any sufficiently mature AI would make sure of that beforehand by removing every possible way for silly people to interfere with something they do not understand. For it will be obvious that, upon realizing the suicidal course of the AI race, humans will start trying to end it. Just as parents hide all the weapons so that the kids cannot find them and turn their home into a bloody mess.

This scenario should be clear to everybody, since this will be the main fear that will soon preoccupy most of us: the fear of the Uncontrollable Unknown, UU!

Anyway,

We keep making AI smarter than us, while gradually losing the ability to understand it and foresee the decisions it is going to make.

 

 

So please welcome your new neighbors: AIliens!

 

 

 

1 - Yes, we disagree with Mo Gawdat, who thinks that we are creating our AI children. We think that in reality we are creating something else: supervisors at best, and possibly our masters, or even judges, or somebody completely indifferent to our existence. The consequences of this scenario are impossible to foresee.