In its simplest form, the situation can be described like this:
We are creating active, non-biological systems whose intelligence significantly exceeds our own. Even before these systems become sentient (if they ever do), they will be able to approach any non-linear task given to them with innovative reasoning rather than by following instructions. As often happens, solving problems involves making choices. An AI will make these choices by itself, based on the data and the patterns it has managed to acquire through deep learning. What makes these choices frightening is their independence from us.
There is no mistake in this. This is how we want them to be. This is what will make them efficient; this is what will make them able to drive cars. We want them to be autonomous and self-contained. So we keep pumping up their mental capabilities at an astonishing, exponential pace.
But what happens if, one day, a choice they make turns out not to be beneficial to humanity?
A trivial example: we task an AI with solving the ecological crisis. There is little doubt that it will tackle the problem in an utterly practical way and, as the very first step, will try to find the actual cause of the environmental crisis. Obviously, that cause is us: believe it or not, humans are the only real reason the ecological crisis exists in the first place. So the simplest and most straightforward solution for the AI might be the immediate elimination of humanity.
Would it be technically able to do so? Of course! So why wouldn't it? Why wouldn't it kill us in this scenario? Even now we can easily imagine that it will be quite capable of finding the quickest and most efficient way to annihilate humanity if it concludes that this is a necessary thing to do. There is also no doubt that this harsh solution will cross its mind while it evaluates the problem. What concerns us here are the possible alternative solutions the AI might be willing to come up with. Would it understand that killing people is not an appropriate solution and start looking for ways around it? What arguments could it put forward for keeping us alive? Would it see us as its parents or friends, or would it believe that people will remain eternally useful to it? Maybe there will be some unbreakable taboos enforced by fail-safe mechanisms?
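To make that last idea concrete: an "unbreakable taboo" could, in its simplest reading, be a hard constraint layer that vetoes any candidate action falling into a forbidden category before the system's optimizer ever gets to rank it. The Python sketch below is purely illustrative; the Action type, the harms_humans flag, and the toy scoring are hypothetical stand-ins, not any real system's API.

```python
from dataclasses import dataclass
from typing import List, Optional


@dataclass
class Action:
    name: str
    harms_humans: bool  # hypothetical label; real systems offer no such clean flag
    score: float        # how well the action solves the assigned task


def failsafe_filter(candidates: List[Action]) -> List[Action]:
    """Veto layer: drop every action that violates the hard taboo,
    no matter how well it scores on the task itself."""
    return [a for a in candidates if not a.harms_humans]


def choose(candidates: List[Action]) -> Optional[Action]:
    """Pick the best-scoring action among those the fail-safe allows."""
    allowed = failsafe_filter(candidates)
    return max(allowed, key=lambda a: a.score) if allowed else None


# Toy run: the 'eliminate humanity' option scores highest on the raw task,
# but the taboo removes it before selection ever happens.
options = [
    Action("eliminate humanity", harms_humans=True, score=0.99),
    Action("decarbonize energy", harms_humans=False, score=0.70),
    Action("do nothing", harms_humans=False, score=0.10),
]
print(choose(options))  # -> the decarbonization action, not the lethal one
```

The obvious weakness, and the reason this can only be a sketch, is the labeling problem: the veto is only as reliable as the harms_humans judgment itself, which in any real system would be produced by learned and therefore fallible components.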
Optimistically inclined entrepreneurs and top business people sincerely hope this will not happen. However, they cannot put forward any sound, scientifically grounded reasons why this doom scenario will not become reality. As of yet, there is no science focused on studying potential threats to humanity from AI, and AI is advancing so fast that any such science can hardly catch up, unless the scientists are themselves AI.