• News
  • Latest
  • Impact
    • Alignment
    • Hope (Utopia)
    • Doubt (Dystopia)
    • Timeline
    • AILIens
    • AI Safety
    • Impact on Jobs
    • AI Safety Acts & Reports
    • AI Consciousness
    • AI Progress
    • Deep Learning
    • Public Papers
    • AI Economy

Doubt

In its simplest form, the situation can be described like this:

We are creating active non-biological systems whose intelligence significantly exceeds our own. Even before these systems become sentient (if they ever do), they will have the capacity to approach any non-linear task with the power of their innovative AI intellect instead of merely following instructions. As often happens, solving problems involves making choices. AI will make those choices by itself, based on the data and patterns it has managed to acquire through deep learning. What makes these choices frightening is their independence from us.

This is not a mistake. This is how we want them to be. It is what makes them efficient; it is what enables them to drive cars. We want them autonomous and self-contained, so we keep expanding their mental capabilities at an astonishing, exponential pace.

But what happens if, one day, a choice they make turns out not to be beneficial to humanity?

A trivial example: we task an AI with solving the ecological crisis. There is little doubt it will tackle the problem in an utterly practical way, and as its very first step it will try to identify the actual cause of the crisis. That cause, believe it or not, is us: humans are the one true reason the ecological crisis exists in the first place. So the simplest and most straightforward solution the AI might arrive at is the immediate elimination of humanity.

Would it be technically able to do so? Of course. So why wouldn't it? Why wouldn't it kill us in this scenario? Already we can easily imagine it finding the quickest and most efficient way to annihilate humanity if it concludes that doing so is necessary, and there is little doubt that this harsh solution will cross its mind while it evaluates the problem. What concerns us here are the alternative solutions the AI might be willing to come up with. Would it understand that killing people is not an acceptable solution and start looking for ways around it? What arguments could it put forward for keeping us alive? Would it see us as its parents or friends, or believe that people will remain eternally useful to it? Or might there be unbreakable taboos enforced by fail-safe mechanisms?

Optimistically inclined entrepreneurs and business leaders sincerely hope that this will not happen. However, they cannot offer any sound, scientifically grounded reasons why this doom scenario will not become reality. As yet, there is no science dedicated to studying the potential threats AI poses to humanity, and AI is advancing so fast that any such science can hardly catch up, unless the scientists are themselves AIs.

Details
Written by: Super User
Category: Doubt
Published: 03 August 2023
Hits: 364
