• Alien Nathan Edward@lemm.ee
    11 months ago

    doesn’t take a lot to imagine a scenario in which a lot of people die due to information manipulation or the purposeful disabling of safety systems.

    doesn’t take a lot to imagine a scenario where a superintelligent AI manipulates people into being its arms and legs (babe, wake up, new conspiracy theory just dropped - roko is an AI playing the long game and the basilisk is actually a recruiting tool).

    doesn’t take a lot to imagine an AI that’s capable of seizing control of a lot of the world’s weapons and either guiding them itself or taking advantage of onboard guidance to turn them against their owners, or using targeted strikes to provoke a war (this is a sub-idea of manipulating people into being its arms and legs).

    doesn’t take a lot to imagine an AI that’s capable of purposefully sabotaging the manufacture of food or medicine in such a way that it kills a lot of people before detection.

    doesn’t take a lot to imagine an AI capable of seizing and manipulating our traffic systems in such a way as to cause a bunch of accidental deaths and injuries.

    But overall my rebuttal is that this AI doom scenario has always hinged on a generalized AI, and what people currently call “AI” is a long, long way from a generalized AI. So the article is right: ChatGPT can’t kill millions of us. Luckily, no one was ever proposing that ChatGPT could.