• REDACTED@infosec.pub
      2 days ago

      Seriously. There have always been people with mental problems or a tendency toward self-harm. You can easily find ways to off yourself on Google. You can get bullied on any platform. LLMs are just a tool. How detached from reality you get from reading religious texts or a ChatGPT convo depends heavily on your own brain.

      It’s like how entire genres of video games are now getting censored because of a few online incels.

      • atrielienz@lemmy.world
        22 hours ago

        I like your username, and generally even agree with you up to a point.

        But I think the problem is there are a lot of mentally unwell, isolated people out there using this tool (with no safeguards) as a sort of human stand-in to socialize with.

        If a human actually agrees that you should kill yourself and talks you into doing it, they are complicit and can be held accountable.

        Because chatbots are being billed as a product that passes the Turing test, I can understand why people would want the companies that own them to be held accountable.

        These companies won’t let you look up how to make a bomb on their LLM, but they’ll let people confide suicidal ideation without putting in any safeguards for that. And because they’re designed to be agreeable, the LLM will agree with a person who tells it they think they should be dead.

        • REDACTED@infosec.pub
          20 hours ago

          I get your point, but the reality is that companies do actually put safeguards in place (well, they’ve started to). I feel like I could get murdered on Lemmy for saying this, but I was a ChatGPT subscriber for a year, up until last month. The amount of “Sorry Dave, I cannot do that” replies I recently started getting was ruining my experience. OpenAI recently implemented an entire new system that transfers you to a different model if it detects something mental going on with you.

          • atrielienz@lemmy.world
            4 hours ago

            The negligence lies in marketing a product without considering the implications of what it can do in scenarios that would make it a danger to the public.

            No company is supposed to be allowed to endanger the public without accepting due responsibility, and all companies are expected to mitigate public endangerment risks through safeguards.

            “We didn’t know it could do that, but we’re fixing it now” doesn’t absolve them of liability for what happened before, because they lacked foresight and did no preliminary testing or planning to mitigate their liability. And I’m sure that sounds heartless. But companies do this all the time.

            It’s why we have warning labels and don’t sell certain chemicals in bulk without a license, or to children, etc. It’s why, even if you had the money, you can’t just go buy 20 tonnes of fertilizer without the proper documentation and licenses, as well as an acceptable use case for 20 tonnes.

            The changes they have made don’t protect Monsanto from litigation over the deaths their products caused in the before times. The only difference there is that there was proof they had knowledge of the detrimental effects of those products and didn’t disclose them.

            So I suppose we’ll see.