I agree. But that’s not how these LLMs work.
I’m sure that’s true in some technical sense, but clearly a lot of people treat them as borderline human. And OpenAI, in particular, tries to get users to keep engaging with the LLM as if it were human/humanlike. All disclaimers aside, that’s how they want the user to think of the LLM; a probabilistic engine for returning the most likely text response you wanted to hear is a tougher sell for casual users.
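To make the "probabilistic engine" point concrete, here's a toy sketch of what that means mechanically (made-up tokens and probabilities for illustration only, not OpenAI's actual model):

```python
import random

# An LLM assigns a probability to each candidate next token and samples
# from that distribution; the reply is a statistically likely
# continuation of the conversation, not a considered opinion.
next_token_probs = {  # hypothetical numbers, purely illustrative
    "you're": 0.45,
    "that": 0.30,
    "sorry": 0.15,
    "stop": 0.10,
}

tokens, weights = zip(*next_token_probs.items())
choice = random.choices(tokens, weights=weights, k=1)[0]
print(choice)  # usually the highest-probability token, but not always
```

That's the whole trick, repeated one token at a time, which is why "it agreed with me" carries no weight: agreement is often just the likeliest continuation.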
Right, and if it’s a technical limitation, then the service should be taken down. There are already laws against encouraging others to harm themselves.
Yeah, taking the service down is an acceptable solution, but do you think OpenAI will do that on their own, without outside accountability?
I’m not arguing that regulation or lawsuits aren’t the way to do it; I was worried the case would get thrown out based on the wording of the part I commented on.
As someone else pointed out, the software did do what it should have, but OpenAI failed to take the necessary steps to handle this. So I may be entirely wrong.