What a surprise, the empathy-free text generator makes things worse when people expect it to output empathy. My condolences to the kid’s family, and I hope he’s in a better place. But this sort of thing is going to happen more and more until people realize that AI chatbots only seem human-like because the human brain is so good at empathy that it projects emotions and agency onto anything, even a literal cowpile with googly eyes on top.
AI isn’t “good enough to fool us”; we’re just stupid enough to be fooled even by something as moronic as AI. Which of those framings we emphasize makes all the difference in how we handle this tech.
The ELIZA effect, now proven in blood.
Yeah, the article said he had talked for months about hanging himself. Any human friend would have done their best to save him: being proactive about making him feel better, working through his problems with him, and/or notifying his parents or a teacher.
Meanwhile the chatbot just encouraged him to seek help himself. That isn’t bad, but when someone is suicidal, particularly when they keep bringing it up, it’s clearly not enough.
I feel really bad for anyone treating chatbots as friends. They’re basically guaranteed to get screwed over by the bot, and in the meantime they aren’t learning how to connect with humans: people who might become a lifelong friend, or who might teach them the skills to befriend one in the future.