Researchers convinced ChatGPT to do things it normally wouldn’t with basic psychology.

  • lakemalcom@sh.itjust.works · 2 days ago

    A couple of things:

    • We are talking about chat bots talking to people in this post, and how you can steer the simulated conversation toward whatever you want.
    • It did not debug anything: a human debugged something and wrote about it. That human's write-up, along with a ton of others, was mapped into a huge probability map, and some computer simulated what people talking about this would most likely say. Is it useful? Sure, maybe. Why didn’t you debug it yourself?
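    The "huge probability map" point above can be sketched with a toy example: count which word follows which in a pile of human-written text, then sample likely continuations. This is a deliberately minimal word-bigram model, not how any real LLM works (those use neural networks over subword tokens), but it illustrates the "most likely continuation" idea being argued about.

    ```python
    from collections import Counter, defaultdict
    import random

    # A tiny "probability map": for each word, count which words follow it
    # in some human-written text (hypothetical toy corpus for illustration).
    corpus = (
        "the bug was a null pointer so the fix was a null check "
        "the bug was a race condition so the fix was a lock"
    ).split()

    follows = defaultdict(Counter)
    for cur, nxt in zip(corpus, corpus[1:]):
        follows[cur][nxt] += 1

    def continue_text(word, n=5, seed=0):
        """Sample a likely continuation, weighted by observed counts."""
        rng = random.Random(seed)
        out = [word]
        for _ in range(n):
            options = follows.get(out[-1])
            if not options:
                break  # never saw this word mid-sentence; stop
            words, counts = zip(*options.items())
            out.append(rng.choices(words, weights=counts)[0])
        return " ".join(out)

    print(continue_text("the"))
    ```

    Nothing here understands debugging; it only replays statistics of what people wrote, which is the crux of the disagreement in this thread.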
    • nymnympseudonym@lemmy.world · 2 days ago

      chat bots

      Fair, we need to get terms straight; this is new and unstable territory. Let’s say, LLMs specifically.

      it did not debug anything, a human debugged something and wrote about it. Then that human input and a ton of others were mapped into a huge probability map, and some computer simulated what people talking about this would most likely say

      Can you explain how that is different from what a human does? I read a lot about debugging, went to classes, worked examples…

      Why didn’t you debug it yourself?

      In my case this is enterprise software: many products and millions of lines of code. My test and bug-fixing teams are begging for automation. Bug fixing at scale.