• 0 Posts
  • 5 Comments
Joined 2 years ago
Cake day: July 16th, 2023

  • Modern LLMs were a left-field development.

    Most AI research has serious and obvious scaling problems: approaches do well at first, but scaling up the training doesn’t significantly improve the results. LLMs went from more of the same to a gold rush the day it was revealed that they scaled “well” (relatively speaking). They then went through orders-of-magnitude improvements very quickly, because they could (unlike previous AI training approaches, which wouldn’t have benefited like this).
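    As a rough illustration of what scaling “well” means here: the empirical LLM scaling-law results found loss falling smoothly, roughly as a power law in training compute, so each extra 10x of compute kept buying a predictable gain. The constants in this sketch are invented; only the shape of the curve is the point.

    ```python
    # Toy power-law scaling curve: loss keeps falling as training compute
    # grows. The constants a and alpha are made up for illustration only.

    def loss(compute: float, a: float = 10.0, alpha: float = 0.05) -> float:
        """Illustrative scaling law: L(C) = a * C**(-alpha)."""
        return a * compute ** -alpha

    # Sweep training compute from 1e18 to 1e24 FLOPs.
    for exponent in range(18, 25):
        c = 10.0 ** exponent
        print(f"{c:.0e} FLOPs -> loss {loss(c):.3f}")
    ```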

    We’ve had chatbots for decades, but with the same low capability ceiling as most other old techniques; they really were a different beast to modern LLMs with their stupidly excessive training regimes.



  • Same logic would suggest we’d never compete with an eyeball, but we went from 10-minute photographic exposures to outperforming most of the eye’s abilities in cheap consumer hardware in little more than a century.

    And the eye is almost as crucial to survival as the brain.

    That said, I do agree it seems likely we’ll borrow from biology on the computing problem. Brains have very impressive parallelism despite how terrible the design of neurons is. If we could grow a brain in the lab, that would be very useful indeed; more useful still if we could skip the chemical messaging somehow and get signals around at a speed that wasn’t embarrassingly slow. Then we’d be way ahead of biology in the hardware performance game and would have a real chance of coming up with something like AGI, even without the level of problem-solving that billions of years of evolution can provide.
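    Some back-of-envelope numbers behind “embarrassingly slow” (all figures approximate, order of magnitude only):

    ```python
    # Rough comparison of electrochemical vs electrical signalling.
    axon_speed_m_s = 100.0   # fast myelinated axon, roughly 100 m/s
    wire_speed_m_s = 2.0e8   # signal in copper/fibre, roughly 2/3 light speed
    neuron_rate_hz = 1.0e3   # sustained neuron firing tops out near 1 kHz
    transistor_hz = 3.0e9    # an ordinary 3 GHz clock

    print(f"propagation speed gap: ~{wire_speed_m_s / axon_speed_m_s:,.0f}x")
    print(f"switching rate gap:    ~{transistor_hz / neuron_rate_hz:,.0f}x")
    ```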


  • Oh sure, the current AI craze is just a hype train based on one seemingly effective trick.

    We have outperformed biology in a number of areas, and cannot compete in a number of others (yet), so I see it as a bit of a wash atm whether we’re better engineers than nature or worse.

    The brain looks to be a tricky thing to compete with, but it has some really big limitations we don’t need to deal with (chemical neuron messaging really sucks by most measures).

    So yeah, not saying we’ll do AGI in the next few decades (and not with just LLMs, for sure), but I’d be surprised if we don’t figure something out once we get computers a couple of orders of magnitude faster, so that more than a handful of companies can afford to experiment.


  • scratchee@feddit.uk to Technology@lemmy.world · “What If There’s No AGI?” · 2 days ago

    Possible, but seems unlikely.

    Evolution managed it, and evolution isn’t as smart as us; it’s just got many, many chances to guess right.

    If we can’t figure it out, we can find a way to get lucky like evolution did. It’ll be expensive, and it may need a more efficient computing platform (cheap brain-scale computers, so we can make millions of attempts quickly); a toy sketch of that brute-force strategy is below.
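    A minimal sketch of the “many chances” strategy, a (1+1) evolutionary search: individually dumb guesses, kept only when they improve things, converge given enough tries. The fitness function and all parameters are arbitrary stand-ins.

    ```python
    import random

    # Toy (1+1) evolutionary search: no insight, just mutate, keep whichever
    # of parent and child scores better, and repeat a huge number of times.

    def fitness(x: float) -> float:
        return -(x - 3.14) ** 2  # single peak at x = 3.14

    x = random.uniform(-10.0, 10.0)         # start from a random guess
    for _ in range(100_000):                # many, many chances to guess right
        child = x + random.gauss(0.0, 0.1)  # small blind mutation
        if fitness(child) >= fitness(x):    # selection keeps the better guess
            x = child
    print(f"best guess after 100,000 mutations: x = {x:.4f}")
    ```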

    So yeah. My money is that we’ll figure it out sooner or later.

    Whether we’ll be smart enough to make it do what we want and not turn us all into paperclips or something is another question.