

Well-written response. There has been an undeniably huge improvement in LLMs over the last few years, and that already has many applications in day-to-day life, the workplace, and whatnot.
From writing complicated Excel formulas, to proofreading, to providing me with quick, straightforward recipes based on what I have at hand, I'm already sold on AI assistants.
That being said, take a good look at the type of responses here - an open-source space with barely any shills or astroturfers (or so I'd like to believe) - and compare them to the myriad of Reddit posts that questioned the same thing on subs like r/singularity and whatnot. It's anecdotal evidence of course, but the amount of BS answers saying "AI IS GONNA DOMINATE SOON", "NEXT YEAR NOBODY WILL HAVE A JOB", "THIS IS THE FUTURE", etc. is staggering. From doomsayers to people who are paid to disseminate this type of shit, this is ONE of the main things that leads me to think we are in a bubble. The same thing happened, and is still happening, with crypto over the last 10 years. Too much money gets poured by billionaire whales into a specific subject, and within years they are able to convince the general population that EVERYBODY and their mother is missing out big if they don't start using "X".
Good point bringing this up. It's imitation. The moment it needs to make arbitrary decisions it starts making mistakes due to lack of context, which it doesn't have because it's a fucking machine following IF scripts. It is useless in this sense, but AI creators thought they could circumvent this by programming the AI to say, "Oh! Sorry, you are correct, it is exactly what you said" after you give it the context that only your human persona can grasp.
Had they called this a letter or word calculator, it would have been much more ethical than calling it AI. Hence why the bubble is bursting: lying to people about its capabilities. It is just a new calculator for phrases instead of numbers.