Sckharshantallas

  • 0 Posts
  • 21 Comments
Joined 2 years ago
Cake day: June 11th, 2023


  • I’m not writing a paper or essay… so my standards are different.

    It actually shouldn’t matter in this case. Wikipedia isn’t a “source” of anything; it states facts and backs them with sources (though not always: plenty of articles have whole paragraphs flagged as missing a source). It’s also public, so anyone can add things without peer review.

    So if you actually care about whether some information is correct, you should check what the source is. And if something is wrong, you can do your part and change the text to be more neutral or better phrased. Edits that improve pages almost always stick.

    In the end it’s all ants’ work: updating and fixing the huge amount of badly written stuff in there.



    There’s no problem in citing that an interview claimed fact X. Then, if the claim is disputed, other reputable news sources might say it’s likely untrue, and you can cite them too.

    When you present the facts as they are instead of trying to portray them as absolute truths, you’re doing the right work for Wikipedia.

    Even scientific facts aren’t “the truth”, just our current understanding of things. Wikipedia isn’t about establishing ultimate truth; it’s about documenting and organizing information so that people can get a grasp on a subject.



    Yeah, countries should realize that brain drain is a much more serious issue than it’s usually portrayed as.

    But honestly, the issues that lead to brain drain are far beyond what one or a few people in power can fix. It’s usually caused by deep societal problems, things that emerge as small dysfunctions snowball all the way up to the scale of the whole country.

    For example, I’ve seen articles like this which, in my opinion, summarize the real issue in Brazilian society. But one could also argue this behavior becomes prevalent because society is already dysfunctional and people normalize the current way of thinking. At the scale of a whole country, it’s really a chicken-and-egg problem.







  • It is unpredictable because there are so many permutations

    Actually, LLMs aren’t unpredictable only because the space of possible outputs is combinatorially huge, though that certainly doesn’t help us understand them.

    Compare proteins: there’s an astronomical number of possible ones, yet biophysics can make somewhat accurate predictions from the properties we know (even if they require careful testing against the real thing).

    For example, it might be tempting to compute token associations somehow and build a function mapping how adding this or that token to the input changes things, to at least estimate what the result would be.

    But with LLMs, changing a single token in a prompt sometimes produces a disproportionate or unintuitive change in the result, because the perturbation can be amplified or dampened depending on how the internal layers are organized.
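
    To make that concrete, here’s a toy numerical sketch. It is not an actual LLM, just random tanh layers standing in for a network, with the weight scale deliberately chosen so perturbations tend to grow:

    ```python
    # Toy illustration only: random tanh layers standing in for a real network.
    # Shows how a tiny input perturbation can grow layer by layer.
    import numpy as np

    rng = np.random.default_rng(42)
    dim, depth = 64, 48
    # Weight scale large enough that small differences tend to amplify.
    layers = [rng.normal(0.0, 2.0 / np.sqrt(dim), (dim, dim)) for _ in range(depth)]

    def forward(x):
        for W in layers:
            x = np.tanh(W @ x)  # one linear map + nonlinearity per "layer"
        return x

    x = rng.normal(size=dim)
    x_nudged = x.copy()
    x_nudged[0] += 1e-6  # a nudge on the scale of swapping one token

    print(np.linalg.norm(forward(x) - forward(x_nudged)))
    # The output difference ends up orders of magnitude larger than 1e-6.
    ```

    A transformer is far more structured than random matrices, but this kind of sensitivity is the same reason a one-token edit can flip an entire continuation.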

    And even if the model’s internal probability distribution were perfectly understood, its sampling step (top-k, nucleus sampling, temperature scaling) adds another layer of unpredictability.
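
    The sampling step itself is easy to sketch. This is a generic illustration with made-up logits and arbitrary temperature/top-k values, not any particular model’s implementation:

    ```python
    # Generic sketch of temperature scaling + top-k sampling over next-token logits.
    # The logits are invented for illustration; a real model's network produces them.
    import numpy as np

    rng = np.random.default_rng()

    def sample_next_token(logits, temperature=0.8, k=3):
        scores = np.asarray(logits, dtype=float) / temperature  # sharpen or flatten
        top = np.argsort(scores)[-k:]                  # keep the k best-scoring tokens
        probs = np.exp(scores[top] - scores[top].max())
        probs /= probs.sum()                           # softmax over the survivors
        return rng.choice(top, p=probs)                # random draw each call

    logits = [2.0, 1.5, 0.3, -1.0, -3.0]  # hypothetical scores for a 5-token vocabulary
    print([sample_next_token(logits) for _ in range(10)])
    # Same logits, different output each run: determinism ends at this step.
    ```

    Nucleus (top-p) sampling works the same way, except it cuts by cumulative probability mass instead of a fixed k.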

    So while the process is deterministic in principle, it isn’t calculable in any tractable sense, much like weather prediction.





  • It’s very possible for someone to appear fine in public while struggling privately. The family can’t be blamed for not realizing what was happening.

    The bigger issue is that LLMs were released without sufficient safeguards. They were rushed to market to attract investment before their risks were understood.

    It’s worth remembering that Google and Facebook already had systems comparable to ChatGPT, but they kept them as research tools because the outputs were unpredictable and the societal impact was unknown.

    Only after OpenAI pushed theirs into the public sphere (framing it as a step toward AGI) did Google and Facebook follow, not out of readiness, but out of fear of being left behind.