Why would anyone want an editor that doesn’t fact check?
Not sure about Wikipedia, but Conservapedia would find it very useful. In fact, since most of their entries are factually incorrect and read like fantasy, I think AI writing the articles would save them a lot of time.
Bonus: hallucinations can help create new conspiracy theories!
tbh i somehow didn’t even realize that wikipedia is one of the few super popular sites not trying to shove ai down my throat every 5 seconds
i’m grateful now
Don’t count your chickens before they hatch: Jimmy Wales founded Wikipedia, and according to this article he has already used ChatGPT in a review process once.
damn T_T
I will stop donating to Wikipedia if they use AI
Wikipedia already has a decade’s worth of operating costs in savings.
No they don’t because they blast it on inflated exec wages.
Why don’t they blast execs and reduce the expenses.
Just got back from asking them. They said they like cash moneys and don’t like blasting themselves.
What’s funny is that for enormous systems with network effects, we are trying to use governance mechanisms intended for smaller businesses, like a hot dog kiosk.
IRL we have a thing for those, it’s called democracy.
On the Internet it’s either anarchy or monarchy, sometimes bureaucratic dictatorship; even Soviet-style collegial rule is something not yet present in that area.
I recently read that McPherson article about Unix and racism, and how our whole perception of correct computing (modularity, encapsulation, object orientation, even the whole KISS philosophy) is based on that era’s changes in society and the reaction to them. I mean, the real world is continuous, and you can quantize it into discrete elements in many ways. Some are unfit for your task. All are unfit for some task.
So - first, I like the Usenet model.
Second, cryptography is good.
Third, cryptographic ownership of a limited resource is … fine, blockchains are maybe not so stupid. But it’s not really necessary, because one can choose between a few retrieved versions of the same article, based on a web of trust or whatever else. No need to have only one right version.
Fourth, we already have a way to turn a sequence of interdependent actions into state information; it’s called a filesystem.
Fifth, Unix with its hierarchies is really not the only thing in existence, there’s BTRON, and even BeOS had a tagged filesystem.
Sixth, interop and transparency are possible with cryptography.
Seventh, all these also apply to a hypothetical service over global network.
Eighth, of course, is that the global network doesn’t have to be globally visible/addressable to operate globally for spreading data, so even the Internet itself isn’t as necessary as the actual connectivity over which those change messages will propagate where needed and synchronize.
Ninth, for Wikipedia you don’t need as much storage as for, say, Internet Archive.
And tenth - with all these, one can make a Wikipedia-like decentralized system with democratic governance, based on rather primitive principles, other than, of course, the cryptography involved.
(Yes, Briar impressed me.)
EDIT: Oh, about democracy - I mean technical democracy. That an event (making any change) wouldn’t be valid unless processed correctly: signed by people eligible to sign it, for example, where they were made eligible by a signed appointment, and those who signed that were in turn made eligible by a democratic process (signed by a majority of some body, itself signed in turn). That’s the blockchain democracy people dreamed of at some point. Maybe that’s not a scam. It just hasn’t been done yet. Roughly, see the sketch below.
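To make that concrete, here’s a minimal sketch of the chain-of-eligibility check, assuming Ed25519 signatures via PyNaCl; every structure here (events, appointments, the majority rule) is a hypothetical illustration, not an existing protocol:

```python
# Hedged sketch of "technical democracy": an edit event is valid only if its
# signer is eligible, and eligibility is itself a signed appointment tracing
# back to a founding body. All structures here are hypothetical.
from nacl.signing import VerifyKey
from nacl.exceptions import BadSignatureError

def verify(vk_bytes: bytes, message: bytes, signature: bytes) -> bool:
    try:
        VerifyKey(vk_bytes).verify(message, signature)
        return True
    except BadSignatureError:
        return False

def is_eligible(key: bytes, appointments: dict, founders: frozenset,
                seen: frozenset = frozenset()) -> bool:
    """Eligible = a founder, or appointed by signatures of a majority of
    eligible keys. `seen` guards against cyclic appointment chains."""
    if key in founders:
        return True
    if key in seen or key not in appointments:
        return False
    signers = appointments[key]  # list of (verifier_pubkey, signature) pairs
    valid = [vk for vk, sig in signers
             if verify(vk, key, sig)
             and is_eligible(vk, appointments, founders, seen | {key})]
    # Simplified quorum: majority of the founding body's size; a real system
    # would track the current electorate per election round.
    return len(valid) > len(founders) // 2

def is_valid_event(event: dict, appointments: dict, founders: frozenset) -> bool:
    # An edit event counts only if correctly signed by an eligible key.
    return (verify(event["signer"], event["payload"], event["signature"])
            and is_eligible(event["signer"], appointments, founders))
```

No consensus machinery is needed for this part: any replica that sees the same appointments will accept or reject the same events.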
How do you prevent Sybil attacks without making it overly expensive to vote?
How do you mount a Sybil attack on a system where the initial creator signs the initial voters, and then they collectively sign elections, acceptance of new members, and all such stuff?
Doesn’t seem to be a problem for a system with authorized voters.
Flood them with AI-generated applicants.
So why would they accept said AI-generated applicants?
If we are making a global system, then confirmation using some nation’s ID can be done, with fakes removed once they’re found out later. Like with IRL nation states. Or “bring a friend and be responsible if they turn out to be fake.” Or both at the same time.
Would every participant get to see my government-issued ID?
One can elect a small group which will, and that group signs the link between the ID and some intermediate identifier. Then only they will see it.
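One possible shape for that, again just a sketch (PyNaCl for signatures, and every field name is made up): the elected verifiers check the ID privately and publish only a signed link between the member’s key and a commitment to the ID, so the document itself never enters the shared state.

```python
# Hedged sketch: an elected verifier privately checks a government ID and
# publishes only a signed link to a commitment, never the document itself.
import hashlib
from nacl.signing import SigningKey

def attest_member(verifier_sk: SigningKey, member_pubkey: bytes,
                  id_document: bytes) -> dict:
    # Deterministic commitment: the same ID always hashes to the same value,
    # so duplicates ("fakes found out later") are detectable without revealing
    # anything. A real design would use a keyed hash held by the committee
    # to resist brute-forcing known ID numbers.
    id_commitment = hashlib.sha256(b"member-id-v1" + id_document).digest()
    signed = verifier_sk.sign(member_pubkey + id_commitment)
    return {
        "member": member_pubkey,
        "id_commitment": id_commitment,
        "attestation": signed.signature,
        "verifier": verifier_sk.verify_key.encode(),
    }
```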
Wales’s quote isn’t nearly as bad as the byline makes it out to be:
Wales explains that the article was originally rejected several years ago, then someone tried to improve it, resubmitted it, and got the same exact template rejection again.
“It’s a form letter response that might as well be ‘Computer says no’ (that article’s worth a read if you don’t know the expression),” Wales said. “It wasn’t a computer who says no, but a human using AFCH, a helper script […] In order to try to help, I personally felt at a loss. I am not sure what the rejection referred to specifically. So I fed the page to ChatGPT to ask for advice. And I got what seems to me to be pretty good. And so I’m wondering if we might start to think about how a tool like AFCH might be improved so that instead of a generic template, a new editor gets actual advice. It would be better, obviously, if we had lovingly crafted human responses to every situation like this, but we all know that the volunteers who are dealing with a high volume of various situations can’t reasonably have time to do it. The templates are helpful - an AI-written note could be even more helpful.”
That being said, it still reeks of “CEO Speak,” and of trying to find a place to shove AI in.
More NLP could absolutely be useful to Wikipedia, especially for flagging spam and malicious edits for human editors to review. This is an excellent task for dirt-cheap, small, open models, where a modest error rate isn’t super important; cost, volume, and reducing the load on precious human editors are. It’s an existential issue that needs work (see the sketch at the end of this comment).
…Using an expensive, proprietary API to give error-prone yet “pretty good”-sounding suggestions to new editors is not.
Wasting dev time trying to make it work is not.
This is the problem. Not natural language processing itself, but the seemingly contagious compulsion among executives to find some place to shove it when the technical extent of their knowledge is occasionally typing something into ChatGPT.
It’s okay for them to not really understand it.
It’s not okay to push it differently from any other technology just because “AI” is somehow super special and trendy.
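To be concrete about the flagging idea above, here’s roughly all it takes, assuming a hypothetical small classifier fine-tuned on labeled edits (the model name and label are made up; transformers and difflib are real):

```python
# Hedged sketch: cheap triage of incoming edits with a small open model.
# "example-org/edit-vandalism-classifier" and the "VANDALISM" label are
# placeholders for a model you'd fine-tune yourself; nothing here needs
# a large proprietary API.
import difflib
from transformers import pipeline

classifier = pipeline("text-classification",
                      model="example-org/edit-vandalism-classifier")

def triage_edit(old_text: str, new_text: str, threshold: float = 0.8) -> bool:
    """True = queue the edit for human review. A modest error rate is fine:
    a false negative just means an editor sees it later, and a human always
    makes the final call."""
    # Classify only the added lines, via a crude line-level diff.
    added = "\n".join(line[2:] for line in
                      difflib.ndiff(old_text.splitlines(), new_text.splitlines())
                      if line.startswith("+ "))
    result = classifier(added[:2000])[0]  # truncate to the model's context
    return result["label"] == "VANDALISM" and result["score"] >= threshold
```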
That being said, it still wreaks of “CEO Speak.”
I think you mean reeks, which means to stink, having a foul odor.
Those homophones have reeked havoc for too long!
This is another reason why I hate bubbles. There is something potentially useful in here. It needs to be considered very carefully. However, it gets to a point where everyone’s kneejerk reaction is that it’s bad.
I can’t even say that people are wrong for feeling that way. The AI bubble has affected our economy and lives in a multitude of ways that go far beyond any reasonable use. I don’t blame anyone for saying “everything under this is bad, period”. The reasonable uses of it are so buried in shit that I don’t expect people to even bother trying to reach into that muck to clean it off.
This bubble’s hate is pretty front-loaded though.
Dotcom was, well, a useful thing. I guess valuations were nuts, but it looks like the hate mostly came in the enshittified aftermath.
Crypto is a series of bubbles trying to prop up flavored pyramid schemes for a neat niche concept, but people largely figured that out after they popped. And it’s not as attention grabbing as AI.
Machine learning is a long-running, useful field, but ever since ChatGPT caught investors’ eyes, the cart has felt so far ahead of the horse. The hate started, and got polarized, waaay before the bubble has even popped.
…In other words, AI hate almost feels more political than bubble fueled. If that makes any sense. It is a bubble, but the extreme hate would still be there even if it wasn’t.
Crypto was an annoying bubble. If you were in the tech industry, you had a couple of years of people asking if you could add blockchain to whatever your project was, and then a few more years of hearing about NFTs. And GPUs shot up in price. Crypto people promised to revolutionize banking, then pivoted to get-rich-quick schemes. It took time for the hype to die down and for people to realize that the tech wasn’t useful and that the costs of running it weren’t worth it.
The AI bubble is different. The proponents are gleeful while they explain how AI will let you fire all your copywriters, your graphic designers, your programmers, your customer support, etc. Every company is trying to figure out how to shoehorn AI into their products. While AI is a useful tool, the bubble around it has hurt a lot of people.
That’s the bubble side. It also gets a lot of baggage because of the slop generated by it, the way it’s trained, the power usage, the way people just turn off their brains and regurgitate whatever it says, etc. It’s harder to avoid than crypto.
He can also stick AI inside his own ass
Honestly, translating the good articles from other languages would improve Wikipedia immensely.
For example, the Nanjing dialect article is pretty bare in English and very detailed in Mandarin
You can do that, that’s fine, as long as you can verify it’s an accurate translation; that means you need to know the subject matter and both languages.
But you could probably also have used Google Translate and then just fine-tuned the output yourself. Anyone could have done that at any point in the last 10 years.
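For what it’s worth, the open translation models run locally and are a step up from old Google Translate; here’s a minimal sketch using the (real) Helsinki-NLP Mandarin-to-English model, with the workflow around it being my own illustration:

```python
# Hedged sketch: machine-draft a translation of a zh.wikipedia article,
# paragraph by paragraph, for a bilingual human to verify and fix.
# Helsinki-NLP/opus-mt-zh-en is a real open model; the rest is illustrative.
from transformers import pipeline

translator = pipeline("translation", model="Helsinki-NLP/opus-mt-zh-en")

def draft_translation(paragraphs: list[str]) -> list[str]:
    # Keep paragraphs separate so a reviewer can compare side by side.
    return [translator(p, max_length=512)[0]["translation_text"]
            for p in paragraphs]
```

The draft is only a starting point; per the comment above, someone who knows the subject and both languages still has to verify it.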
Google translate is horrendously bad at Korean, especially with slang and accidental typos. Like nonsense bad.
The problem with LLMs and other generative AI is that they’re not completely useless. People’s jobs are on the line much of the time, so it would really help if they were completely useless, but they’re not. Generative AI is certainly not as good as its proponents claim, and critically, when it fucks up, it can be extremely hard for a human to tell, which eats away at a lot of the benefit, but it’s not completely useless. For the most basic example, give an LLM a block of text and ask it to improve the grammar or make a point clearer, then compare the AI-generated result with the original and take whatever parts you think the AI improved.
Everybody knows this, but we’re all pretending it’s not the case because we’re caring people: we don’t want the world drowned in AI hallucinations, we don’t want it taken over by confidence tricksters who just fake everything with AI, and we don’t want people to lose their jobs. But sometimes we are so busy pretending that AI is completely useless that we forget it actually isn’t. The reason these tools are so dangerous is precisely that they’re not completely useless.
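Concretely, the grammar example looks something like this; the OpenAI client is just one option among many, and the model and prompt are illustrative:

```python
# Hedged sketch of "ask for improvements, then take only the parts you like":
# get a suggested rewrite, then diff it against the original so the human
# decides change by change instead of trusting a wholesale rewrite.
import difflib
from openai import OpenAI

client = OpenAI()  # expects OPENAI_API_KEY in the environment

def suggest_and_diff(text: str) -> str:
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # illustrative choice; any capable model works
        messages=[{"role": "user",
                   "content": "Improve the grammar and clarity of this text, "
                              "changing as little as possible:\n\n" + text}],
    )
    improved = response.choices[0].message.content
    return "\n".join(difflib.unified_diff(
        text.splitlines(), improved.splitlines(),
        fromfile="original", tofile="suggested", lineterm=""))
```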
Christ, I miss when I could click on an article and not be asked to sign up for it.
Oh, right! Thanks for reminding me. I tried to archive it the last time but it took forever.
Edit: There ya go: https://archive.is/oWcIr