Well, who would have doubted it? Fuck, 1984 is already here.
Knowing Google, they care more about blurring the lines between AI and reality to confuse and force it onto people than they do about saving a few dollars on storage costs.
Yep.
It’s all about control and manipulation.
They love reminding us who is really in charge.
I’m huge into makeup, and I watch a lot of beauty content on YouTube because I want to see how certain makeup looks and performs before I buy it. This AI bullshit defeats the purpose of demonstrating makeup.
“AI”
Sharpening, denoising, and upscaling barely count as machine learning. They don’t require AI neural networks.
Barely count or not they absolutely ruin every piece of media I’ve seen them used in. They make people look like wax figures and turn text into gibberish.
They don’t require AI neural networks.
Sharpening and denoising don’t. But upscalers worth anything do require neural nets.
Anything that uses a neural network is the definition of AI.
Not true
Company I used to work for had excellent upscalers running on FPGAs that they developed 20+ years ago.
The algorithms have been around for decades; “AI” just gives a bit of marketing sprinkle to something that’s been a solved problem for years.
Well, the algorithms that make up many neural networks have existed for over 60 years. It’s only recently that hardware has been able to make it happen.
“AI” just gives a bit of marketing sprinkle to something that’s been a solved problem for years.
Not true and I did say “any upscaler that’s worth anything”. Upscaling tech has existed at least since digital video was a thing. Pixel interpolation is the simplest and computationally easiest method. But it tends to give a slight hazy appearance.
It’s actually far from a solved problem. There’s a constant trade-off between processing power and quality, and quality can still be improved by a lot.
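To show what “simplest and computationally easiest” means here, this is a hypothetical numpy sketch of plain bilinear interpolation (not anything YouTube actually runs). Every output pixel is a weighted average of its four nearest source pixels, and that averaging is exactly where the slight hazy softness comes from:

```python
import numpy as np

def upscale_bilinear(img, factor):
    """Bilinear upscale of a 2-D grayscale array by an integer factor.

    Each output pixel is a weighted average of the 4 nearest source
    pixels. No learning involved: just fixed linear interpolation,
    which blurs hard edges and gives the classic hazy look.
    """
    h, w = img.shape
    # Positions of the output grid mapped back into source coordinates.
    ys = np.linspace(0, h - 1, h * factor)
    xs = np.linspace(0, w - 1, w * factor)
    y0 = np.floor(ys).astype(int)
    x0 = np.floor(xs).astype(int)
    y1 = np.minimum(y0 + 1, h - 1)
    x1 = np.minimum(x0 + 1, w - 1)
    wy = (ys - y0)[:, None]  # vertical blend weights
    wx = (xs - x0)[None, :]  # horizontal blend weights
    img = img.astype(float)
    # Blend horizontally along the top and bottom rows, then vertically.
    top = img[y0][:, x0] * (1 - wx) + img[y0][:, x1] * wx
    bot = img[y1][:, x0] * (1 - wx) + img[y1][:, x1] * wx
    return top * (1 - wy) + bot * wy
```

A neural upscaler instead *predicts* plausible high-frequency detail rather than averaging, which is why it can look sharper but also invent texture that was never there.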
at least since digital video
Right. Even back in the eighties, UK broadcasters were “upscaling” American NTSC 480i60 shows to 576i50. The results varied. High-ticket shows like Friends and Frasier looked great, albeit a bit soft and oversaturated, while live news feeds looked terrible. If you’ve never seen it, The Day Today has a perfect example of what a lot of US programmes looked like converted to PAL.
Depends on what you’re trying to upscale.
Sharpening is a simple convolution, doesn’t even count as ML.
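For the curious, that claim is easy to demonstrate: a sharpening pass really is just one fixed 3×3 kernel applied everywhere, with no model or training involved. A minimal numpy sketch (hypothetical, purely illustrative):

```python
import numpy as np

# Classic sharpening kernel: identity plus a Laplacian edge boost.
# The weights sum to 1, so flat regions pass through unchanged
# while edges get exaggerated.
KERNEL = np.array([[ 0, -1,  0],
                   [-1,  5, -1],
                   [ 0, -1,  0]], dtype=float)

def sharpen(img):
    """Convolve a 2-D grayscale image with the fixed sharpening kernel.

    Borders are handled by edge padding. This is a plain linear
    filter, not machine learning.
    """
    h, w = img.shape
    padded = np.pad(img.astype(float), 1, mode="edge")
    out = np.zeros((h, w), dtype=float)
    # Accumulate the 9 shifted-and-weighted copies of the image.
    for dy in range(3):
        for dx in range(3):
            out += KERNEL[dy, dx] * padded[dy:dy + h, dx:dx + w]
    return np.clip(out, 0, 255)
```

Run it on a flat gray image and nothing changes; run it across a hard edge and the bright side gets pushed brighter, which is all “sharpening” is.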
I really hate that everything gets the AI label nowadays
The “ai bad” brainrot has everyone thinking that any algorithm is AI and all AI is ChatGPT.
just today someone told me that Vocaloid was AI music too. they’re either too dumb to do some basic fact-checking, or true believers trying to hype up AI by any means necessary
Thisthisthis
But you can use AI for that
It’s very likely to do with compression codecs to save money.
Ostensibly, yes. Just like the Patriot Act was to fight terrorism.
Nice
(linked from the article about a Netflix series upscale)
Seems like this should be illegal, Google should be broken up, and its leadership imprisoned
I’m down for a breakup but I don’t see how we could twist this into illegality.
You could probably make it illegal to alter people’s videos without their explicit consent. But also the Republicans have shown us that laws mean what the people in charge want
without their explicit consent.
By signing up to this service you agree to allow us to alter or modify your content as we require for efficient operation or to increase content engagement
You can make that kind of thing illegal. I think “shrink-wrap EULAs” are dubious anyway; you could rule that fine print buried in a bunch of other stuff doesn’t count as explicit. There are rules now about cookie acceptance that have changed how the web works, and most sites don’t try to hide the cookie banner because that’s against the rules.
We wouldn’t need so many damn laws to prevent shitty companies from doing shitty things if we could just become the kind of society that doesn’t support shitty companies. The cookie thing is a great example of how a well-intentioned regulation made the internet an even more irritating place to be.
I suppose. But have you tried to get people to care about things? It’s stupid hard. I can’t get most of my friends to stop using Twitter, which is a pretty low stakes change. Nevermind something like “eat less meat” or “walk instead of drive sometimes”
If you can make people care, you can solve a lot of problems
I have tried, and writing my last comment actually brought a lot of repressed rage out. I’ve been too lenient on my friends and family who continue to use things like Facebook and X, because I didn’t want to be that opinionated, ideological snore who won’t shut up about how Facebook is the world’s most prolific purveyor of hate speech, propping up the Trump administration, Israel, LGBT hate groups, the Rohingya genocide, housing discrimination, abortion witch hunts, blah blah blah. But the thing is that these are true things, and people should be appalled enough to never touch a Meta product again, even if it means teaching an elderly family member to learn a new group messaging app.
So I’m back to being a loudmouth bitch who scolds people for using Facebook. And X. But I probably don’t stand a chance with Google.
I kinda doubt you’d be able to write a law that would actually have the effect you’re looking for. In the case of what you just wrote, all YouTube would need to do is write into their ToS that by uploading to their platform you’ve given them explicit permission to alter the video for purposes of storage space or increasing/decreasing quality.
I think you’re underestimating what the law can do, probably because most of the time it’s used to bolster rich assholes.
It might not be currently illegal but I think there should be a law defining “crimes against society” that only applies to corporations and politicians. It could be vague like “disorderly conduct” but just for corpos and politicians and would include things like lying to the public and could have punishments like corpos losing their business license (death) and banishment to the moon/sun for politicians.
Overly vague laws are never a good thing.
We say that so often, it’s lost meaning.
From what I’ve seen so far, this is only being done to Shorts, and what’s happening is that they’re being permanently stored at a lower quality and size and then upscaled on the fly. I mean… it feels kinda fair to me. There’s a good reason YouTube has so little competition, and it’s because of how hard and expensive maintaining a service like this is. They’re always trying to cut costs, and storage is gonna be a big one. Personally, I’m glad it’s just Shorts for now. It absolutely shouldn’t be happening to people who are paying for the service or making money for it, though.
I mean yeah, it doesn’t seem entirely unreasonable. But if it actually was reasonable, wouldn’t they just inform the uploader?
Or give an option to toggle. Surely letting people turn it off would save them even more resources, if they don’t have to bother with upscaling the video in the first place.
It likely costs them less to upscale than it does to store and serve a full sized video, so they’re not giving the uploader the choice.
Storage is very cheap. This only makes sense if they actually do the upscaling client side
It’s not so much that they down- and upscale the video of shorts, their algorithm changes the look of people. It warps skin and does a strange sort of sharpening that makes things look quite unreal and almost plastic.
It’s a filter that evens out the look to match images generated by, say, Grok or one of the other AI generators.
In a year people will think that “AI-look” is a normal video look, and stuff generated with it is what humans can look like. We will see crazed AI-fashion looks popping up.
Yeah, upscaling can generate artefacts and such.
It would not make any sense for them to be upscaled on the fly. It’s a computationally intensive operation, and storage space is cheap. Is there any evidence of it being done on the fly?
It would if they can do it on your device.
While it could theoretically be done on device, it would require the device to have dedicated hardware that is capable of doing the processing, so it would only work on a limited number of devices. It would be pretty easy to test this if a known modified video were available.
AI upscaling can be run on a ton of devices nowadays.
Also people are forgetting it’s not just storage, it’s bandwidth they save with this move. So even if they store both the low and high res copies they can save 4x the bandwidth (or more) serving to devices with upscaling capabilities.
it wouldn’t need dedicated hardware, it would just be slower on phones without that hardware. there’s nothing that AI does that can’t be done on any phone or PC.
same thing with ray tracing, it’s technically possible on cards that aren’t a part of the RTX line, they just can’t do it as fast as an RTX card (per NVIDIA).
That would depend entirely on WHAT it’s doing. I haven’t personally seen any of these videos yet, but based on what was described in the article, I would imagine that a typical CPU would not be able to handle it.
a typical CPU in a phone would do just fine. AI effects in photo and video started coming out in phones before new phones started having dedicated hardware to accelerate it. phones have been doing stuff as intensive as that for years. for example, iPhones have been able to make complex and precise full scale textured replicas of real world environments that you can then import into Blender using their lidar capabilities for years. that’s quite a bit more intensive of a process than using AI to edit a video.
and as for a PC, there isn’t anything you can do to edit a video using AI that a PC CPU would not be able to handle. if a 10 year old laptop can generate video out of thin air using genAI, then applying a sharpening effect would be a piece of cake. hell, I’ve done stable diffusion on a laptop with just 4GB of VRAM. it’s quite a bit slower than with a faster PC, but certainly doable.
It’s not that computationally intensive to upscale frames. TVs have been doing it algorithmically for ages and looking good doing it. Hell, nVidia graphics cards can do it for every single frame of high end games with DLSS. Calling it “AI” because the type of algorithm it’s using is just cashing in on the buzzword.
(Unless I’m misunderstanding what’s going on.)
You are right that Nvidia cards can do it for games using DLSS. Nvidia also has a version called RTX Video that works for video. But would they really dedicate hardware to playback every single time a user requests a short? That is significantly different from just serving a file to the viewer. If they had all of these Nvidia cards lying around, they surely have better things they could use them for. To be clear, the ONLY thing I am taking issue with is the suggestion that youtube may be upscaling videos on the fly (as opposed to upscaling them once when they are uploaded, and then serving that file 1 million times). I’m simply saying that it makes a hell of a lot more sense any day of the week to upscale a file one time than to upscale it 1 million times.
My video card deffo heats up more when watching youtube over peertube. I’m pretty sure they’re using my graphics card for upscaling.
It would make sense if it’s a scheme to inject ads directly into the stream so adblockers wouldn’t work anymore.
They could do that without upscaling. Upscaling every video on the fly would cost an absolute shit ton of money, probably more than they would be making from the ad. There is no scenario where they wouldn’t just upscale it one time and store it.
This is shitty journalism that massively distorts what actually happened. It’s just traditional video filters, and AI panic.
Legitimate critique of this demonic technology is not FUD!
There is no AI panic. There is distrust of the intentions of the companies pushing it. Can you trust Google, Amazon, Microsoft, Meta, Anthropic, etc.?
There is an AI panic, just like there was a microprocessor panic 50 years ago. Distrust and panic are different things. There is also AI distrust. There is also an AI revolution, an AI bubble, and a whole new AI epoch. There’s lots of AI shit going on right now, and panic is certainly one of them.
This article is AI panic because it’s what we would call a hallucination if an LLM wrote it. There is no AI in this story. People in a panic often jump at nothing.
yucky, shorts lol
I KNEW THOSE SHORTS I’VE BEEN WATCHING HAD THE “AI LOOK” GOD-DAMNIT! With the smooth faces and the weird plastic looking contrast.
Don’t watch shorts
Why? I like shorts, bite sized, shaped for mobile when I’m in bed or shitting, interesting content — my feed is very curated after many years of training it, so I only ever get interesting stuff, no brain rot 👍. Coincidentally my Watch Later list is getting out of control. 😓
My biggest issue with Shorts, as someone who’s watched them, is they very often leave out a lot of context or very important information. Shorts are just an evolution of clickbait titles or inflammatory headlines, in my opinion. There are some that are really good, but the nature of Shorts means you’re exposed to all types, not just the good ones.
I pretty much get only good quality content. I am very particular about my viewing history. A lot of shorts are probably click bait, but I’ve been very diligent with down voting, and pressing “don’t show this account again”, removing accidental garbage from my viewing history, stuff like that. I believe it has paid off in the end.
I like
W R O N G
Looking at the vote ratio, apparently so!
I had accidentally fat fingered a downvote while laughing at myself. Fixed it so your ratio looks better now.
Haha no worries, I was just curious 😄 Thanks buddy!
lmao why am I getting down voted 😆 serious replies only please 🙏
Stop consuming content.
Guess I better get off Lemmy then
Well, youtube is not even intended to host quality content anymore, but besides that, this appears to just be visual tweaks. The title is trying to be vague enough that one could assume it’s altering the content itself, which would be a real concern. It’s not doing that (for now). Video graphics seems like an awfully minor thing to be screaming about AI over. Especially when AI has actual repercussions in the knowledge-accuracy sector.
From what I’ve heard this mostly happens on YT Shorts, and the AI upscaling they’re doing is making people look like plastic and uncanny as hell.
I haven’t noticed on normal videos, since that’s pretty much all I watch.
It’s a great way to make money. Pay extra to opt out of AI enhancement!
there might be a few youtubers or purists who would pay to opt out of something like that, but the average uploader isn’t gonna give two shits about enhancements youtube makes. especially when it took this long for a few people to even notice.
I know. Not even one percent of the population can see and hear the difference. Most people can’t even tell what’s human-made and what’s AI slop.
I wouldn’t say people are incapable of noticing the difference. most people just don’t care as much as a very vocal minority of the population seems to. especially people watching shorts. nobody watching shorts is looking for quality, they’re looking for short videos that don’t outlast their attention span. it doesn’t matter whether or not something is AI, all that matters is it engages them for ten seconds or so till they scroll to the next short, and keeps the dopamine flowing.
Tbh, I kind of just thought people were uploading worse-quality videos to Shorts, or that people’s phones were doing some bullshit smoothing filter. I didn’t realize what was going on until I watched a creator I know would never use such an uncanny filter.
YouTube doing this without telling anyone is kinda crazy. There’s a few people who’ve been complaining their own shorts don’t even look like them
Counterpoint: no they couldn’t.