Maruchan for sure
Zarxrax@lemmy.world to Technology@lemmy.world • YouTube secretly used AI to edit people's videos. The results could bend reality (English, 3 points, 3 days ago)

That would depend entirely on WHAT it's doing. I have not personally seen any of these videos yet, but based on what was described in the article, I would imagine that a typical CPU would not be able to handle it.
Zarxrax@lemmy.world to Technology@lemmy.world • YouTube secretly used AI to edit people's videos. The results could bend reality (English, 41 points, 3 days ago)

You are right that Nvidia cards can do it for games using DLSS. Nvidia also has a version called RTX Video that works for video. But are they really going to be dedicating hardware to playback every single time a user requests to play a short? That is significantly different from just serving a file to the viewer. If they had all of these Nvidia cards lying around, they surely have better things they could use them for. To be clear, the ONLY thing I am taking issue with is a comment that YouTube may be upscaling videos on the fly (as opposed to upscaling them once when they are uploaded, and then serving that file 1 million times). I'm simply saying that it makes a hell of a lot more sense any day of the week to upscale a file one time than to upscale it 1 million times.
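The once-vs-per-view argument above comes down to simple arithmetic. A rough sketch, where every number (GPU time per upscale, cost per GPU-second, view count) is invented purely for illustration:

```python
# Hypothetical back-of-envelope comparison: upscaling a short once at
# upload time vs. re-upscaling it on the fly for every playback.
# All figures below are made up for illustration only.

GPU_SECONDS_PER_UPSCALE = 30    # assumed GPU time to upscale one short
COST_PER_GPU_SECOND = 0.0001    # assumed dollars per GPU-second
VIEWS = 1_000_000               # views of a moderately popular short

upscale_once = GPU_SECONDS_PER_UPSCALE * COST_PER_GPU_SECOND
upscale_per_view = upscale_once * VIEWS

print(f"once at upload:  ${upscale_once:.4f}")
print(f"every playback:  ${upscale_per_view:,.2f}")
# The on-the-fly approach burns VIEWS times more GPU time, while
# storing one extra encoded file after a single upscale is cheap.
```

Whatever the real per-upscale cost is, the on-the-fly approach multiplies it by the view count, which is the point being made.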
Zarxrax@lemmy.world to Technology@lemmy.world • YouTube secretly used AI to edit people's videos. The results could bend reality (English, 65 points, 3 days ago)

While it could theoretically be done on device, it would require the device to have dedicated hardware that is capable of doing the processing, so it would only work on a limited number of devices. It would be pretty easy to test this if a known modified video were available.
Zarxrax@lemmy.world to Technology@lemmy.world • YouTube secretly used AI to edit people's videos. The results could bend reality (English, 9 points, 3 days ago)

They could do that without upscaling. Upscaling every video on the fly would cost an absolute shit ton of money, probably more than they would be making from the ad. There is no scenario where they wouldn't just upscale it one time and store it.
Zarxrax@lemmy.world to Technology@lemmy.world • YouTube secretly used AI to edit people's videos. The results could bend reality (English, 352 points, 3 days ago)

It would not make any sense for them to be upscaled on the fly. It's a computationally intensive operation, and storage space is cheap. Is there any evidence of it being done on the fly?
Zarxrax@lemmy.world to Ask Lemmy@lemmy.world • How easy/hard was it to for y'all to learn your multiplication tables? What grade did start learning and when did you know it all? (up to 12x12 I mean) (3 points, 4 days ago)

I don't remember if it was 2nd or 3rd grade, but I just memorized them. My grandmother bought flash cards and drilled them with me every day until I had memorized them all.
In the early days, game shows were sometimes rigged. Then laws were passed in the USA requiring fair play. So, no, I don't think it's rigged. I don't watch the show often, but I have definitely seen people lose.
Zarxrax@lemmy.world to Nintendo@lemmy.world • Bubsy in: The Purrfect Collection Release Date Trailer | September 9 (English, 7 points, 12 days ago)

Spoiler: the sequels aren't better
Think of it like casino chips for crypto. If you want to buy a lesser-known cryptocurrency, no one is trading it directly for dollars; the trades happen from one crypto to another. You spend your actual cash to buy in, then you can cash out when you are done. But if you are trading all the time, looking for opportunities, you don't want to just leave everything parked in some random cryptocurrency, because it's highly volatile. Stablecoins are like the casino chips because they hold a relatively stable value, so you keep your money in those and it's ready to trade whenever the need arises.
Stablecoins can hold a stable value because they are usually backed by actual assets like money and securities.
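The "casino chips" analogy above can be sketched as a toy model. Every price and amount here is hypothetical, and real stablecoins only target (not guarantee) their $1 peg:

```python
# Toy model of the "casino chips" analogy. All prices are made up.
PEG = 1.00  # a stablecoin targets ~$1 because it is backed by reserves

def usd_to_stable(usd):
    return usd / PEG            # buy in: dollars -> "chips"

def stable_to_usd(stable):
    return stable * PEG         # cash out: "chips" -> dollars

# Trade: spend stablecoins on a volatile coin, later sell it back.
chips = usd_to_stable(1000.0)   # buy in with $1,000
coins = chips / 2.50            # buy a volatile coin at a made-up $2.50
chips = coins * 3.00            # sell after it rises to a made-up $3.00
print(f"${stable_to_usd(chips):.2f}")   # cash out the profit
```

Between trades the money sits in the stable asset rather than in the volatile coin, which is the whole point of the "chips".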
Zarxrax@lemmy.world to Technology@lemmy.world • Why using ChatGPT is not bad for the environment (English, 2 points, 15 days ago)

I think this is a bad-faith argument because it focuses specifically on ChatGPT and how many resources it uses. The article itself even goes on to say that this is actually only 1-3% of total AI use.
People don't give a shit about ChatGPT specifically. When they complain about ChatGPT, they are using it as a surrogate for AI in general.
And yes, the amount of electricity used by AI is quite significant. https://www.iea.org/news/ai-is-set-to-drive-surging-electricity-demand-from-data-centres-while-offering-the-potential-to-transform-how-the-energy-sector-works
It projects that electricity demand from data centres worldwide is set to more than double by 2030 to around 945 terawatt-hours (TWh), slightly more than the entire electricity consumption of Japan today. AI will be the most significant driver of this increase, with electricity demand from AI-optimised data centres projected to more than quadruple by 2030.
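A quick sanity check on the figures quoted above: if 945 TWh in 2030 is "more than double" today's data-centre demand, that bounds today's figure.

```python
# Arithmetic check on the IEA projection quoted above.
projected_2030 = 945                     # TWh, projected 2030 demand
implied_today_max = projected_2030 / 2   # "more than double" implies
print(implied_today_max)                 # today's demand < 472.5 TWh
```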
I'm not opposed to AI; I use a lot of AI tools locally on my own PC. I'm aware of how little electricity they consume when I am just using them for a few minutes a day. But the problem is that when it's being crammed into EVERYTHING, I can't just say I'm generating a few images per day or doing 5 LLM queries. It's running on the 100 Google searches that I perform, every website I visit will be using it for various purposes, applications I use will be implementing it for all kinds of things, and shopping sites will be generating images of every product with me in the product image. AI is popping up everywhere, and the overall picture is that yes, this is contributing significantly to electricity demand, and the vast majority of that is not for developing new drugs; it's for stupid shit like preventing me from clicking away from Google onto the website that they sourced an answer from.
I consider myself a bad hobbyist programmer. I know a decent bit about programming, and I mainly create relatively simple things.
Before LLMs, I would spend weeks or months working on a small program, but with LLMs I can often complete it significantly faster.
Now, I don't suppose I would consider myself a "vibe coder", because I don't expect the LLM to create the entire application for me, but I may let it generate a significant portion of the code. I generally come up with the basic structure of the program and figure out how it should work, then I might ask it to write individual functions, or pieces of functions. I review the code it gives me and see if it makes sense. It's kind of like having an assistant helping me.
Programming languages are how we communicate with computers to tell them what to do. We have to learn to speak the computer’s language. But with an LLM, the computer has learned to speak our language. So now we can program in normal English, but it’s like going through a translator. You still have to be very specific about what the program needs to do, or it will just have to guess at what you wanted. And even when you are specific, something might get lost in translation. So I think the best way to avoid these issues is like I said, not expecting it to be able to make an entire program for you, but using it as an assistant to create little parts at a time.
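The "little parts at a time" workflow above might look like this. The function and its spec are hypothetical, invented here as an example: the human writes the precise spec (the part that must not be ambiguous), the assistant drafts the small body, and the human reviews it before using it.

```python
# Hypothetical example of the assistant workflow: a precise spec in the
# docstring, a small LLM-drafted body, reviewed by hand before use.

def chunk(items, size):
    """Split items into consecutive lists of at most `size` elements.

    Spec given to the assistant: preserve order, the last chunk may be
    shorter, raise ValueError if size < 1.
    """
    if size < 1:
        raise ValueError("size must be >= 1")
    return [items[i:i + size] for i in range(0, len(items), size)]

print(chunk([1, 2, 3, 4, 5], 2))  # -> [[1, 2], [3, 4], [5]]
```

Because each piece is this small, reviewing the generated code takes minutes, which is what keeps "lost in translation" errors from piling up.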
Most people don’t love their job.