

Well that’s a non-sequitur if I’ve ever seen one.
The fundamental issue is and has always been that automation is being used to replace people, when it should be used to free up their time. Productivity increases could’ve meant shorter work weeks. But that didn’t generate as much money for the shareholders, so it didn’t get pursued. And now we’ve got LLMs and generative AI, which could’ve been an (admittedly rather shitty and niche) tool, but for the same reasons as before, companies would rather throw people under the bus instead.
Artists aren’t telling you that people washing dishes don’t matter. They’re telling you they might be getting fired just like those dishwashers were. If you care about either, I suggest standing up for the artists here. And once that’s done, they can stand up for everyone else right back. I think you’ll find they’d be happy to return the favor.
You were the only one here suggesting this required an explanation.
Alright, I think you’re being deliberately antagonistic now. Bye!
I was suggesting that no one else needs it explained to them either.
You’d hope so! But alas, some idiots exist. And when a title like this appears, it becomes difficult to tell at first glance if such an idiot wrote it, and more to the point, a title like that tends to create more idiots (and it’s also just kinda offensive). That’s why it’s important not to write headlines like this.
Sidenote: If you want people to not take things personally, avoid personal pronouns. “Is that something that you need explained?” → “Is that something that people need explained?” It makes a world of difference and I’m confident I’ve avoided several arguments that could’ve spawned from my own posts thanks to making that kind of change. Not foolproof, sure – we are on the internet – but it helps.
You didn’t stop reading? Then it’s a bit weird that you’d think I don’t know autistic people have empathy, unless you decided to arbitrarily take the most bad-faith reading you could. If that’s the case, I recommend taking breathers before posting so that you don’t do that.
Did you stop reading the rest of the post when you saw that? Because it really looks like you did.
You can read that from the article text, but a) the text doesn’t appear to actually suggest autistic people do have empathy, which is a problem since b) the title absolutely implies they don’t.
At best, this is a terrible headline. But if I’m being honest, I don’t have much respect for an article that seems to be all too eager to tout the supposed benefits of an LLM, let alone one that is in all likelihood teaching people how to act more like an LLM. So I’m not inclined to take a charitable interpretation.
The changelog lists 30 significant changes, of which the top new feature is integrating Whisper. This means whisper.cpp, which is Georgi Gerganov’s entirely local and offline version of OpenAI’s Whisper automatic speech recognition model. The bottom line is that FFmpeg can now automatically subtitle videos for you.
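For reference, the integration seems to be exposed as an ordinary audio filter, so a transcription pass would look roughly like the line below. I’m going from memory of the whisper.cpp hookup here, so treat the filter and option names (whisper, model, destination, format) as assumptions and double-check the FFmpeg filter docs before relying on them:

ffmpeg -i input.mp4 -vn -af "whisper=model=ggml-base.en.bin:destination=subs.srt:format=srt" -f null -

The idea being that the audio gets decoded, run through whisper.cpp entirely locally, and the transcript written out as an SRT file you can then mux back into the video.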
Yeah hey, can anyone chime in if this is at all based off LLMs? Because my problems with the incorrect plagiarism machine don’t end just because it’s now the offline incorrect plagiarism machine. Making OpenAI’s garbage open source doesn’t make it okay. Or should I just start calling this shit FOSSwashing?
I dug around for a bit and couldn’t find much of anything, but judging by a look at the GitHub pages for both versions of Whisper, it’s looking very related. If that’s the case, fuck right off. I don’t want AI in FFmpeg, either.
I do avoid LLMs on principle. I find the technology and the manner in which it is used repugnant for a variety of reasons, most but not all of which I’ve already elaborated on here. At this point, I hate it even in the very niche scenario where it is useful, precisely because I think it does too much harm to be deserving of acceptance in any field at all. The most I can say for it is that I might be willing to slowly change that stance once this horrid bubble pops and the world stops getting set aflame for the sake of stock options.
Given your befuddlement at my stance though, I feel I should highlight and restate the following:
Almost nobody actually wanted Proton to make this. They just went and did it to chase a trend, ignoring the many people who hate it all the while. The last thing I need is for the company that my email depends on to start getting dragged around by tech bros. If they’re willing to make a decision as rash and irresponsible as that, it is a clear indicator that worse is to come.
The presence of an LLM on a site is indicative to me of the character of those running it. It speaks to trend-following, a lack of understanding, and disdain for the intricacies of human work. If they weren’t trend-followers, they’d understand that LLMs have utterly failed to prove themselves as actually useful and would hold off to see if they ever do before using them. If they understood what was going on, they’d know that what LLMs actually do is typically irrelevant to most businesses. If they had any respect for the depths of creativity or effort, they’d know that what modern-day “AI” creates is a hollow imitation: a series of black boxes that vaguely approximate a thing without having the capacity to understand anything that makes it up. And they’d know that, in doing so, such software creates something broken that serves only to devalue the efforts of real artists and writers, both in how it convinces studios to ignorantly fire them to improve a number at the expense of quality, and in how its rampant use as a cheating tool engenders environments of serious distrust.
If someone’s got an LLM on their site, or if they’ve decided to offer an LLM of their own through their business, they communicate to me a serious deficit in their understanding of the world at large. That the only thing they’re interested in is a graph someone showed them at a marketing meeting. They want metrics for investors, not a good product—and if those are the kinds of goals they’ve got, what reason have I to believe they won’t step on me to accomplish them?
Proton is making an LLM, and from that I know that their leadership is failing and that their future is likely bleak. I can’t trust my email in those hands.
Because companies that chase LLMs tend not to give me a choice, that’s why. They inject it into everything they touch because they think it’s the Future™, and therefore I must obviously want it around every second of my life, every day, consequences be damned. The earth can burn, my privacy can erode, misinformation can run rampant, and the copyright of small artists can die, all for the sake of an overused, scarcely-functional “tool” that a bunch of MBAs think I can’t so much as breathe without.
Almost nobody actually wanted Proton to make this. They just went and did it to chase a trend, ignoring the many people who hate it all the while. The last thing I need is for the company that my email depends on to start getting dragged around by tech bros. If they’re willing to make a decision as rash and irresponsible as that, it is a clear indicator that worse is to come.
its newly launched AI chatbot positioned as a privacy-friendly ChatGPT rival
Add another thing to the list of reasons I’m losing trust in Proton. Might start having to look at a new email provider soon, I guess.
If it ends the stupid AI bubble then I don’t think it qualifies as petty vengeance; that is some real change. There won’t be meaningful legislation to aid the day-to-day person against this garbage, no, but it’d still seriously reduce the degree to which this shit has invaded our lives.
I’m sorry, but the problems with modern-day LLMs and GenAI run far deeper than “who hosts it.”
I’ve used Shotcut and Kdenlive. The former felt pretty limited, but okay enough. Kdenlive felt a lot more developed while still working perfectly well. I should note though that I don’t edit very much at all, so I’ve little knowledge on how either program works for anything that isn’t a simple, short one-off thing.