

Alt account of @Badabinski
Just a sweaty nerd interested in software, home automation, emotional issues, and polite discourse about all of the above.
Anubis has worked if that’s happening. The point is to make it computationally expensive to access a webpage, because that’s a natural rate limiter. It kinda sounds like it needs to be made more computationally expensive, however.
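For anyone curious what "computationally expensive" looks like in practice, it's the same trick as hashcash: the server hands the browser a challenge and won't serve the page until the browser finds a nonce whose hash clears a difficulty threshold. A rough Python sketch of the concept (not Anubis's actual code, and the names are made up):

import hashlib
import secrets

def make_challenge() -> str:
    # Server side: a random challenge string handed to the visitor.
    return secrets.token_hex(16)

def solve(challenge: str, difficulty_bits: int) -> int:
    # Client side: grind until sha256(challenge:nonce) starts with
    # difficulty_bits zero bits. Each extra bit doubles the expected work,
    # which is the knob you'd turn to make scraping more expensive.
    target = 1 << (256 - difficulty_bits)
    nonce = 0
    while True:
        digest = hashlib.sha256(f"{challenge}:{nonce}".encode()).digest()
        if int.from_bytes(digest, "big") < target:
            return nonce
        nonce += 1

def verify(challenge: str, nonce: int, difficulty_bits: int) -> bool:
    # Server side: one cheap hash to check the visitor's work.
    digest = hashlib.sha256(f"{challenge}:{nonce}".encode()).digest()
    return int.from_bytes(digest, "big") < (1 << (256 - difficulty_bits))

challenge = make_challenge()
nonce = solve(challenge, difficulty_bits=20)  # ~1M hashes on average
assert verify(challenge, nonce, difficulty_bits=20)

Verifying is one hash while solving is on the order of 2^difficulty hashes, so nearly all of the cost lands on the client.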
Do you have any sources for the 10x memory thing? I’ve seen people who have made memory usage claims, but I haven’t seen benchmarks demonstrating this.
EDIT: glibc-based images wouldn’t be using service managers either. PID 1 is your application.
EDIT: In response to this:
There’s a reason a huge portion of docker images are alpine-based.
After months of research, my company pushed thousands and thousands of containers away from alpine for operational and performance reasons. You can get small images using glibc-based distros. Just look at chainguard if you want an example. We saved money (many many dollars a month) and had fewer tickets once we finished banning alpine containers. I haven’t seen a compelling reason to switch back, and I just don’t see much to recommend Alpine outside of embedded systems where disk space is actually a problem. I’m not going to tell you that you’re wrong for using it, but my experience has basically been a series of events telling me to avoid it. Also, I fucking hate the person that decided it wasn’t going to do search domains properly or DNS over TCP.
Debian is superior for server tasks. musl is designed to optimize for smaller binaries on disk. Memory is a secondary goal, and CPU time is a non-goal. musl isn’t meant to be fast; it’s meant to be small and easily embedded. Those are great things if you need to run in a network- or disk-constrained environment, but for a server? Why waste CPU cycles on a libc that is, by design, less time efficient?
EDIT: I had to fight this fight at my job. We had hundreds of thousands of Alpine containers running, and switching them to glibc-based containers resulted in quantifiable cloud spend savings. I’m not saying musl (or alpine) is bad, just that it’s horses for courses.
Is it? I thought the thing that musl optimized for was disk usage, not memory usage or CPU time. It’s been my experience that alpine containers are worse than their glibc counterparts because glibc is damn good. It’s definitely faster in many cases. I think this is fixed now, but I remember when musl made the python interpreter run like 50-100x slower.
EDIT: musl is good at what it tries to be good at. It’s not trying to be the fastest, it’s trying to be small on disk or over the network.
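If anyone wants numbers instead of anecdotes, the easiest check is to run the same allocation-heavy script under a glibc image and a musl image and compare wall-clock time. A rough sketch (the image tags and the workload are just examples, not a rigorous benchmark):

# bench.py -- run this under a glibc image (e.g. python:3.12-slim) and a
# musl image (e.g. python:3.12-alpine) and compare the timings.
import time

def churn(iterations: int = 50) -> float:
    # Build and throw away piles of small objects; this leans on
    # malloc/free and string handling, which is where the two libcs
    # tend to diverge the most.
    start = time.perf_counter()
    for _ in range(iterations):
        data = [{"key": i, "value": str(i) * 8} for i in range(50_000)]
        data.sort(key=lambda d: d["value"])
    return time.perf_counter() - start

if __name__ == "__main__":
    print(f"elapsed: {churn():.2f}s")

Numbers will vary a lot by host and workload, but it’s enough to see whether the gap matters for your use case.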
The one where every LLM-generated shell script I read is another deep splinter in my fingernail quick that I have to rip out and destroy because it’s a godfucked mess of bad practices that we can never ever ever ever EVER train out of an LLM at this point.
When the filament goes through the hotend, any moisture in it boils, which makes the filament all bubbly so it doesn’t extrude well.
It’s supposed to help make you better at games by giving you an easy way to practice.
I think a new GPL needs to be created to account for this. Like, “any generative system using this as an input which can ever replicate this code base (even in part), must be bound to this license.” People could then run overfitting analysis to see if they ever get their copyleft code out of the model. If they do, then they have grounds to sue. I’m fine with an LLM being trained on my code, but I want the four freedoms to be preserved if it is.
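The analysis side of that wouldn’t even need to be fancy; the hard part is the license language. As a sketch of what checking for memorization could look like: prompt the model with the start of each file in your repo and measure how close its completion is to the real thing. (generate() is a stand-in for whatever model is being tested, and the project path is made up.)

import difflib
from pathlib import Path

def generate(prompt: str) -> str:
    # Stand-in for the model under test (local weights, hosted API, etc.).
    # Returning "" just keeps the sketch runnable; wire up the real model here.
    return ""

def memorization_score(source_file: Path, prefix_lines: int = 20) -> float:
    # Prompt with the top of the file, then compare the completion against
    # the rest of it. A ratio near 1.0 means the model is reproducing the
    # copyleft code more or less verbatim.
    lines = source_file.read_text().splitlines(keepends=True)
    prompt = "".join(lines[:prefix_lines])
    expected = "".join(lines[prefix_lines:])
    completion = generate(prompt)
    return difflib.SequenceMatcher(None, completion, expected).ratio()

if __name__ == "__main__":
    for path in Path("my_gpl_project").rglob("*.py"):
        print(path, round(memorization_score(path), 3))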
Open source isn’t good enough, I want my software to use a strong copyleft license with no ability to relicense via a CLA (CLAs that don’t grant the ability to relicense software are rare, but acceptable). AGPL for servers, GPL for local software, LGPL for libraries when possible, and Apache, MIT, or BSD ONLY when LGPL doesn’t make sense.
Not listed is the best tool:
# write the image straight to the device; oflag=sync flushes each 128M block, status=progress shows throughput
dd if=path/to/file.iso of=/dev/sd$whatever oflag=sync bs=128M status=progress
I wonder if this suffers from the same energy density issue as most alternatives to pumped hydro systems. It’s REALLY hard to do better than megatons of water pumped 500 meters up a hill.
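The back-of-the-envelope math makes it obvious why: stored energy is just m * g * h, so one megaton of water lifted 500 meters is already in gigawatt-hour territory.

# E = m * g * h, ignoring round-trip losses
mass_kg = 1e9          # one megaton of water = 10^9 kg
g = 9.81               # m/s^2
height_m = 500

joules = mass_kg * g * height_m               # ~4.9e12 J
gwh = joules / 3.6e12                         # 1 GWh = 3.6e12 J
print(f"{gwh:.2f} GWh per megaton at 500 m")  # ~1.36 GWh

Any gravity-based alternative has to move comparable tonnage (or go a lot higher) to store the same energy, which is why the density question keeps coming up.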
I learned to program by shitting out God awful shell scripts that got gently thrashed by senior devs. The only way I’ve ever learned anything is by having a real-world problem that I can solve. You absolutely do NOT need a CS degree to learn software dev or even some of compsci itself, and I agree that tools like Bolt are going to make shit harder. It’s one thing to copy stack overflow code because you have people arguing about it in the comments. You get to hear the pros and cons and it can eventually make sense. It’s something entirely different when an LLM shits out code that it can’t even accurately describe later.
ngl, I do wish it was still used. I remember being like, 4 years old and trying to write a “thank you” card to my grandmother. I spent what felt like an hour going through the alphabet, trying to find the letter that makes the “th” sound. Apparently my mom found me lying on the floor sobbing and repeating the alphabet, which is both funny and sad lol
Many years have passed, but a tiny grain of resentment at the English language remains. The thorn would have prevented that.
I’ll agree that list comprehensions can be a bit annoying to write because your IDE can’t help you until the basic loop is done, but you solve that by just doing
[thing for thing in things]
and then add whatever conditions and attr access/function calls you need.
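For example, filling that skeleton in (the names here are made up):

active_names = [user.name.lower() for user in users if user.is_active]

The skeleton gets you the autocomplete context, and the filter and attribute access drop in afterwards.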