• 0 Posts
  • 27 Comments
Joined 2 years ago
Cake day: June 21st, 2023

  • Banning 4chan for that reason would be valid if they had a law against that to enforce.

    But in the same way you don’t go after someone for tax evasion in a country they’ve never set foot in or interacted with, you don’t fine 4chan for refusing to collect IDs from users when the company isn’t even in your jurisdiction.

    Either way, I can’t imagine people there missing 4chan. They just need to give a valid reason to block it instead of BSing a fine.


  • In this case, it seems like a feature.

    It does make me wonder why not use a bounded channel instead (assuming these tasks are shared between threads, maybe because it’s multi-consumer?) but a deque is more flexible if that flexibility is needed.

    Personally, I can think of a use for this myself. I have a project where I’m queuing audio to play in the background, and using this kind of deque for the queue would work here as well (though I use a bounded channel currently).

    There are also a lot of times when I’ve wanted a stack-allocated data structure to avoid heap allocations, and that’s where something like this could theoretically come in handy.
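    To make the trade-off concrete, here’s a minimal sketch of the two approaches mentioned above, using only the standard library (the actual crate being discussed isn’t named, so this is illustrative, not its API): a bounded channel as a blocking audio queue, and a `Mutex`-wrapped `VecDeque` when multiple consumers need to pop from the same queue.

    ```rust
    use std::collections::VecDeque;
    use std::sync::mpsc::sync_channel;
    use std::sync::{Arc, Mutex};
    use std::thread;

    fn main() {
        // A bounded channel works well as a single-consumer queue:
        // the producer blocks once `cap` items are in flight.
        let (tx, rx) = sync_channel::<&str>(2);
        let player = thread::spawn(move || {
            while let Ok(clip) = rx.recv() {
                println!("playing {clip}");
            }
        });
        for clip in ["intro", "verse", "outro"] {
            tx.send(clip).unwrap();
        }
        drop(tx); // close the channel so the player thread exits
        player.join().unwrap();

        // For multiple consumers, a Mutex-wrapped VecDeque is the simple
        // alternative: any worker can pop from the front.
        let queue = Arc::new(Mutex::new(VecDeque::from(["a", "b", "c"])));
        let workers: Vec<_> = (0..2)
            .map(|_| {
                let q = Arc::clone(&queue);
                thread::spawn(move || loop {
                    // Take the lock only long enough to pop, then release
                    // it before doing any work on the task.
                    let task = q.lock().unwrap().pop_front();
                    match task {
                        Some(t) => println!("handled {t}"),
                        None => break,
                    }
                })
            })
            .collect();
        for w in workers {
            w.join().unwrap();
        }
    }
    ```

    The channel version gives you backpressure for free; the deque version gives you multi-consumer access and operations a channel can’t do (peeking, pushing to the front), at the cost of managing the lock yourself.
    
    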


  • A pretty good way to get a code review is to post the code on GitHub and make a post advertising it as a tool everyone needs. People will be quick to review it.

    As far as LLMs go, they tend to be too eager to please you. It might be better to ask one to generate code that does something similar to what you’re doing, then compare the two to see whether its version does anything better than yours.



  • The only person who can answer whether a tool will be useful to you is you. I understand that you tried and couldn’t use it. Was it useful to you then? Seems like no.

    Broad generalizations like “X is good at Y” can rarely be measured accurately with a useful set of metrics, are rarely studied with sufficiently large sample sizes, and often dismiss the edge cases where someone might find a tool useful (or useless) despite the opposite being generally true in the study.

    And no, I haven’t tried it. It wouldn’t be good at what I need it to do: think for me.



  • This makes me less enthusiastic about local models. I mean, nothing on the internet is inherently secure and the patch came quickly, but local LLMs being hackable in the first place opens a new can of worms.

    Everything downloaded from the internet is hackable. Web browsers are the most notorious targets and regularly have to mitigate exploitable vulnerabilities. What matters is how a project fixes a vulnerability and how it prevents the same class of bug from recurring.

    Personally, when I do run Ollama, it’s always from within a container. I mostly do this because I find it more convenient to run it this way, but it also adds a degree of separation between its running environment and my personal computer. Note that this is not a sandbox (especially since it still uses my GPU and executes code locally), just a small layer of protection.
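    For reference, one way to do this is with the official `ollama/ollama` image; this is a sketch based on its documented Docker usage (the volume name and GPU flag are standard Docker options, adjust to taste):

    ```shell
    # Run Ollama in a container: not a sandbox, but a layer of separation.
    docker run -d \
      --gpus=all \              # pass the GPU through (requires NVIDIA Container Toolkit)
      -v ollama:/root/.ollama \ # persist downloaded models in a named volume
      -p 11434:11434 \          # expose the Ollama API on the usual port
      --name ollama \
      ollama/ollama
    ```

    As noted above, the container still has GPU access and executes model code locally, so treat this as containment of the filesystem and process environment, not full isolation.
    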