• 0 Posts
  • 57 Comments
Joined 2 years ago
Cake day: June 9th, 2023


  • merc@sh.itjust.works to Funny@sh.itjust.works · Call of Daddy · 16 hours ago

    > men who want to get involved with their kids find themselves constantly using female coded gear

    This isn’t an issue with baby carriers. Look at the top results on Amazon. They’re mostly black or grey. Sure, more than 90% of the images with a parental figure show a woman, but the items themselves aren’t “gender coded”.

    Given that, the idea here is that carrying around a baby is itself a gender-coded activity, so men need gender-affirming gear, something that looks like what a soldier might wear, to emphasize that they’re not women. That’s what’s fucking stupid. Just buy the standard black baby carrier. I promise 90% of the world won’t think you’re less of a man because you’re caring for your offspring.



  • merc@sh.itjust.works to Funny@sh.itjust.works · Call of Daddy · 16 hours ago (edited)

    I’m sure there are advantages to the “baby on your chest” design vs. other designs. But, that’s not what people are commenting on. They’re commenting on the tacti-cool suburban ninja elements.

    Like, the loops on the front of the carrier. Real police and soldiers use those to carry items like flashlights, guns, knives, extra magazines, etc:

    [Image: cops with loaded-up vests]

    First of all, hubby at the Wal*Mart doesn’t need quick access to guns or flashlights. He might need quick access to a wet wipe, but I don’t think they make tactical wet-wipe pouches.

    Second of all, the reason that attachment system is useful for body armour is that things are directly on the wearer’s chest. They can look down, see the item they need, and grab it immediately. When the tactical attachment system is on the baby’s back, you can’t look down and see it anymore. You could reach around and fumble for something, but if you’re doing that, why not just put down the tacti-cool shoulder bag and look in it instead?

    Finally, surplus gear is great. This isn’t surplus. It’s imitation military gear. Surplus gear is good because it’s actual military gear designed to hold up in harsh environments. In military gear, form follows function. It’s brown because it’s designed to be decent camouflage in many different environments. Brown isn’t going to help hubby hide in the cereal aisle at Wal*Mart. It has PALS straps because they’re the best way to attach gear and make it quickly accessible. As I pointed out above, fumbling around behind the baby’s back for something doesn’t serve that same function. The surplus gear is also reasonably durable because soldiers wear it while doing heavy physical activity in harsh environments.

    I would imagine that your bog-standard baby carrier is actually going to be reasonably durable for its normal intended use of lugging a baby around. That’s what people buy it for, and if it doesn’t hold up, people will buy something else. With the size of the straps, the padding, etc. on a standard baby carrier, form follows function. But, this tacti-cool baby gear is probably not durable. The manufacturers know that people buying it are buying form over function, so the emphasis won’t be on durability but on making it look visually similar to army gear. It’s not military surplus, it’s Hot Topic imitation army gear.




  • > How do humans answer a question? I would argue, for many, the answer for most topics would be "I am repeating what I was taught/learned/read."

    Even children aren’t expected to just repeat verbatim what they were taught. When kids are being taught verbs, they’re shown the pattern: “I run, you run, he runs; I eat, you eat, he eats.” They’re told that there’s a pattern: the “he/she/it” version has an “s” at the end. They now understand some of how verbs work in English, and can try to apply that pattern. But, even when it’s spotting a pattern and applying the right rule, there’s still an element of understanding involved. You have to recognize that this is a “verb” situation, and that you should apply that bit about “add an ‘s’ if it’s he/she/it”.
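
    As a toy sketch of what applying that rule looks like, with a made-up verb thrown in to show the rule generalizing to words never seen before (simplified: real English has irregulars like “goes” and “watches”):

    ```rust
    // One learned rule, applied even to a verb the "learner" has never seen.
    fn third_person_singular(verb: &str) -> String {
        format!("{}s", verb)
    }

    fn main() {
        for verb in ["run", "eat", "blorf"] {
            // "blorf" is made up; a rule-user conjugates it anyway.
            println!("he {}", third_person_singular(verb));
        }
    }
    ```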

    An LLM, by contrast, never learns any rules. Instead it ingests every single verb that has ever been recorded in English, and builds up a probability table for what comes next.
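
    In miniature, that table-building looks something like the bigram counter below. (A deliberately crude sketch: real LLMs learn weights over long contexts rather than a literal table, but the “frequency in, likely next token out” flavor is the same.)

    ```rust
    use std::collections::HashMap;

    fn main() {
        let corpus = "i run you run he runs i eat you eat he eats";
        let words: Vec<&str> = corpus.split_whitespace().collect();

        // counts[(previous word, next word)] = occurrences in the corpus
        let mut counts: HashMap<(&str, &str), u32> = HashMap::new();
        for pair in words.windows(2) {
            *counts.entry((pair[0], pair[1])).or_insert(0) += 1;
        }

        // The most frequent word following "he" -- no rule, just counting.
        let next = counts
            .iter()
            .filter(|((prev, _), _)| *prev == "he")
            .max_by_key(|(_, n)| **n)
            .map(|((_, w), _)| *w);
        println!("after \"he\": {:?}", next); // ties broken arbitrarily: "runs" or "eats"
    }
    ```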

    > but most people are not taught WHY 2+2=4

    Everybody is taught why 2+2=4. They normally use apples. They say, “If I have 2 apples and John has 2 apples, how many apples are there in total?” It’s not simply memorizing that when you see the token “2” followed by “+”, then “2”, then “=”, the next likely token is a “4”.

    If you watch little kids doing that kind of math, you can see that they understand what’s happening, because they’re often counting on their fingers. That signals a level of understanding that’s different from simple pattern matching.
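
    The finger-counting version of addition looks like this sketch: an operation you understand and can apply to any numbers, rather than an answer you’ve memorized:

    ```rust
    // Addition built from an understood operation: count up once per apple.
    fn add_by_counting(a: u32, b: u32) -> u32 {
        let mut total = a;
        for _ in 0..b {
            total += 1; // raise one more finger
        }
        total
    }

    fn main() {
        println!("{}", add_by_counting(2, 2));   // 4, derived by counting
        println!("{}", add_by_counting(17, 25)); // works for sums never memorized
    }
    ```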

    Sure, there’s a lot of pattern matching in the way human brains work too. But, fundamentally there’s also at least some amount of “understanding”. One example where humans do pattern matching is idioms. A lot of people just repeat the idiom without understanding what it really means. But, they do it in order to convey a message. They don’t do it just because it sounds like it’s the most likely thing that will be said next in the current conversation.



  • From what I understand, it’s using an LLM for coding, but taken to an extreme. A regular programmer might use an LLM to help them with something, but they’ll read through the code the LLM produces, make sure they understand it, tweak it wherever necessary, etc. A vibe coder might not even be a programmer. They just get the LLM to generate some code and run it to see if it does what they want. If it doesn’t, they talk to the LLM some more and generate more code. At no point do they actually read through the code and try to understand it.


  • Tests are probably both the best and worst things to use LLMs for.

    They’re the best because of all the boilerplate. Unit tests tend to have so much of it: setting things up and tearing them down. You want that to be as consistent as possible, so that someone looking at a test immediately understands what they’re seeing.

    OTOH, tests are also where you figure out how to attack your code from multiple angles. You really need to understand your code to think of all the ways it could fail. LLMs don’t understand anything, so I’d never trust one to come up with a good set of things to test.
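
    A sketch of both halves, using a hypothetical parse_port as the code under test. The scaffolding is the boilerplate an LLM is fine at; the asserts are the “how could this fail?” part that takes actually understanding the code:

    ```rust
    fn parse_port(s: &str) -> Option<u16> {
        s.trim().parse().ok()
    }

    #[cfg(test)]
    mod tests {
        use super::*;

        #[test]
        fn port_edge_cases() {
            // The happy path: the easy, boilerplate-shaped case.
            assert_eq!(parse_port("8080"), Some(8080));
            // Attack angles: each one comes from knowing how the code could fail.
            assert_eq!(parse_port(""), None);           // empty input
            assert_eq!(parse_port("65536"), None);      // one past u16::MAX
            assert_eq!(parse_port("-1"), None);         // negative
            assert_eq!(parse_port(" 443 "), Some(443)); // whitespace is trimmed
        }
    }
    ```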


  • Also, LLMs are essentially designed to produce code that will pass a code review. Their output is designed to look as realistic as possible. So, not only do you have to look through the code for flaws, any error is basically “camouflaged”.

    With a junior dev, sometimes their lack of experience is visible in the code. You can tell what to look at more closely based on where it looks like they’re out of their comfort zone. Whereas an LLM is always 100% in its comfort zone, but has no clue what it’s actually doing.
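
    A contrived example of that camouflage: the snippet below compiles, reads cleanly, and is wrong, and nothing about its style tells you where to look harder:

    ```rust
    // Meant to sum the first n items, but the exclusive range drops one.
    fn sum_first_n(xs: &[i64], n: usize) -> i64 {
        xs[..n - 1].iter().sum() // should be xs[..n]
    }

    fn main() {
        // A reviewer skimming this expects 1 + 2 + 3 = 6; it prints 3.
        println!("{}", sum_first_n(&[1, 2, 3, 4], 3));
    }
    ```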


  • I think storyboards are a great example of how it could be used properly.

    Storyboards are a great way for someone to communicate “this is how I want it to look” in a rough way. But, a storyboard will never show up in the final movie (except maybe as fun clips during the credits or something). It’s something that helps you on your way, but along the way 100% of it is replaced.

    Similarly, the way I think of generative AI is that it’s basically a really good props department.

    In the past, if a props / graphics / FX department had to generate some text on a computer screen that looked like someone was Hacking the Planet, they’d need to come up with something that looked completely realistic. It would either be hand-crafted, or they’d just grab some open-source file and spew it out on the screen. What generative AI does is digest vast amounts of data so it can come up with something that looks realistic for the prompt it was given. For something like a hacking scene, an LLM can probably generate something that’s actually much better than what the humans would make, given the time and effort required. A hacking scene that a computer security professional would find realistic is normally way beyond the required scope, but an LLM can probably produce one that’s plausible even to a professional, because of what it has been trained on.

    But, it’s still a prop. If there are any IP addresses or email addresses in the LLM-generated output, they may or may not work. And, for a movie prop, it might actually be worse if they do work.

    When you’re asking an AI something like “What does a selection sort algorithm look like in Rust?”, what you’re really doing is asking “What does a realistic answer to that question look like?” You’re basically asking for a prop.
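
    For comparison, here’s what an actual, working answer to that particular question looks like. The point stands: a generated answer may be this, or may merely look like this.

    ```rust
    // Selection sort: repeatedly swap the smallest remaining element into place.
    fn selection_sort<T: Ord>(items: &mut [T]) {
        for i in 0..items.len() {
            let mut min = i;
            for j in (i + 1)..items.len() {
                if items[j] < items[min] {
                    min = j;
                }
            }
            items.swap(i, min);
        }
    }

    fn main() {
        let mut v = [3, 1, 4, 1, 5, 9, 2, 6];
        selection_sort(&mut v);
        assert_eq!(v, [1, 1, 2, 3, 4, 5, 6, 9]);
    }
    ```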

    Now, some props can be extremely realistic looking. Think of the cockpit of an airplane in a serious aviation drama. The props people will probably either build a very realistic cockpit, or maybe even buy one from a junkyard and fix it up. The prop will be realistic enough that even a pilot will look at it and say that it’s correctly laid out and accurate. Similarly, if you ask an LLM to produce code for you, sometimes it will give you something that is realistic enough that it actually works.

    Having said that, fundamentally, there’s a difference between “What is the answer to this question?” and “What would a realistic answer to this question look like?” And that’s the fundamental flaw of LLMs. Answering a question requires understanding the question. Simulating an answer just requires pattern matching.



  • > the cars can self drive without a doubt

    So can my sister’s car, for a few seconds, if you put the cruise control on. But it can’t self-drive safely, and neither can a Tesla. The difference is that my sister’s car doesn’t advertise the ability to self-drive, while Musk pretends Teslas can, which is extremely dangerous. He’s killing people by muddying the waters and pretending his cars can self-drive safely.


  • Self-driving tech is cool. Tesla’s take on self-driving is not cool, because it’s not effective.

    > Telsa can drive autonomously through the street using only cameras.

    Sorta… a bit… in a way that will lead to an accident sooner or later. If they put LIDAR on their cars, it would be far more effective, but Musk wants to be different. He insists on cameras even though you can’t safely do self-driving with cameras alone. Typical Musk, cutting corners and lying.