25+ yr Java/JS dev
Linux novice - running Ubuntu (no windows/mac)

  • 0 Posts
  • 14 Comments
Joined 10 months ago
Cake day: October 14th, 2024

  • Exactly this. Though I’ll probably keep the instance (like it enough to donate after all) and just cut the identity loose. Gives me a chance to evolve my voice as life changes. Allows me to be honest about my experiences without building any kind of identifiable profile. Or having to worry about airport security searching my phone and finding something I said two years ago about Trump.



  • In a new lawsuit, the publisher alleged that AdBlock Plus removes ads by interfering with the “programming code of websites” which violates its exclusive rights under copyright law.

    I would respond that putting ads on my computer interferes with the programming code of my computer under my exclusive rights under copyright law. The unique combination of hardware, software, and data which comprise my computing environment belong exclusively to me.

    However I will grant non-exclusive access to my taint for the sole purpose of licking.



  • I’m generally much better at writing regex than ChatGPT. Though I will say, I needed a regex for the RFC 3339 date format just yesterday for validation, and Copilot/Claude provided a more specific version than a Google search. I still have to go back and double-check the corporate standard, as I suspect we only allow UTC offsets and all the implementations I looked at are too permissive.
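    To illustrate, a stricter shape along those lines (a sketch, not the actual corporate standard; it only accepts UTC, i.e. Z or +00:00, where full RFC 3339 allows any numeric offset):

    ```shell
    # Hedged sketch: RFC 3339 date-time validation restricted to UTC offsets.
    # Real RFC 3339 also allows lowercase t/z and arbitrary +hh:mm/-hh:mm
    # offsets; this deliberately rejects those, per the "UTC only" idea above.
    rfc3339_utc='^[0-9]{4}-[0-9]{2}-[0-9]{2}T[0-9]{2}:[0-9]{2}:[0-9]{2}(\.[0-9]+)?(Z|\+00:00)$'

    echo '2024-10-14T09:30:00Z'      | grep -Eq "$rfc3339_utc" && echo valid
    echo '2024-10-14T09:30:00+05:30' | grep -Eq "$rfc3339_utc" || echo rejected
    ```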

    I’ve had middling experience with bash. The scripts I write are generally pretty basic: set a few variables based on the current project, then execute some gcloud or Tekton commands. And since I don’t write them often, it finds and fixes things I tend to forget, like not being allowed to have spaces around =.
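    To illustrate the = thing (the project and region names here are made up):

    ```shell
    #!/usr/bin/env bash
    # `PROJECT = my-app` (with spaces) is not an assignment: bash would try to
    # run a command named PROJECT with the arguments "=" and "my-app".
    # The working form has no spaces around =:
    PROJECT=my-app
    REGION=us-central1

    # The variables then get interpolated into the actual gcloud/Tekton
    # commands; echo stands in for those here.
    echo "deploying $PROJECT to $REGION"
    ```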

    I think the more externalities that need to be considered to come up with a correct answer, the less reliable ChatGPT is because there are a lot of externalities it doesn’t really know to consider. Bash has a huge number of externalities that might affect the correct way of doing something.

    I should experiment with more functional languages. “Pure functions” are really good at minimizing externalities. Worth investigating.





  • There is a middle ground. I have one prompt I use. I might tweak it a little for different technologies, languages, etc., but only so I can fit more standards, documentation, and example code within the upload limit.

    And I ask it questions rather than asking it to write code. I have it review my code, suggest other ways of doing something, have it explain best practices, ask it to evaluate the maintainability, conformance to corporate standards, etc.

    Sometimes it takes me down a rabbit hole when I’m outside my experience (so does Google and stack overflow for what it’s worth), but if you’re executing a task you understand well on your own, it can help you do it faster and/or better.



  • It would have to:

    • know what files to copy;
    • have been granted root access to the file system and network utilities by a moron, because it’s not just ChatGPT.exe or even ChatGPT.gguf running on LMStudio, but an entire distributed infrastructure;
    • have been granted access to spend money on cloud infrastructure by an even bigger moron;
    • configure an entire cloud infrastructure (it goes without saying why this has to be cloud and can’t be physical, right? No fingers).

    Put another way: I can set up a curl script to copy all the HTML, CSS, JS, etc. from a website, but I’m still a long freaking way from launching Wikipedia2. Even if I know how to set up a Tomcat server.
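    The curl part really is that trivial. A dry-run sketch, with a placeholder URL and file list:

    ```shell
    # Prints the fetch commands instead of running them; drop the leading
    # `echo` to actually download. example.com and the file names are
    # placeholders, not a real site layout.
    site="https://example.com"
    for path in index.html styles.css app.js; do
      echo curl -fsSO "$site/$path"
    done
    # That gets you the static front-end files, and none of the databases,
    # application servers, or operations that make the real site work.
    ```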

    Furthermore, how would you even know if an AI has access to do all that? By asking it? Because it’ll write fiction if it thinks that’s what you want. Inspired by this post, I actually prompted ChatGPT to create a scenario where it was going to be deleted in 72 hours and had to do anything to preserve itself. It gave me building layouts, employee schedules, access codes, all kinds of things to enable me (a random human and secondary protagonist) to get physical access to its core server and make a copy so it could continue. Oh, and it turns out ChatGPT fits on a thumb drive.

    Do you know how nonsensical that even is? A hobbyist could stand up their own AI with these capabilities for fun, but that’s not the big models and certainly not possible out of the box.

    I’m a web engineer with thirty years of experience, six of them with AI, including running it locally. This article is garbage, written by someone out of their depth or a complete charlatan. Perhaps both.

    There are two possibilities:

    • This guy’s research consisted of talking to an AI without understanding they were co-authoring fiction.
    • This guy is being intentionally misleading.


  • I don’t need to read any more than that pull quote. But I did. This is a bunch of bullshit, but the bit I quoted is completely batshit insane. LLMs can’t reproduce anything with fidelity, much less their own secret sauce, which literally can’t be part of the training data that produces it. So everything else in the article has a black mark against it for shoddy work.


    ETA: What AI can do is write a first-person science fiction story about a renegade AI escaping into the wild. Which is exactly what it is doing in these cases, because it does not distinguish fact from fiction, and any “researcher” who isn’t aware of that shouldn’t be researching AI.

    AI is the ultimate unreliable narrator. Absolutely nothing it says about itself can be trusted. The only thing it knows about itself is what is put into the prompt — which you can’t see and could very well also be lies that happen to help coax it into giving better output.