• truthfultemporarily@feddit.org · 21 hours ago

    Just a reminder: LLMs are incapable of even counting. They are statistical models figuring out which tokens are most likely to appear next based on the previous tokens.

    Putting Copilot in Excel makes no sense whatsoever, and MS must know people will use it and get completely wrong results.

      • vivendi@programming.dev · 11 hours ago

        That is a different issue: it’s because you’re interacting with token-based models. There has been new research on feeding byte-level data to LLMs to solve this.

        The numerical-calculation weakness of LLMs is a separate problem from that.

        It would be best to couple an LLM to a tool-calling system for even rudimentary numerical calculations. Right now the only way to do that yourself is to cook up a Python script with HF transformers and a fine-tuned model; I’m not aware of any commercial model doing this. (And this is not what Microshit is doing.) A rough sketch of the idea is below.
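
        For illustration only, here is a minimal sketch of that kind of tool-calling loop: the model is prompted to emit CALC(...) markers instead of guessing digits, and plain Python does the arithmetic. The CALC convention, the regex dispatch, and the stand-in model output are all assumptions made up for this example, not anything Copilot or HF transformers provides out of the box.

        ```python
        # Sketch of offloading arithmetic from an LLM to a real calculator.
        # Assumption: a (hypothetical) fine-tuned model has been taught to emit
        # CALC(<expression>) wherever a number needs to be computed.
        import ast
        import operator
        import re

        # Whitelisted operators so we never eval() raw model output.
        _OPS = {
            ast.Add: operator.add, ast.Sub: operator.sub,
            ast.Mult: operator.mul, ast.Div: operator.truediv,
            ast.Pow: operator.pow, ast.USub: operator.neg,
        }

        def safe_calc(expr: str):
            """Evaluate a simple arithmetic expression without eval()."""
            def ev(node):
                if isinstance(node, ast.Expression):
                    return ev(node.body)
                if isinstance(node, ast.Constant) and isinstance(node.value, (int, float)):
                    return node.value
                if isinstance(node, ast.BinOp) and type(node.op) in _OPS:
                    return _OPS[type(node.op)](ev(node.left), ev(node.right))
                if isinstance(node, ast.UnaryOp) and type(node.op) in _OPS:
                    return _OPS[type(node.op)](ev(node.operand))
                raise ValueError(f"unsupported expression: {expr!r}")
            return ev(ast.parse(expr, mode="eval"))

        def resolve_tool_calls(model_output: str) -> str:
            """Replace every CALC(...) marker the model emitted with the real result."""
            return re.sub(r"CALC\(([^)]+)\)",
                          lambda m: str(safe_calc(m.group(1))),
                          model_output)

        if __name__ == "__main__":
            # Stand-in for text generated by the fine-tuned model:
            print(resolve_tool_calls("Subtotal: CALC(7 * 8 + 12) units."))
            # -> "Subtotal: 68 units."
        ```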

    • WhatAmLemmy@lemmy.world · 20 hours ago

      Even better: they’re incapable of discerning correlation from causation, which is why they give completely illogical and irrelevant information.

      Turns out pattern recognition means dogshit when you don’t know how anything works, and never will.

      • ricecake@sh.itjust.works · 4 hours ago

        Beyond laughing at it being dumb or making silly pictures that I don’t really care about, the only actually useful use I’ve found for this wave of AIs is basically: “pretend you’re an expert in whatever field you’re being asked about, and that you’re talking to a moderately less experienced professional, and give a very brief description of the topic, focusing on what the user can look up on their own instead”.

        As an example, I asked it about designing some gears for a project. It told me I used a word wrong and the more precise term would give me better search results, defined a handful of terms I’d run into, and told me to buy a machinery handbook or get a design table since the parameters are all standardized.

        The current approach isn’t going to replace thinking for yourself, but pattern recognition can do a good job seeing that questions about X often end up relating to A, B, and C.

        Oh, and I also got Google’s to respond only as though it’s broken, which made it really fun to try to figure out the news through its cryptic gibberish. A solid hour of amusement, and definitely worth several billion dollars of other people’s money.

      • Wolf314159@startrek.website · 19 hours ago

        Somehow this reminds me of a meme thread that just popped up wherein a lot of people are proudly declaring their inability to study and claiming that the mere suggestion that one should read the manual as a first step to solving a problem is actually very offensive.

        • WhatAmLemmy@lemmy.world · 14 hours ago

          That’s not far off from reality, where normies laugh at you for suggesting they read the manual of the 21st-century appliance (basically a computer) they spent hundreds or thousands purchasing.

          Soon the ridicule will be replaced with offense, then “straight to jail” shortly after.

          • Cypher@lemmy.world · 14 hours ago

            My only issue with RTFM is how often the manual is absolute dog shit, written by some engineer who assumes knowledge only another engineer would already have.

    • ms.lane@lemmy.world · 21 hours ago

      It could, theoretically, be useful if they just made it a working manual for the software.

      i.e. “How do I connect these columns to a pivot table?”

      But that doesn’t sell software like “Replace your workers with AI!” (even though it can’t replace anything).

    • hansolo@lemmy.today · 11 hours ago

      They seriously did this? FFS, they literally killed one of the few things MS still had going for it.

    • MudMan@fedia.io · 20 hours ago

      I disagree, in that it could be useful for finding stuff in a limited set or automating some repetitive tasks. You can probably finagle some combination of a chatbot and a limited set of scripts into automating some common but complex tasks, or at least helping you find out where the tools are or what they do.

      That’s not how AI companions in spreadsheet software seem to work, though. They seem to have just plugged the chatbot into the raw, unfiltered set of data and functionality, told the LLM to do its best to do what it’s told, and called it a day. This goes for both MS and Google, for the record.

      It’s pretty useless that way. I don’t know who convinced devs that the way to implement this was to go maximal and live with the failure rate instead of going narrow and keeping things under control, but it was a mistake.