A screenshot of this question was making the rounds last week, but this article covers testing against all the well-known models out there.

Also includes outtakes on the ‘reasoning’ models.

  • rumba@lemmy.zip · 1 day ago

    They probably added a system guardrail as soon as they heard about this test. It’s been going around for a while now :)

    • melfie@lemy.lol · edited · 3 hours ago

      Yeah, it’s probably in the system prompt for now until the next round of training.

    • merc@sh.itjust.works · 20 hours ago

      I’m pretty sure Google’s AI is fed by the same spider that goes out and finds every new or changed web page (or a variant of that).

      As soon as someone writes an article about how AI gets something wrong and provides a solution, that solution is now in the AI’s training data.

      OTOH, that means it’s probably also ingesting a lot of AI generated slop, which causes its own set of problems.

    • imetators@lemmy.dbzer0.com · 1 day ago

      The article mentions that Gemini 2.0 Flash Lite, Gemini 3 Flash and Gemini 3 Pro have passed the test. All three also got it right 10 out of 10 times. Even Gemini 2.5 shares the highest score in the “below 6 right answers” category. Guess Gemini is the closest to “intelligence” of the bunch.

      • timestatic@feddit.org · 22 hours ago

        I mean, if they fix specific reasoning test answers (like the strawberry one), this doesn’t actually make reasoning better though. It just optimizes for the benchmark.
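
        For context, the “strawberry” test just asks the model how many times a letter appears in a word, which is trivial for ordinary code (a sketch of the check itself, not of anything any model does internally):

        ```python
        # Count how often a letter appears in a word, case-insensitively --
        # the check behind the "how many r's in strawberry" test.
        def count_letter(word: str, letter: str) -> int:
            return word.lower().count(letter.lower())

        print(count_letter("strawberry", "r"))  # 3
        ```

        The point of the test is that tokenization makes this harder for an LLM than for five lines of Python, so a memorized answer proves nothing about reasoning.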