A screenshot of this question was making the rounds last week, but this article covers testing against all the well-known models out there.

Also includes outtakes on the ‘reasoning’ models.

  • Fmstrat@lemmy.world · 17 hours ago

    This is adjustable via temperature. It's set higher on chatbots, making the answers more random. It's set lower on code assistants to make things more deterministic.
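    A minimal sketch of how temperature sampling works, assuming a made-up set of token logits (this is the standard softmax-with-temperature formulation, not any particular model's implementation):

    ```python
    import math
    import random

    def sample_with_temperature(logits, temperature):
        # Divide logits by temperature: low T sharpens the distribution
        # (near-deterministic), high T flattens it (more random).
        scaled = [l / temperature for l in logits]
        # Softmax with max-subtraction for numerical stability.
        m = max(scaled)
        exps = [math.exp(s - m) for s in scaled]
        total = sum(exps)
        probs = [e / total for e in exps]
        # Sample a token index from the resulting distribution.
        return random.choices(range(len(probs)), weights=probs, k=1)[0]

    # Hypothetical logits for three candidate tokens.
    logits = [2.0, 1.0, 0.1]
    # At a very low temperature, the argmax token wins almost surely.
    print(sample_with_temperature(logits, 0.01))
    ```

    At temperature 0.01 the first token is chosen essentially every time; raise it to 2.0 and the three tokens are picked with much more even odds.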

    • [deleted]@piefed.world · 15 hours ago

      Changing the amount of randomness still results in enough randomness to be random.