Yes, this is a recipe for extremely slow inference: I’m running a 2013 Mac Pro with 128 GB of RAM. I’m not optimizing for speed, I’m optimizing for aesthetics and intelligence :)

Anyway, what model would you recommend? I’m looking for something general-purpose but with solid programming skills. Ideally abliterated as well; I’m running this locally, I might as well have all the freedoms. Thanks for the tips!

  • trave@lemmy.sdf.org (OP) · 6 days ago

    update: I tried GLM 4.5 Air and it was awesome until I remembered how censored it is by the Chinese government. Which I guess is fine if I’m just coding, but on principle I didn’t like running a model that will refuse to talk about things China doesn’t like. I tried Dolphin-Mistral-24B, which will answer anything but isn’t particularly smart.

    So I’m trying out gpt-oss-120b, which was running at an amazing 5.21 t/s, but the reasoning output was broken, and it seems the way to fix it was to switch from the llamacpp Python wrapper to pure llamacpp…

    …which I did, and it fixed the reasoning output… but now I only get 0.61 t/s :|

    anyway, I’m on my journey :) thanks y’all

    • Sims@lemmy.ml · 2 days ago

      ? They are just trying to protect themselves from western propaganda, just as all nations should. It’s working great. Everything you’ve heard about China comes from the western propaganda apparatus, and I doubt you have discovered how insanely polluted the western information sphere is: amongst other things, propaganda towards enemies of the US plutocracy. All models are trained on that nonsense, and you can’t say “Hi” to a western model without being influenced by western ideological pollution/propaganda…

    • humanspiral@lemmy.ca · 5 days ago

      > on principle I didn’t like running a model that will refuse to talk about things China doesn’t like.

      A good way to define a political issue is that there are at least two sides to a narrative. You can’t use an LLM to decide which side to favour if you can’t really use Wikipedia either. It takes deep expertise and an open mind to determine which side is more likely to contain more truth.

      You may or may not seek confirmation of your political views, but media you like should do so more than an LLM; arguably, a better LLM is one that avoids confirming or denying your views.

  • pebbles@sh.itjust.works · edited · 8 days ago

    With 128 GB of RAM on a Mac, GLM 4.5 Air is going to be one of your best options. You could run it anywhere from Q5 to Q8 depending on how you wanna manage your speed-to-quality ratio.

    I have a different system that likely runs it slower than yours will, and I get 5 t/s generation, which is just about the speed I read at (using Q8).

    I do hear that ollama may be having issues with that model though, so you may have to wait for an update to it.

    I use llamacpp and llama-swap with openwebui, so if you want any tips on switching over I’d be happy to help. Llamacpp is usually one of the first projects to start supporting new models when they come out.

    Edit: just reread your post. I was thinking it was a newer Mac lol. This may be a slow model for you, but I do think it’ll be one of the best you can run.

    • trave@lemmy.sdf.org (OP) · 8 days ago

      oh I didn’t realize I could use llamacpp with openwebui. I recall reading something about how ollama was becoming less FOSS, so I’m inclined to use llamacpp. Plus I want to be able to more easily use sharded GGUFs. Do you have a guide for setting up llamacpp with openwebui?

      I somehow hadn’t heard of GLM 4.5 Air, I’ll take a look thanks!

      • pebbles@sh.itjust.works · 7 days ago

        Yeah, setting up openwebui with llamacpp is pretty easy. I would start by cloning llamacpp from GitHub and then following the short build guide linked in the README. I don’t have a Mac, but I’ve found building it to be pretty simple; just one or two commands for me.

        Once it’s built, just run llama-server with the right flags telling it which model to load. I think it can take Hugging Face links, but I always just download GGUF files. There’s good documentation for llama-server in the README. You also specify a port when you run llama-server.

        Then you just add http://127.0.0.1:PORT_YOU_CHOSE/v1 as one of your openai api connections in the openwebui admin panel.
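        To sketch those steps (the repo URL is the real llama.cpp repository; the model path, filename, and port below are placeholders you’d swap for your own):

```shell
# Clone and build llama.cpp (build steps per the project README)
git clone https://github.com/ggml-org/llama.cpp
cd llama.cpp
cmake -B build
cmake --build build --config Release

# Serve a local GGUF file on an OpenAI-compatible endpoint.
# The model path and port here are placeholders.
./build/bin/llama-server -m ~/models/your-model.gguf --port 8080
```

        With that running, the URL you add in the openwebui admin panel would use the same port, e.g. http://127.0.0.1:8080/v1.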

        Separately, if you want to be able to swap models on the fly, you can add llama-swap into the mix. I’d look into it after you get llamacpp running and are somewhat comfy with it. Coming from ollama, you’ll absolutely want it though. At this point it’s a full replacement IMO.
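        For the flavour of it, here’s a minimal llama-swap config sketch: it’s a small proxy that launches llama-server on demand per model. This is my rough reading of the llama-swap README (double-check the exact key names and the ${PORT} macro there; model names and paths are placeholders):

```yaml
# config.yaml for llama-swap; model names and paths are placeholders
models:
  "glm-4.5-air":
    cmd: >
      /path/to/llama-server
      -m /models/GLM-4.5-Air-Q5_K_M.gguf
      --port ${PORT}
  "dolphin-mistral-24b":
    cmd: >
      /path/to/llama-server
      -m /models/dolphin-mistral-24b.gguf
      --port ${PORT}
```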

      • trave@lemmy.sdf.org (OP) · 8 days ago

        some coding yeah but also want one that’s just good ‘general purpose’ chat.

        Not sure how much context… from what I’ve heard, models kinda break down at super large contexts anyway? Though I’d love to have as large a functional context as possible. I guess it’s somewhat a tradeoff in RAM usage, as the context all gets loaded into memory?
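        On the RAM tradeoff: the main per-context cost is the KV cache, which grows linearly with context length. A rough back-of-envelope sketch in Python (the layer/head/dim numbers below are made-up placeholders, not any specific model):

```python
# Rough KV-cache memory estimate for a transformer model.
# The architecture numbers used below are illustrative placeholders.

def kv_cache_bytes(n_layers: int, n_kv_heads: int, head_dim: int,
                   ctx_len: int, bytes_per_elem: int = 2) -> int:
    """Keys + values: 2 tensors per layer, each [n_kv_heads, ctx_len, head_dim]."""
    return 2 * n_layers * n_kv_heads * head_dim * ctx_len * bytes_per_elem

# Hypothetical mid-size model: 48 layers, 8 KV heads, head_dim 128, fp16 cache
gb = kv_cache_bytes(48, 8, 128, 32768) / 1e9
print(f"{gb:.1f} GB")  # → 6.4 GB at a 32k context
```

        So yes: context memory sits on top of the weights and scales roughly linearly, which is why a smaller context window (or a quantized KV cache) helps on a fixed-RAM machine.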

        • Womble@piefed.world · 8 days ago

          If you really don’t care about speed (as in, ask a question and come back half an hour later don’t care), you could try a 3-bit quantization of Qwen3 thinking. That’s around 100 GB, so you could fit it in memory and still have enough left over for the OS. But I’m not kidding about coming back an hour later (or even longer) for your response; that’s a very big model for a decade-old computer.

        • mierdabird@lemmy.dbzer0.com · 8 days ago

          Qwen 3 Coder is the current top dog for coding afaik. There’s a 30B size and something bigger, but I can’t remember what because I have no hope of running it lol. But I think the larger models have up to a million-token context window.