Yes, this is a recipe for extremely slow inference: I'm running a 2013 Mac Pro with 128GB of RAM. I'm not optimizing for speed, I'm optimizing for aesthetics and intelligence :)
Anyway, what model would you recommend? I'm looking for something general-purpose but with solid programming skills. Ideally abliterated as well; since I'm running this locally I might as well have all the freedoms. Thanks for the tips!
Oh, I didn't realize I could use llama.cpp with Open WebUI. I recall reading something about how Ollama was somehow becoming less FOSS, so I'm inclined to use llama.cpp. Plus I want to be able to more easily use sharded GGUFs. Do you have a guide for setting up llama.cpp with Open WebUI?
I somehow hadn't heard of GLM 4.5 Air, I'll take a look, thanks!
Yeah, setting up Open WebUI with llama.cpp is pretty easy. I would start by cloning llama.cpp from GitHub and then following the short build guide linked in the README. I don't have a Mac, but I've found building it to be pretty simple, just one or two commands for me.
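For reference, the CMake build from the llama.cpp README looks roughly like this (on a Mac the Metal backend should be enabled by default, so no extra flags needed):

```
# Grab the source and build with CMake (per the llama.cpp README)
git clone https://github.com/ggml-org/llama.cpp
cd llama.cpp
cmake -B build
cmake --build build --config Release
# Binaries like llama-server end up in build/bin/
```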
Once it's built, just run llama-server with the right flags telling it which model to load. I think it can take Hugging Face links too, but I always just download GGUF files myself. They have good documentation for llama-server in the README. You also specify a port when you run llama-server.
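To give you an idea, launching it looks something like this. The model path is just a placeholder, and -c (context size) is optional tuning:

```
# Serve a local GGUF on port 8080 (model path is a placeholder)
./build/bin/llama-server -m ~/models/your-model.gguf --port 8080 -c 8192

# Or pull a model straight from Hugging Face with -hf, e.g.:
./build/bin/llama-server -hf ggml-org/gemma-3-1b-it-GGUF --port 8080
```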
Then you just add http://127.0.0.1:PORT_YOU_CHOSE/v1 as one of your OpenAI API connections in the Open WebUI admin panel.
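If you want to sanity-check the server before touching Open WebUI, you can hit the OpenAI-compatible endpoint directly (using the example port from above):

```
# Should list the loaded model if the server is up
curl http://127.0.0.1:8080/v1/models
```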
Separately, if you want to be able to swap models on the fly, you can add llama-swap into the mix. I'd look into this after you get llama.cpp running and are somewhat comfy with it. Coming from Ollama, though, you'll absolutely want it. At this point it's a full replacement, IMO.
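The gist of llama-swap is a YAML config mapping model names to the llama-server command that serves each one; it then acts as an OpenAI-compatible proxy that starts and stops the right server on demand. From memory the config looks roughly like this, but the model names and paths here are made up, so double-check the exact keys against the llama-swap README:

```yaml
# llama-swap config sketch: model names map to server commands,
# and ${PORT} is filled in by llama-swap at launch time
models:
  "glm-4.5-air":
    cmd: llama-server --port ${PORT} -m /path/to/GLM-4.5-Air.gguf
  "qwen-coder":
    cmd: llama-server --port ${PORT} -m /path/to/qwen-coder.gguf
```

Then you point Open WebUI at llama-swap's port instead of llama-server's, and it routes each request to whichever model you pick.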
What happened to Ollama? Did it get bought? Is it turning proprietary?