Which Local AI App Is Best?

Currently running LM Studio.

Looking at Unsloth models because they reduce memory consumption. Right now I'm downloading the largest model I can fit on my machine with partial offload to the GPU.
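For the curious, this is roughly the "will it fit" math I'm doing in my head. Just a back-of-envelope sketch with made-up example numbers, assuming equal-sized layers and arbitrary reserve margins; it's not anything LM Studio or Unsloth actually computes:

```python
# Rough back-of-envelope check for whether a quantized model fits with partial GPU offload.
# All numbers below are illustrative assumptions, not real tooling output.

def fits_with_offload(model_gb: float, n_layers: int, vram_gb: float, ram_gb: float,
                      vram_reserve_gb: float = 1.5, ram_reserve_gb: float = 4.0) -> tuple[int, bool]:
    """Estimate how many layers fit on the GPU and whether the remainder fits in system RAM."""
    per_layer_gb = model_gb / n_layers                    # crude: assume all layers are equal-sized
    gpu_layers = int((vram_gb - vram_reserve_gb) / per_layer_gb)
    gpu_layers = max(0, min(gpu_layers, n_layers))
    cpu_gb = (n_layers - gpu_layers) * per_layer_gb       # what spills over to system RAM
    return gpu_layers, cpu_gb <= (ram_gb - ram_reserve_gb)

# Example: a ~40 GB quantized model with 80 layers on a 24 GB GPU + 64 GB RAM box
print(fits_with_offload(model_gb=40, n_layers=80, vram_gb=24, ram_gb=64))  # -> (45, True)
```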

Who knew we needed boatloads of RAM… On my next PC build, I will probably do my best to max out the RAM as well :rofl:


OK, so after a bit of tinkering, I think Ollama + OpenWebUI may be better if you want network-wide (think tailnet-wide) access to a local LLM. I've set up Ollama on a Proxmox machine (which also hosts VMs for gaming, because of the stupid GPU requirement). I've opened it up to the LAN so that my TrueNAS SCALE installation of OpenWebUI can talk to Ollama on the Proxmox box (minimal sketches of the Ollama call and the Wake-on-LAN packet are after the list):

  • I get to game with a VM
  • I get to have an AI when I need it (mostly during tinkering sessions).
  • I get to turn on the Proxmox VE server via Wake-on-LAN packets through my pfSense box (see the magic-packet sketch below).
  • I get to save power by shutting down the Proxmox host when it's not in use.
  • I can access it from anywhere on Earth as long as my laptop or phone has internet, because the home server is on my Tailscale tailnet.
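Here's roughly what "open to the LAN" buys me: anything on the network (OpenWebUI, a script, whatever) can hit Ollama's REST API directly. A minimal sketch, assuming Ollama was started with `OLLAMA_HOST=0.0.0.0` so it listens beyond localhost; the IP and model name are placeholders:

```python
# Minimal sketch of talking to an Ollama instance exposed on the LAN.
import requests

OLLAMA_URL = "http://192.168.1.50:11434"  # example address of the Proxmox box running Ollama

resp = requests.post(
    f"{OLLAMA_URL}/api/generate",
    json={"model": "llama3.1:8b", "prompt": "Say hello from the homelab.", "stream": False},
    timeout=120,
)
resp.raise_for_status()
print(resp.json()["response"])
```

OpenWebUI on the TrueNAS box just gets pointed at that same URL as its Ollama backend, so every device on the tailnet goes through one web UI.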
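And the Wake-on-LAN part is just the standard magic packet: 6 bytes of 0xFF followed by the target MAC repeated 16 times, sent as a UDP broadcast. A minimal sketch with a placeholder MAC and broadcast address, just to show the packet format (pfSense has its own WOL feature, this isn't how it does it internally):

```python
# Hedged sketch of a Wake-on-LAN "magic packet" sender (illustrative addresses only).
import socket

def wake(mac: str, broadcast: str = "192.168.1.255", port: int = 9) -> None:
    # Magic packet = 6 x 0xFF, then the MAC (without separators) repeated 16 times.
    payload = bytes.fromhex("FF" * 6 + mac.replace(":", "").replace("-", "") * 16)
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as s:
        s.setsockopt(socket.SOL_SOCKET, socket.SO_BROADCAST, 1)
        s.sendto(payload, (broadcast, port))

wake("AA:BB:CC:DD:EE:FF")  # example MAC of the Proxmox host
```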