If you have the hardware, you could also consider running a local model. While local models may not match the full prowess of ChatGPT 5.1, they are good for privacy on more common queries, and in situations that require the raw power of NanoGPT you can fall back to that.
Otherwise, if cloud is the preferred option, then Lumo, Apple, or NanoGPT are likely fine as long as you don't share PII. The caveat is that you have to stay on top of yourself so personal details don't slip into these LLMs.
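Since "not letting things slip" depends on human discipline, one option is a mechanical pre-filter on your side. Below is a minimal sketch of that idea: a hypothetical `redact` helper that masks obvious PII (emails, phone numbers) before a prompt leaves your machine. The regexes are illustrative assumptions, not exhaustive — they won't catch names, addresses, or anything context-dependent, so this supplements rather than replaces caution.

```python
import re

# Rough, illustrative PII patterns -- deliberately simple, not exhaustive.
PII_PATTERNS = [
    (re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"), "[EMAIL]"),
    (re.compile(r"\+?\d[\d\s().-]{7,}\d"), "[PHONE]"),
]

def redact(prompt: str) -> str:
    """Mask obvious PII in a prompt before sending it to a cloud LLM."""
    for pattern, placeholder in PII_PATTERNS:
        prompt = pattern.sub(placeholder, prompt)
    return prompt

print(redact("Email me at jane.doe@example.com or call +1 555-123-4567."))
# -> Email me at [EMAIL] or call [PHONE].
```

You could wire something like this in front of whichever API client you use; anything the filter misses still depends on you reviewing the prompt.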