Local LLM chat


I found this app that can run many different LLMs, and would love to use it and the GUI app. Does anyone know whether it really doesn't report what I chat about, and whether it has any trackers?


PS: I apologize if this topic has been asked before; when I searched for Ollama I didn't see any results.

PPS: If anyone knows of an Android app for accessing the LLM on my computer, that would be fantastic.

Thanks again!


I’ve heard of (but am not familiar with) that specific piece of software. But broadly speaking, it is definitely possible to use a locally hosted, open source LLM without your data being sent to, used, or abused by third parties.

There is a community on Reddit, r/LocalLLaMA, that is an absolute wealth of knowledge with respect to open source, locally hosted LLMs. They can likely answer your questions better and more quickly than most of us here. While it isn’t a directly privacy-focused community, it is focused on self-hosting and open source, so there is a lot of overlap with privacy.


Privacy and AI in today’s world is definitely something worth discussing and exploring.

I stumbled across this recently. I haven’t tried it yet, but it looks very good and might be exactly what you’re looking for:


Just note that Llama is not open source: Meta’s LLaMa 2 license is not Open Source - Voices of Open Source


Thanks for the resources, everyone! The thing is that that subreddit doesn’t have a guide, which would be very nice to have. I wonder if there is one? Especially one that focuses on privacy and security.

It looks like it’s source available rather than open source.



I am a non-expert when it comes to licensing, but based on my reading of the license, it falls somewhere between open source and source available, and calling it either without clarification would be somewhat misleading.

I think it is much more permissive than “source available” implies, because:

You are granted a non-exclusive, worldwide, non-transferable and royalty-free limited license under Meta’s intellectual property or other rights owned by Meta embodied in the Llama Materials to use, reproduce, distribute, copy, create derivative works of, and make modifications to the Llama Materials.

On its own, the above is in line with FOSS principles, but it falls short of being fully free and open source because of these two caveats:

You will not use the Llama Materials or any output or results of the Llama Materials to improve any other large language model (excluding Llama 2 or derivative works thereof).

And because:

If, on the Llama 2 version release date, the monthly active users of the products or services made available by or for Licensee, or Licensee’s affiliates, is greater than 700 million monthly active users in the preceding calendar month, you must request a license from Meta, which Meta may grant to you in its sole discretion.

edit: for reference, that means even companies as large as Twitter, Reddit, or Twitch would not be subject to this clause.

So while it does fall short of fully open source, it sounds like that only applies if you are the owner of an app or service with greater than 700,000,000 monthly active users, or if you are trying to use Llama to improve a competing LLM that is not based on Llama. Basically, it falls short of open source, but not in ways that are likely to impact anyone who is not (1) a multi-billion dollar tech company, or (2) using Llama to build an LLM from scratch. (Or I’m a non-expert and I’ve misread things, and I should be corrected or ignored :slight_smile: )


It would be a pity then, if someone were to pirate it… Oh wait… :joy:

IIRC it should be fine if your LLM is not going to compete against Meta. I think they just want to piggyback off the enthusiasm of FOSS devs.


In some apps, for instance GPT4All, there is an option to share your chats with the model owners, but I don’t know about this app.

Btw, talking about an “open source” LLM is a bit funny: even if the model itself is completely open source, how it was trained, and on which data, is not. So basically you get an open source black box. Personally, I don’t care in practice.


If the training data is open source, wouldn’t that alleviate that issue?

In theory, yes. In practice, who will invest the significant money, time, and resources to run the same training with the same process on the same data?


One other thing I’d add that hasn’t been brought up yet (only because I don’t know your level of knowledge or research on this topic): self-hosting LLMs requires substantial hardware (VRAM) and substantial energy to achieve something that will still fall short of something like GPT-4. There are options optimized for running off a CPU or with low RAM. I haven’t personally used any, but from what I’ve read, they work, though they will be more limited and slower. The exception to all of this is if you have a recent Mac that uses unified memory.


Thanks for letting me know. My knowledge is very limited; I’ve never even self-hosted anything before. I was actually wondering which OS would be best to run it on.

What is considered enough RAM?

It depends on the model, but you need at least 16 GB of RAM for most good local models. It’s also important not to open RAM-hungry apps while you are running the model.
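As a rough rule of thumb (my own back-of-the-envelope estimate, not an exact figure from any vendor): a model's memory footprint is roughly its parameter count times the bytes per weight at a given quantization, plus some overhead for the KV cache and runtime buffers. A quick sketch:

```python
# Back-of-the-envelope estimate of how much RAM/VRAM a local model needs.
# The 20% overhead figure is an assumption, not a precise value.

def estimate_ram_gb(params_billions: float, bits_per_weight: int,
                    overhead: float = 0.2) -> float:
    """Approximate memory needed to load a model, in GB."""
    weight_bytes = params_billions * 1e9 * (bits_per_weight / 8)
    return weight_bytes * (1 + overhead) / 1e9

# A 7B model quantized to 4 bits fits comfortably in 16 GB of RAM:
print(round(estimate_ram_gb(7, 4), 1))   # -> 4.2
# The same model at 16-bit precision needs roughly 4x as much:
print(round(estimate_ram_gb(7, 16), 1))  # -> 16.8
```

This is why quantized models (4-bit or 5-bit) are the usual choice for consumer hardware.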

Check the model explorer here.

For more of this discussion, and for questions, have a look at this subreddit: LocalLlama


I guess one could run Wireshark and listen to any network requests made by this local version. I could do that if I have the time.

You could run the program in bubblewrap and deny it network access.
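For example (a sketch assuming bubblewrap is installed and the LLM binary is `ollama`; substitute whatever program you actually run):

```shell
# Launch the LLM server inside a bubblewrap sandbox whose network
# namespace has been unshared (--unshare-net), so the process has no
# network access at all and cannot phone home even if it tried.
bwrap --ro-bind / / \
      --dev /dev \
      --proc /proc \
      --unshare-net \
      ollama serve
```

Note that with the network unshared you also lose localhost access from outside the sandbox, so this is more useful as a test of whether the app still works offline than as a permanent setup.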

I really like LM Studio. Currently using it with the Mistral 7B model on Apple Silicon. It supports GPU acceleration as well.
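LM Studio can also expose the loaded model through a local, OpenAI-compatible HTTP server (by default on port 1234), which would also answer the earlier question about reaching the LLM from a phone on the same network. A minimal sketch, assuming the local server is running and a Mistral 7B model is loaded (the port and model name are assumptions, adjust them to your setup):

```python
import json
from urllib import request

# Build an OpenAI-style chat completion request aimed at LM Studio's
# local server. Everything stays on your machine: the endpoint is
# localhost, not a cloud API.
payload = {
    "model": "mistral-7b-instruct",  # whatever model you have loaded
    "messages": [
        {"role": "user", "content": "Say hello in one sentence."}
    ],
    "temperature": 0.7,
}

req = request.Request(
    "http://localhost:1234/v1/chat/completions",
    data=json.dumps(payload).encode(),
    headers={"Content-Type": "application/json"},
)

# Uncomment once the LM Studio local server is running:
# with request.urlopen(req) as resp:
#     reply = json.load(resp)
#     print(reply["choices"][0]["message"]["content"])
```

Because the API mimics OpenAI's, most existing client apps that let you set a custom base URL can point at it unchanged.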


I don’t have any expertise in this area, but my impression as a casual and curious observer is that 8 GB of VRAM plus sufficient system RAM is considered the bare minimum to run a mediocre model. There are some limited models intended to run from the CPU + RAM, or on more limited hardware.

However, take note that this feels like a very fluid and rapidly changing field, both in software (the LLMs are evolving and can be better optimized, and it’s possible, maybe probable, that they will be able to do more with less in the near future) and in hardware (GPUs, CPUs, and dedicated AI accelerators all seem to be evolving significantly as well).
