Ensu - Ente's Local LLM app

14 Likes

They are on fire!

4 Likes

Downloading it now; the model is 1.2 GB. I'll see how it goes.

It is showing 3.1 GB on my end.

It's weird; I'm not sure if they are rate limiting traffic or something, but it shows a network error every 77 MB of download… Let's see if the downloaded model even works after that many disconnections.

What is the specification of your hardware?

Mine is an AMD 8745H + 32 GB DDR5, on Windows 11 Pro.

Edit: the application does work even after over 10 network failures (a server-side issue, I suppose). The application runs on CPU only.

So, based on the 1.2 GB model I downloaded, according to the answers it provided:

  1. Maximum context length is 40k.
  2. It utilises MoE.
  3. It is built on ent.ai’s own proprietary model architecture.
  4. The exact number of parameters and the quantization are unknown.
  5. It supports multiple languages (it returns funny answers when I request the full list of supported languages; I think it is a bug).
  6. It supports image input and is able to describe images, but seems unable to perform transcription / OCR.
  7. It claims it supports tool use, i.e.:
    1. Ente Auth: Ente Auth supports authentication and identity verification processes.
    2. Ente Photos: Ente Photos can be used for image processing tasks such as object detection, facial recognition, and image analysis.
    3. Ente Locker: Ente Locker can be used for secure storage and access management of digital assets.
  8. It cannot perform web search; I think its network access is limited to connecting to Ente’s own products via API.
  9. It cannot convert audio files to text.
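If the tool-use claim in point 7 is real, the app would presumably hand the model tool definitions in the JSON-schema style most local LLM runtimes accept. Here's a generic sketch of that convention; the Ente product name comes from the model's own answer, while the function name and parameter shape are purely illustrative assumptions, not a real Ensu API:

```python
# Hypothetical tool definition in the common JSON-schema "tools" convention.
# "Ente Photos" is from the model's answer; everything else is an assumption.
ente_tools = [
    {
        "type": "function",
        "function": {
            "name": "ente_photos_describe",  # made-up name for illustration
            "description": "Run image analysis on a photo stored in Ente Photos.",
            "parameters": {
                "type": "object",
                "properties": {
                    "photo_id": {"type": "string"},
                },
                "required": ["photo_id"],
            },
        },
    },
    # Ente Auth and Ente Locker tools would follow the same shape.
]

# A runtime would pass this list alongside the chat messages and let the
# model emit a structured call it can route to the matching Ente API.
print(ente_tools[0]["function"]["name"])
```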

That’s what I’ve found out for now.

3 Likes

M2 MacBook Air 24GB RAM - fast internet and on Mullvad connected to South East Asia

The model is also 1.2GB on a GOS Pixel 9a for me and downloaded successfully the first time.

I’m able to chat with it fine. I don’t know what the other issues are. Also not sure which LLM it uses.

I would say it is kinda buggy: incomplete responses, weird repetition of certain parts of the response, etc.

1 Like

It has LFM 2.5 VL 1.6B (Q4_0) (listed as the default) and Gemma 3 4B IT (Q4_K_M) (used for Macs with 16 GB of RAM or more) listed in the source code.
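The RAM-based selection described above might look roughly like this; the model names are from the source code as quoted, but the function, the Mac check, and the 16 GB threshold's exact placement are assumptions for illustration, not Ensu's actual code:

```python
# Hedged sketch of RAM-based default-model selection (illustrative only).
LFM_2_5_VL = "LFM 2.5 VL 1.6B (Q4_0)"   # listed default
GEMMA_3_4B = "Gemma 3 4B IT (Q4_K_M)"   # used on Macs with >= 16 GB RAM

def pick_default_model(ram_gb: float, is_mac: bool) -> str:
    """Choose the bundled model based on platform and available RAM."""
    if is_mac and ram_gb >= 16:
        return GEMMA_3_4B
    return LFM_2_5_VL

print(pick_default_model(24, is_mac=True))   # M2 MacBook Air with 24 GB
print(pick_default_model(32, is_mac=False))  # Windows laptop with 32 GB
```

That would explain why the Mac poster above sees a larger download than the Android one: the 4B Gemma quant is simply a bigger file than the 1.6B LFM quant.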

4 Likes

Oh, the context length is only 4096 and max tokens is only 512… that’s basically unusable… that’s why the responses end at weird places.
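To see why replies cut off, here's a small sketch of the token budget. The 4096 and 512 figures are from the post above; the accounting itself is the standard context-window arithmetic, simplified:

```python
# Why replies truncate: the prompt and the reply share one context window,
# and generation is additionally capped at a fixed number of new tokens.
CONTEXT_WINDOW = 4096   # total tokens the model can attend to (from the post)
MAX_NEW_TOKENS = 512    # generation cap (from the post)

def reply_budget(prompt_tokens: int) -> int:
    """Tokens actually available for the reply, whichever limit bites first."""
    remaining = CONTEXT_WINDOW - prompt_tokens
    return max(0, min(MAX_NEW_TOKENS, remaining))

print(reply_budget(1000))  # 512 -> the generation cap is the limit
print(reply_budget(3800))  # 296 -> the context window is the limit
```

So even a short prompt can never get more than 512 tokens back, and a long prompt gets even less, which matches the incomplete responses reported earlier in the thread.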

2 Likes

It’s still brand new. Give it time… I think it’s much fairer to evaluate this as a legitimate private LLM chatbot option many months later, in Q4.

Thank you for sharing!

1 Like

Edge Gallery or Pocket Pal remain superior.

Has anyone gotten the AppImage to work on Linux? It won’t run for me.

I use Jan which is pretty good, seems like this may be limited in comparison?

They state themselves that it’s more of an early product showcase. From the blog post:

This is not the beginning, nor is this the end. This is just a checkpoint.

Ensu is currently an Ente Labs project. For now, we want to only iterate on the product and its direction, without bringing pricing and stability too early into the picture.

Just to set expectations right, it is currently not as powerful as ChatGPT or Claude Code. Still, it is already quite fun!

1 Like

I don’t think it has anything to do with actual development; it’s just a few parameters.

I would also say Gemma 3 is getting a bit too old. They might want to use Ministral-3-3b which is much more performant and efficient.

1.28 GB for me.

After an update this morning the model is 2.5GB on Android

1 Like

I will have a look later on today and see how their progress is.

1 Like