Feedback on Fully Local AI Compliance Browser Extension

Hi, I’m Nate, one of the founders of Sonomos, a browser extension that detects and masks PII locally on your device.

The architecture: PII detection runs entirely on-device using a BERT-based NER model via Transformers.js. No data leaves the browser. We detect 60+ PII types across text inputs and 50+ file formats, with a keystroke guard scoped to the browser only. Users control which sites it’s active on.
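For readers curious what the masking step looks like, here is a minimal sketch. The `{ label, start, end }` span shape is a simplified stand-in for what an on-device NER pipeline (such as a Transformers.js token-classification pipeline) would emit after aggregating sub-word tokens; `maskEntities` and the example spans are hypothetical, not our actual implementation.

```javascript
// Replace each detected span with a placeholder like [PERSON], working from the
// end of the string so earlier character offsets stay valid after replacement.
function maskEntities(text, entities) {
  const sorted = [...entities].sort((a, b) => b.start - a.start);
  let masked = text;
  for (const { label, start, end } of sorted) {
    masked = masked.slice(0, start) + `[${label}]` + masked.slice(end);
  }
  return masked;
}

// Hypothetical NER output for the sentence below:
const text = "Contact Jane Doe at jane@example.com";
const spans = [
  { label: "PERSON", start: 8, end: 16 },
  { label: "EMAIL", start: 20, end: 36 },
];

console.log(maskEntities(text, spans));
// → "Contact [PERSON] at [EMAIL]"
```

Masking from the end of the string is a small but important detail: replacing spans front-to-back would shift the character offsets of every later entity.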

The extension itself collects no data. Our website handles accounts and payments (via Stripe) for the paid tier, but that’s entirely separate from the extension’s operation. We use Supabase for account management, which stores essential account information: name, email, and subscription status. The permissions in our manifest.json are the following:

"permissions": ["storage", "activeTab", "scripting", "offscreen", "alarms"]

Our host_permissions just has "<all_urls>". This permission is required because users can activate Sonomos on any site they choose. We have no way to scope this at install time.
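To illustrate how per-site activation can work client-side even with a broad host permission, here is a hedged sketch. The enabled-site list living in `chrome.storage.local` and the `isEnabledFor` helper are assumptions for illustration, not our actual code:

```javascript
// Pure helper: is this page's hostname on the user's enabled list?
// Matches the site itself and any of its subdomains.
function isEnabledFor(hostname, enabledSites) {
  return enabledSites.some(
    (site) => hostname === site || hostname.endsWith("." + site)
  );
}

// In the content script (browser-only; guarded so the sketch runs anywhere):
if (typeof chrome !== "undefined" && chrome.storage) {
  chrome.storage.local.get({ enabledSites: [] }, ({ enabledSites }) => {
    if (isEnabledFor(location.hostname, enabledSites)) {
      // attach the keystroke guard / PII masking here
    }
  });
}

console.log(isEnabledFor("chat.openai.com", ["openai.com"])); // → true
console.log(isEnabledFor("example.com", ["openai.com"]));     // → false
```

The broad host permission grants the *capability* to inject on any site; the storage check is what makes activation opt-in per site.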

I’d genuinely value your technical feedback, especially on our permission model and whether our approach meets your criteria for browser extension recommendations!

Chrome extension link: https://chromewebstore.google.com/detail/sonomos/kmjigjbdejoppdjadkffdpjdfjkjncdi

Website: https://sonomos.ai

2 Likes

Given the sensitive info this product will be used for, I think it would be prudent to only trust such a tool if it’s open source, independently audited for privacy & security with the prompt release of the audit report.

I don’t see any mention of anything of the sort. I like the idea. Just don’t like how it is right now.

2 Likes

The bare minimum is an open-source repository I can audit and git clone.

I get a 404-like response when I look for one.

1 Like

Especially when it is AI related and we’re going to use it as a privacy tool.

1 Like

Interesting concept. From a commercial and legal point of view, I am wondering how you look at the EU AI Act. As I understand it, regardless of what data you put into the models, each use case would still need to be assessed under the AIA.

I am wondering therefore what is your target audience?

I think it is easier to centrally make available the solutions you allow to handle personal data (with no training on input data) and block all other solutions, rather than to try to filter input.

I am also wondering how your solution deals with more localized patterns, like company registration number formats or social security numbers, which differ quite a bit per country.
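For illustration, country-scoped pattern detection to complement an NER model could look roughly like the sketch below. The regexes are deliberately simplified examples, not production validators (real checks need checksum logic, e.g. IBAN mod-97), and all names here are hypothetical:

```javascript
// Simplified, per-country PII patterns (illustrative only):
const countryPatterns = {
  US: { ssn: /\b\d{3}-\d{2}-\d{4}\b/ },            // US Social Security number
  NL: { kvk: /\b\d{8}\b/ },                        // Dutch company registration (KvK): 8 digits
  DE: { handelsregister: /\bHRB\s?\d{1,6}\b/ },    // German commercial register
};

// Return matches only for the locales the user has enabled.
function findLocalPII(text, locales) {
  const found = [];
  for (const locale of locales) {
    for (const [kind, re] of Object.entries(countryPatterns[locale] ?? {})) {
      const m = text.match(re);
      if (m) found.push({ locale, kind, value: m[0] });
    }
  }
  return found;
}

console.log(findLocalPII("My SSN is 123-45-6789", ["US", "DE"]));
// → [ { locale: 'US', kind: 'ssn', value: '123-45-6789' } ]
```

Scoping patterns to user-enabled locales keeps false positives down: an 8-digit number is only flagged as a KvK number when the Dutch rule set is actually switched on.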

Lastly, I think it would be interesting if you could also filter which models may be used, so that input to disallowed models is blocked.

Can you explain a bit about how configurations are deployed?

1 Like

I make no exceptions for any web extensions, AI-related or not.

It is great to see more tools prioritizing the “Local-First” approach, especially for something as sensitive as PII (Personally Identifiable Information) masking. Using Transformers.js for on-device BERT-based NER is a smart move to maintain privacy without sacrificing the sophisticated detection needed for 60+ data types.

1 Like

Something that I chanced upon on Reddit that reminded me of this thread: https://clokr.dev

GH: https://github.com/progetticyber/clokr-extension — CLOKR (AI Privacy Shield): Chrome extension that masks sensitive data (email, IBAN, CF, phone) before sending to AI chatbots


1 Like