How Mwmbl describes itself: The free, open-source and non-profit search engine.
Why I think this tool should be added
It fits the requirements for search engines perfectly and clearly, and even fulfills 100 % of the "Best-Case" criteria.
As far as I know, it is the only search engine that is completely open-source, actively developed, has an independent index, and does not collect PII (Personally Identifiable Information) by default.
I understand. But can we then add a search engine criterion that says "must include at least [Number] indexed websites" to prevent further search engine suggestions of my kind?
Is there any kind of webmaster tool for this engine to monitor traffic coming from it to a website, like Google Search Console, Analytics, etc.?
I tried with 3 different browsers and it didn’t work.
Just now, it does. But anyway, a search engine that doesn't work everywhere, all (or almost all) the time, just isn't good enough for me to use, let alone recommend to others.
Just to be clear, it’s fairly new and 100% independent of big tech, so don’t expect perfection.
I'm a new user, but before speaking badly of nice new projects that perfectly fit Privacy Guides criteria, IMHO, it's better to do our research and talk to the devs in their Matrix space. They seem like nice people to me and gave me all the answers I've got.
They plan to compete with bigger players and to move to renewable infrastructure in the future.
They're about to switch technology stacks, AFAIK. That's why the main domain has some problems. For now, the modern search engine lives at https://alpha.mwmbl.org/, but it will soon become the new stable version at https://mwmbl.org/
Anyway, if you use a Firefox-based browser and want to help them extend this impressive project, you can do so via their add-on https://addons.mozilla.org/en-GB/firefox/addon/mwmbl-web-crawler/ and I think that by logging in to their website you can also report missing sites.
Just add this project as a beta resource recommendation; it deserves it. We need projects like this to become known and grow.
This is a great point, but also why I wouldn't make it a recommendation yet. This project shows a lot of pure intention, well-thought-out design, and collaborative, distributed ownership, and it is tackling the search problem in a way that I would bet will eventually surpass anything Google or its rivals have ever achieved. Why? Search index size may indicate some level of quality and closeness to "completeness", but there is also a large and ever-growing amount of noise on the internet that Google's original search wouldn't have accounted for, and that Google likely doesn't want to fix because it keeps you searching on google.com more, and thus seeing more ads and growing their revenue (see "Enshittification" by Cory Doctorow).
Even without AI slop and negative business incentives, I believe that now that we have well-understood HCI designs for core services, well-managed community contributions to core knowledge models will be the future of the internet, just due to the local knowledge problem.
In this way, if this project maintains its direction and resists temptation, or at least keeps the core value proposition owned by the community, then we will gradually realize a search engine that performs better than any centralized and opaque service could dream of. This is because the more users are able to critique and influence design in specific, localized contexts, the better the results. Google enabled this a lot in the beginning, publishing PageRank and giving users transparent methods to influence their search experience. But as scale, SEO ranking hacks, and further centralization of the knowledge model took hold, the service got worse, first in consequence and later by design. Mwmbl puts a non-profit and community input at the center, and provided that stays the case, it will be better.
That said, Privacy Guides needs to balance aspirational projects with incredible goals against functional projects. The good news is that by mentioning this search engine on this forum, it comes up in search results on Google and Privacy Guides. Even without a formal recommendation, many people will see this conversation and gain better context, which serves both those digging for better search alternatives and those who need something that works today.
Perhaps it could make sense as an honorable mention, albeit at a very early stage, or perhaps one of us could write up a guide that gives better context, summarizes everyone's opinions, and explains how to get started or involved.
This is a similar issue to Wikipedia's. I hope at some point they look to enable some level of federation capabilities following the model of https://yacy.net/ and Mastodon. The developer's stated objection to federation wasn't that he wanted to rule his own search engine, but rather that federation is slow and quirky, and he wanted a faster, centralized equivalent of Wikipedia for search.
I think having a primary site (like mastodon.social) that many use, while still enabling other servers that focus on particular topics or alternative views, drastically reduces the censorship problem. But as an individual or non-profit, the developer or the organization could be held liable and sued for the "views" expressed on the site, for example, if you host search results for building a bomb, CSAM, or methods of self-harm. Enabling a federated model is something I hope enters the roadmap once the core index makes the project a solid contender to Google; then you could enable federated indexing with the full ability for servers to block one another. I think that's the best model to balance censorship against abusive material and misinformation.
Sure, I just wanted to point it out: since it wants to be something new, there's no need to engage in the same practices as Big Tech or repeat the same mistakes as DDG.