Oh, that’s interesting, I did not know that existed. And yes, perhaps they are just using it to generate commit messages.
My main issue is that using AI directly within the code, especially within security-related areas of an app like the commits linked in the OP, would lead to increased security vulnerabilities, which could reduce privacy.
However, I kind of assumed that Claude was touching the code, but I don’t know that for sure, which is why I said in the OP that I do not know to what extent AI is being used.
I suppose this would be something to just ask them about on GH, to clarify what AI is being used for.
I wish this surprised me more, but considering Proton Scribe and Lumo, I figured this was probably going to happen. Not great. Personally, I don’t think products where security is critical, like a password manager or other E2EE services, should use AI directly in the code. It seems like a great way to accidentally shoot yourself and your users in the foot.
Developers have been shooting themselves in the foot since Fortran punch cards, so I’m not too concerned. The lack of AI didn’t stop the Therac-25 from frying people; it was a lone developer doing their best with no standardized safety processes for releasing software on safety-critical devices.
If AI-generated code is enough to bring down an organization and introduce severe vulnerabilities, that means it was already built on a house of cards (see the Tea app). Privacy and security in software encompass the entire software development lifecycle (SDLC), not just one tool that generates code.
It is true that you can mitigate the risk by reviewing the work, and you should anyway. However, why risk adding a tool that, everywhere you use it, comes with a warning about its inaccuracy and is wrong the majority of the time? You could argue it saves time, but to use it correctly you now need to spend time prompting and reviewing/rewriting the code the AI wrote, so it’s not necessarily saving as much time.
It’s not just AI; I say the same thing about C, C++, and other memory-unsafe languages. Why use a tool that makes it easier to shoot yourself in the foot if you’re not grandfathered in? A classic example is sketched below.
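For a concrete illustration (a minimal sketch of my own, not from any of the linked commits): the classic out-of-bounds write below compiles cleanly in C, while a memory-safe language would reject the equivalent code at compile time or stop it with a guaranteed runtime error.

```c
#include <stdio.h>
#include <string.h>

int main(void) {
    int  is_admin = 0;      /* flag that should never become nonzero here */
    char name[8];           /* 8-byte buffer */

    /* 16 characters plus the terminating NUL written into an 8-byte
       buffer: undefined behavior in C. It compiles without error and,
       depending on the compiler's stack layout, can silently clobber
       is_admin or the return address. */
    strcpy(name, "AAAAAAAAAAAAAAAA");

    if (is_admin)
        printf("is_admin was corrupted by the overflow\n");

    return 0;
}
```

That’s the foot-gun: nothing in the language stops you, and whether anything visibly breaks depends on luck and compiler flags.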
Developing secure software starts with minimizing risk, and one of the ways to do that is by not using tools that introduce, or make it easy to introduce, vulnerabilities in the first place.
A place where I think AI is sometimes useful is after a Google search or a search on Stack Overflow fails. In that case you would still need to review any code anyway, and it’s likely quicker than asking the question on Stack Overflow or wherever. Plus, you’re likely not taking hundreds of lines of code, just a small snippet. And if it gets it wrong, it’s similar to not finding the answer on Google or Stack Overflow.
That being said, caution is still needed: unlike Stack Overflow, it doesn’t come with upvotes or other comments, and of course it will sometimes hallucinate. But I think it’s less risky to use it as a backup for Stack Overflow than to let the AI write code directly.
Yeah, the Tea app was definitely really poorly secured from basically every angle. I don’t think AI would immediately destroy an app’s security and cause everything to crumble; I just don’t think the benefits are worth the risk. And yeah, developing secure software is a full process with various aspects to it.
Fair enough, and I appreciate the stance on AI. I can see this resonating with many reasonable people, especially ones I know.
Indeed, it’s better to be transparent about AI co-authorship than not.
It just makes bad developers write bad code faster, and good developers write good code faster. I currently use AI significantly for development, and the output is generally the same as if I had written it myself.
To be blunt, if you aren’t a developer, or if you simply have not learned how to prompt AI well, then I don’t think you understand that it’s just another tool in the toolbelt.
Y’all need to talk to some devs and realize that “AI Bad!” is not a way to go through life.
I’ve used Claude, ChatGPT, and Gemini for first drafts of reports and other common, repetitive documents plenty of times. It’s never more than 40-60% of what I need, but that’s 40-60% of the job I don’t have to do anymore. And when I need to note authors and reviewers, no one blinks an eye when I mention an “AI-generated early draft,” because half a dozen other people were going to ding it up no matter who wrote the first draft. When professionals do proper work, using an AI starting point is not a problem in any industry at all.
Posts like this do nothing but expose ignorance of how things work IRL.
Thanks for posting this here; I was going to link it when I checked back today, or if you responded, but you beat me to it. Thanks again for taking the time to respond.