Security and privacy should be as good as possible by design, mainly because it's decentralized, unlike centralized services that are built on reputation. Since anyone can run these nodes, there's no trust given and no reputation; transparency through code that can be proven/audited by everyone 24/7 is the only truth here.
This is getting better every day. They're currently quite busy with the native file versioning implementation, see here.
Further, records received from Google indicate that, in the months preceding pompompurin's correspondence with omnipotent, FITZPATRICK appears to have registered a Google account with the email address conorfitzpatrick2002@gmail.com to replace the older email address (conorfitzpatrick02@gmail.com) that pompompurin had identified.
Google is indeed always keen to comply with government data requests.
It's also worth noting this particular individual was involved with:
unauthorized purchasing and selling of stolen identification documents,
unauthorized access devices, unauthorized access to victim computer systems, and login credentials through his operation of a data breach website named "BreachForums."
So you can expect anyone who holds data on you to hand it over if you're doing that.
Government invaded his privacy because he was selling other people's privacy
This guy was hilariously bad at opsec. If you read further down, "he used a VPN", but he also accessed the same accounts directly without a VPN, and those Google accounts were associated with Google Pay and with Verizon phone numbers tied to him.
No, I expect them to do the bare minimum to comply with the law. It's up to courts to judge whether someone is guilty or not, not for private entities to do so when deciding whether they will hand over data. It should be an administrative decision, not a moral one. In fact, if the government already had enough data for a conviction, then they wouldn't need to ask for more.
The agent would have done exactly that: got a warrant and asked for the data. Google simply complied, as any provider would. They're pretty clear about that.
How does Google handle government requests for user information?
A variety of laws allow government agencies around the world to request the disclosure of user information for civil, administrative, criminal, and national security purposes. Google carefully reviews each request to make sure it satisfies applicable laws. If a request asks for too much information, we try to narrow it, and in some cases we object to producing any information at all. For more information, see our policies for how Google handles government requests for user information.
Those requests go through LERS which allows law enforcement to submit subpoenas and various court orders.
My comment was about the idea that an entity would comply with data requests based on moral judgement, which you had implied is to be expected, not about Google's actions in this case or their official policy on the matter.
Yes, I believe they followed their stated policy: they concluded that the request satisfied all applicable legal conditions, not that they morally supported legal action against that individual.
Reading through this thread, it seems like everyone is disagreeing only because everyone is making blanket statements without defining their threat models and assumptions. I think 90% of the disagreement would be gone if everyone just prefaced their statements with "in the context of ____" or "if protecting your data from ____ is part of your threat model, then…"
IF protecting your photos from Google itself (including the company, employees, and their AI tools), from the intel agencies they most likely continue to cooperate with, or from law enforcement with legal court orders is part of your threat model, then Google Photos isn't a good choice, and isn't secure with respect to those threat actors. But it is still quite secure against other common threats.
IF those actors are out of scope for your threat model, then Google Photos is arguably pretty darn secure. Though, by nature of not being client-side encrypted, it is still likely vulnerable to hacking or phishing/social engineering.
So in my view, you are pretty much all mostly correct; you are just making different assumptions about threat model without actually stating those assumptions explicitly.
Given that Google scans your photos and false positives are known to exist, I don't see how that can be an "if". Google itself can report you to the authorities for CSAM or something else. I know someone whose videos on Google Drive were deleted for supposed copyright infringement, and who was reported by Google to the authors' society. The person in question had been involved in the production of the content and, as part of that, had been allowed to keep a copy.
With what they had on him, it's very likely the case, because it is the path of least resistance: a US citizen in the US who apparently did something.
This guy wasn't some international terrorist/spy; he was a US citizen and a criminal, so that hardly matters. If you were a foreign spy using Google to conduct your trade, then that's still a you problem. Not everyone has that kind of threat model.
You know how people are though: any chance to hate on "big tech", while pretending everyone needs the same kind of opsec/consideration as America's number 1 wanted spy. Over the years I've seen this rhetoric in privacy communities, and it largely stems from ancient Snowden-era prejudices, particularly against US companies.
At the end of the day, if you're putting something on someone else's computer and it's not encrypted (aka the cloud), then it isn't safe from legal subpoena anywhere, and it really has nothing to do with "the company", where they are located, or their size.
A lot of companies which store your photos unencrypted potentially scan them against something like PhotoDNA, and this is for their own liability protection.
There are two solutions:
Use E2EE first
Don't store CP in the cloud (or have it in the first place)
The fact of the matter is, technology like PhotoDNA has extremely low false positives, expected to be less than 1 in 1 trillion. The only documented case I can think of is the one with the two dads that took a picture of their son's genitals for medical purposes. In that particular case he really should have thought about not transmitting something so sensitive on the internet. I certainly wouldn't be uploading that to some "medical provider's portal" as that's likely not E2EE either. I would also want to ensure that data was deleted when the problem was solved.
That uses a different system, and unfortunately DMCA is the issue here. Copyright trolls do, from time to time, go around reporting things which they don't have proper ownership of.
Not sure how copyright trolls would have access to someone's personal Google Drive files (if that's what you're suggesting), and DMCA doesn't apply either here in Portugal or in Ireland…
Living in a country that was a dictatorship just 50 years ago, you can't imagine how that kind of reasoning triggers alarm bells. I'll just say that it's for good reason that such scanning is illegal under Portuguese law. Not that Google cares about that.
Is that when matching against known material, or also when finding unseen material? According to Google:
"For many years, Google has been working on machine learning classifiers to allow us to proactively identify never-before-seen CSAM imagery so it can be reviewed and, if confirmed as CSAM, removed and reported as quickly as possible." - Fighting child sexual abuse online
A bit creepy to have employees of some random company (pretty sure they'll outsource that kind of stuff) looking at people's photos for possible matches, people who can live in different parts of the world and have widely different cultural backgrounds, and hence different notions of what is acceptable or not.
They don't, but they submit a DMCA request to Google, and Google does matching based on the content ID. Google obviously doesn't want Google Drive to be used for distributing copyrighted content.
They're a US company, so they have to abide by US law. If it was illegal according to Portuguese law, then it would likely mean that product would not be available in Portugal (and to users with a Portuguese IP address), but seeing as it is, I doubt this is the case.
I was talking about PhotoDNA. In any case that is not the same system used for copyright infringing content. PhotoDNA works with known content and fingerprints (at least it did originally).
It's not; it's largely automated. The actual "real" CSAM is never handled by Google; that's mostly Interpol/NCMEC. They provide hashes to PhotoDNA.
It's not just Google that uses PhotoDNA; Microsoft, Adobe, Reddit, and Discord do as well. Any company of that size generally needs an automated approach to dealing with data. When you've got millions of users, there will be a portion who are doing the wrong thing.
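At its core, this kind of server-side matching is just a fingerprint lookup against a database of known-bad hashes. Here is a minimal sketch of the idea in Python; the names (`KNOWN_HASHES`, `fingerprint`, `scan_upload`) and the hash values are purely illustrative, and PhotoDNA itself is a proprietary *perceptual* hash that survives resizing and re-encoding, unlike the exact cryptographic hash used here for simplicity:

```python
import hashlib

# Hypothetical database of fingerprints of known content, as supplied by a
# clearinghouse such as NCMEC. Illustrative value: this is sha256(b"test").
KNOWN_HASHES = {
    "9f86d081884c7d659a2feaa0c55ad015a3bf4f1b2b0b822cd15d6c15b0f00a08",
}

def fingerprint(data: bytes) -> str:
    # Simplification: a cryptographic hash only matches byte-identical files.
    # A perceptual hash like PhotoDNA also matches visually similar images.
    return hashlib.sha256(data).hexdigest()

def scan_upload(data: bytes) -> bool:
    """Return True if the upload matches a known fingerprint and should be flagged."""
    return fingerprint(data) in KNOWN_HASHES
```

The key design point is that the provider never needs to ship the original material to every server; distributing only the hash set is enough to flag matches at upload time.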
The content in question wasn't being distributed/shared with anyone. Also, by definition, any photo anyone takes is copyrighted to the person who took it, any text document copyrighted to the person/company who produced it, and so on. The matter here is content being shared without adequate authorization; it certainly is not the mere possession/sharing of any copyrighted content.
No, they're an Irish company, subject to the laws of Ireland and of the country of the customer they're serving through that subsidiary, hence my mention of Ireland.
That's an incredibly naive view that I was not expecting. There's no lack of stuff that is illegal under Portuguese law and is nonetheless allowed to go on for years or decades. Pretty sure that doesn't happen only in Portugal. For instance, Tesla's Sentry Mode is illegal, was declared illegal in 2019 by the CNPD, a government agency, and Teslas still come with it to this day. I could give you dozens if not hundreds of other examples. Thing is, you need someone who feels aggrieved by it to go to court for the court to confirm that it is illegal. That can take a long, long, long time to happen. The CNPD could take it to court, and they should be the ones doing so, but they probably don't have the money/prefer to spend it elsewhere.
Doesn't need to be. If it was something that matched a content ID, they don't want that on their servers.
As far as I can tell, using PhotoDNA in Ireland is perfectly legal.
I couldn't find anything about automated scanning like that being illegal on the server side, when the data is not encrypted. Part of the reason for this is that by providing the data to «company» you're agreeing to their terms and conditions. None of them keep it a secret that they use that kind of technology.
That article you linked doesn't support your argument; that's about photography of roads and keeping sensitive data like number plates and pictures of people. Obviously law is complex, and there are sometimes things that go unnoticed for a while, but I doubt the usage of PhotoDNA on Google's servers is one of them.
Well, that's less of a legal question and more of an ethical one. These companies will usually do anything to reduce their own liability when someone does the wrong thing. They will usually seek a solution that is both scalable and brings the least cost to the company.
Thing is, I am not. EULAs, terms, whatever you want to call them, generally have no legal validity here in Portugal, not due to anything in them but due to the fact that only a proper ID process would make them valid. When you click an "I agree" button, or sometimes not even that, there's no record to conclusively prove who accepted the terms and on what date. That's required, and it's why for anything more serious one is forced through a cumbersome process of sending out ID copies, as well as signing by hand every page of the terms being agreed to. Thankfully the id.gov.pt system is simplifying that process.
I presented the article to support the fact that big companies violate the law all the time, and don't necessarily care about complying even after they were made aware of their non-compliance. Anything else about the case is irrelevant to that point.
Why would you doubt that? Article 26 of the Portuguese Constitution states that everyone has a right to their word, good name and reputation.
It's as a result of this that, for example, a credit scoring system as it exists in the USA and other places is not allowed. Such a system places on the individual the burden to prove he is able to repay loans, do so on time, and so on. In other words, because there are people who default on their loans, each individual has to demonstrate their ability to fulfill their contractual obligations by first taking small loans, maybe having to put down a deposit to get a credit card (something also illegal here due to the same principle), etc.
It's exactly the same logic when you mention content filtering by default because some people do bad stuff. Each individual is being treated as if they were disreputable.
I'd argue it's both, but as per the above my focus here is on the legal side.
Indeed, my point is that they might find it cheaper to violate certain laws than to modify the product for certain markets. In other words, they might actually deem it worthwhile to increase their liability under some circumstances.
That's extremely misguided. It's well documented that they use illegal data from mass surveillance to gather information on regular domestic crimes, then use it to go after legal evidence that can be presented in court. Also, if you read Newsweek, you should know that they treat Trump supporters as domestic terrorists. So, yeah, grandma needs to have the same threat modelling as Al Qaeda.
Maybe so, but you're not a lawyer either, and trying to stretch the interpretation of a single part of the constitution to apply to whatever it is you want elsewhere is unlikely to ever work.
The servers Google uses likely aren't located within Portugal, and if someone ever does try it in court (assuming Google lost, including appeals), they would just discontinue service for that feature.
I wouldn't look for legal opinions in a tabloid with its own agenda. Anyway, if you're done having serious discussion, then I won't bother wasting my time here.
The TLDR of it is: if you don't want anyone scanning your data when you upload it, use E2EE. There really is no substitute, and all the mental gymnastics isn't going to change that.
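To make that TLDR concrete: "use E2EE" means the encryption happens on your device before upload, so the provider only ever holds ciphertext it cannot scan. Below is a toy sketch of the idea in Python, using a SHA-256-based counter-mode keystream. This is illustration only (no authentication, no key management); a real deployment should use a vetted tool or library, not hand-rolled crypto like this:

```python
import hashlib
import secrets

def keystream(key: bytes, nonce: bytes, length: int) -> bytes:
    """Derive a keystream by hashing key || nonce || counter (CTR-style)."""
    out = b""
    counter = 0
    while len(out) < length:
        out += hashlib.sha256(key + nonce + counter.to_bytes(8, "big")).digest()
        counter += 1
    return out[:length]

def encrypt(key: bytes, plaintext: bytes) -> bytes:
    """Encrypt client-side; only this blob ever leaves the device."""
    nonce = secrets.token_bytes(16)  # fresh random nonce per file
    ks = keystream(key, nonce, len(plaintext))
    return nonce + bytes(a ^ b for a, b in zip(plaintext, ks))

def decrypt(key: bytes, blob: bytes) -> bytes:
    """Recover the plaintext locally; the server never sees the key."""
    nonce, ct = blob[:16], blob[16:]
    ks = keystream(key, nonce, len(ct))
    return bytes(a ^ b for a, b in zip(ct, ks))
```

The point is architectural, not the particular cipher: any server-side scanner (PhotoDNA, content ID, ML classifiers) only ever receives the output of `encrypt`, which matches no fingerprint database.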
I'm pretty sure that the CNPD used a lot of lawyers in the Tesla case, and they cited that same part of the constitution, dealing with personal rights, specifically also as they pertain to the use of technology. If anything, I'd argue it's far-fetched to say Tesla cannot record public space on privacy grounds but that, under the same law, Google can scan private data. Perhaps worth mentioning that corporations have no constitutional rights; individuals do. So there wouldn't be any right elsewhere in the constitution Google could claim was being protected by said scanning.
They're not in any way the same thing. You're giving them particular data to safekeep, and they want to check that they are even allowed to be in possession of that data. If that is a term of service, I doubt there's anything in Portuguese law that forbids it. You either use the service and agree to the term, or Google decides not to let you use the service.
I very much doubt there is a law in Portugal that states a company must do business with you no matter what.
The matter you referenced was about recording people's faces and number plates in public places/roads, and the retention of that data; that doesn't sound very similar to me. The end of that article states (translated):
These two concepts must be applied taking into account that the purposes in question "must be legitimate". «Data processing must always respect the principles of adequacy, necessity and proportionality», concludes the CNPD.
Checking images against known fingerprints of CSAM is likely to be considered "legitimate", because Google state they don't want to be holding that kind of content. I also seriously doubt you'd find a judge that would argue otherwise.