Apple scanning iCloud?

Again, why use any “cloud”? (if you don’t run it yourself, that is).

Although the motive is a good one, it won’t stop there. And the scanning is being done by an algorithm.


Seems to me to be a case of people having used it for bad stuff, and this is why we can’t have nice things…

Where things get awkward is that if you’re serving hundreds of millions of people, you’ll have every conceivable form of bad actor on your platform in sizeable quantity. That’s limitless justification for removing freedom and privacy from all innocent people, within the limits of what the market can bear (assuming no gov’t interference).

I’ve now defeated a hash check:

zip illegalmaterial.zip ./illegalmaterial

I’ve now defeated a hash check that looks inside zip files:

zip -P "eo[ihny}Jo8XWz<@9vGZ" -r illegalmaterial.zip ./illegalmaterial

I’ve now defeated a known hash of a zip containing illegal material:

unzip illegalmaterial.zip && zip illegalmaterial2.zip ./illegalmaterial
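The trick behind all three commands is the same: a plain cryptographic hash only matches byte-identical files, so any re-packaging produces a completely different digest. A minimal Python sketch (the payload bytes and archive name are placeholders, not anything from the original post):

```python
import hashlib
import io
import zipfile

def sha256(data: bytes) -> str:
    return hashlib.sha256(data).hexdigest()

payload = b"placeholder bytes standing in for any flagged file"

# Zipping the file produces an entirely different byte stream...
buf = io.BytesIO()
with zipfile.ZipFile(buf, "w") as z:
    z.writestr("file.bin", payload)
zipped = buf.getvalue()

# ...so a database of hashes of the raw file never matches the archive.
assert sha256(payload) != sha256(zipped)

# Likewise, unzipping and re-zipping yields a new archive hash whenever
# any metadata (timestamps, compression level, file order) differs.
```

This is why exact-hash scanning only catches files shared verbatim, which is presumably the point being made.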

This is Windows 3.1 WinZip-era stuff, so I’m left wondering what this really is:

  1. An almost meaningless virtue signal to the gov’t and media to reduce pressure and have something to hide behind when something hits.
  2. Part of a vastly more intrusive plan that has some substance but everyone will recognize as a tool of turnkey tyranny.

There is, of course, nothing preventing any cloud hosting service from running hashes on all content it hosts and flagging content other than CSAM as a violation of its ToS. There is no shortage of actors around the world, whether state or private, who will eagerly build hash databases of material they object to but which is completely legal. I can think of some rather spicy political memes which tech companies would consider ToS violations.
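In its simplest hash-based form, the server-side scanning described above is just a set lookup against a list someone else controls. A hypothetical sketch (the blocklist entries here are made up for illustration):

```python
import hashlib

# Hypothetical blocklist: hashes supplied by some outside authority.
# Nothing technical stops entries for perfectly legal material
# (memes, leaked documents) from being added to such a list.
blocklist = {
    hashlib.sha256(b"objectionable-but-legal meme").hexdigest(),
}

def scan(uploaded: bytes) -> bool:
    """Flag an upload if its hash appears in the blocklist."""
    return hashlib.sha256(uploaded).hexdigest() in blocklist

assert scan(b"objectionable-but-legal meme") is True
assert scan(b"holiday photo") is False
```

The mechanism is content-neutral: whoever curates the hash database decides what gets flagged, which is exactly the concern raised here.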

Correct me if I’m wrong, but doesn’t MS already have this sort of access for files located on Windows PCs? No doubt Apple has that sort of access to files stored on Macs.

You’re right. Either through OneDrive or the ToS you accept before starting Windows for the first time, I’m guessing they have the right (to which you consented) to scan your hard drive(s).


I doubt it will be hash based. I’d bet on machine learning, where positives are checked by humans, although given the subject matter, training might be trickier than normal (far from impossible, though, as the original images aren’t needed for machine-learning algorithms to improve).

While, in theory, I wouldn’t mind using a service with such scanning, I don’t trust Apple with the implementation and have my doubts about the effectiveness of the practice. It’s worth remembering that protecting children has always been used as an excuse for taking away privacy and rights.

Also, this proves iCloud isn’t really encrypted end-to-end.

“Necessity is the plea for every infringement of human freedom. It is the argument of tyrants; it is the creed of slaves.” William Pitt


Microsoft, Google, and Facebook do the same thing and have been for years at this point. If anything, Apple is the last domino to fall.

Apple has always claimed to hold iCloud keys. It’s how they comply with three-letter agencies.

Louis Rossmann has a good video about this. It raises some questions, that’s for sure. It wouldn’t take much effort for hashes of anything to get added to the list.

It’s not really any different from regular moderation. When a user reports some content, a moderator needs to check it. Reported content could be child abuse material. Checking it isn’t distribution.

If Apple’s solution was hash based, it would be even more useless.

A moderator doesn’t need to see the original content. It can be automatically altered, for example partially blurred, in a way that should allow recognizing whether the original is illegal without the copy being illegal itself.
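The “partially blurred” idea could be as simple as a mean filter that destroys detail while preserving gross structure. A crude sketch under that assumption (the 3×3 grayscale values are invented for illustration):

```python
def box_blur(pixels):
    """3x3 mean filter: a crude way to degrade an image for review so a
    moderator can judge it without seeing a faithful copy."""
    h, w = len(pixels), len(pixels[0])
    out = [[0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            neighbours = [
                pixels[j][i]
                for j in range(max(0, y - 1), min(h, y + 2))
                for i in range(max(0, x - 1), min(w, x + 2))
            ]
            out[y][x] = sum(neighbours) // len(neighbours)
    return out

sharp = [[0, 255, 0], [255, 0, 255], [0, 255, 0]]
blurred = box_blur(sharp)

# Extremes are averaged away: detail is lost, gross structure remains.
assert max(max(r) for r in blurred) < 255
assert min(min(r) for r in blurred) > 0
```

Whether a blurred copy is actually enough to make a legal determination is a separate question; this only shows the mechanism is trivial to build.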

I pity the poor soul who gets to review this, because he will become desensitized to the content over time.

Yeah. That’s another big problem. Especially given that moderation for those tech giants is often underpaid and the workers don’t have much choice.

How long before they are scanning for “wrong think”?

Back in January, Bank of America just handed over, without being asked, information on all their customers who were in Washington, D.C. on January 6.

I hear there is a one-in-a-trillion chance of an image getting flagged. Even so, given the sheer number of images stored in cloud accounts, it looks like one in three people are going to get flagged for something.
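Claims like this can be sanity-checked: per-item false-positive rates compound across a library roughly as 1 − (1 − p)^n. A back-of-the-envelope sketch (the library size is illustrative, and note Apple’s stated one-in-a-trillion figure was per account per year, not per image):

```python
def p_any_flag(p_single: float, n_items: int) -> float:
    """Chance of at least one false positive across n independent checks."""
    return 1 - (1 - p_single) ** n_items

# Even reading the rate as per-image: with a one-in-a-trillion rate and
# a 20,000-photo library, the chance of any false flag stays tiny.
risk = p_any_flag(1e-12, 20_000)
assert risk < 1e-7
```

So the real worry is less the advertised false-positive math and more what ends up in the hash list, as others in the thread point out.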


Here is privacy, security, and OSINT creator Michael Bazzell explaining how the photo scanning is based on a hash value. He goes on to cover the process, the likelihood of manual review to confirm matches, and the cons and problems of this approach. The following link jumps in at ~5 minutes into his podcast:


I listened, and I partially agree with what he says.
But to me, it all boils down to this: don’t use a cloud you haven’t set up yourself.
Governments worldwide should do more and get more funding to hunt down active pedophiles. I know, easier said than done.
They’re part of the problem we have today with governments wanting to intrude on our privacy (among other things).


Here, in this video, Braxman goes into a lot of detail on how this is not going to stop with iCloud.


Braxman is pretty legit. He’s surprised me a few times with perspectives on surveillance I haven’t heard elsewhere.


Carey Parker, on Firewalls Don’t Stop Dragons, has some interesting points of discussion regarding all this.
After watching Braxman and listening to the podcast, I think it’s up to the police to detect CSAM. It’s not up to the tech giants to open Pandora’s box and hollow out privacy even more.
Because how did they do it before all this? Good detective work and collaboration between police forces worldwide.