Correct. If we could magically end the rape of children and stop the propagation of images of such, I think we'd all be on board. But when a system would realistically just push most of it elsewhere and violate a bunch of innocent people's privacy, people get nervous.
It's like being asked to have an invasive, detailed scan of your body shown to TSA personnel (and recorded) when you fly. The motive is apparently to prevent violence. But there were still incidents on flights when that was being done. It wouldn't have caught the box cutters on 9/11. The stated motive ceases to be part of the equation for me.
Pushing it elsewhere is still a win. You don't want this to have easy cover. Keep in mind FB reported close to 17M pieces of CSAM last year. If the ability to detect it were such a deterrent, why are their numbers so high?
Their reported numbers were around 350 last year, compared to almost 17M for Facebook. You could theorize that Apple simply has little CP, except that in the Epic lawsuit we saw exec emails released saying the exact opposite. So clearly not much scanning has been going on.
I think it's actually impossible to use an AI to determine this type of thing. Even human courts occasionally get it wrong with images that are open to debate (for example, ones missing context) on whether they were abusive or not (parents bathing their kids, as one example). Then you have the whole can of worms that pedophiles will find sexual interest in images taken in non-abusive contexts and use them nonetheless. How is an AI supposed to determine that? Even our laws can't handle it.
There are also images that can be abusive if made public but non-abusive if kept private. For example, what about a kid taking nude pictures of themselves (many kids don't have parentally moderated accounts) and putting them on iCloud (an iPhone does this automatically with default settings)?
It's a complete nuthouse and there's no way to do it.
(Also don't get me started on drawn artwork (or 3D CG) that people can create that can show all sorts of things and is fully or partially legal in many countries.)
They're not using an AI for that purpose. They're just computing a hash of the image and sending it along with iCloud uploads. The hashes are of images that have been identified by humans as containing child pornography.
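Conceptually, the matching step described here is just set membership on hashes. A minimal sketch (function and variable names are hypothetical, and SHA-256 stands in for Apple's actual NeuralHash):

```python
import hashlib

# Hypothetical blocklist: hashes of known images that humans have
# already identified as CSAM. In Apple's system these would be
# NeuralHashes, not SHA-256 digests.
known_hashes = {
    hashlib.sha256(b"known-bad-image-bytes").hexdigest(),
}

def should_flag(image_bytes: bytes) -> bool:
    """Flag an upload only if its hash matches the known set."""
    return hashlib.sha256(image_bytes).hexdigest() in known_hashes

print(should_flag(b"known-bad-image-bytes"))  # True: it's on the list
print(should_flag(b"my-vacation-photo"))      # False: novel images never match
```

The key design property is that the scanner never "looks at" the image content itself; with an exact cryptographic hash like this, only byte-identical copies of already-known images would match.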
It's also possible that a picture of an avocado will get flagged. A nude photo of someone who's shaved themselves is unlikely to look so much like an actual CSAM photo that it'll get flagged. It's certainly possible, but again, so's the avocado.
They're not hashes in the way md5sum or sha256 is a hash. They are _neural_ hashes, which don't try to fingerprint one specific file but to tag any image deemed sufficiently similar to the reference image. Cryptographic hashes are designed so collisions are vanishingly rare; neural hashes are designed to collide on similar-looking images.
To add, there are natural neural hash collisions in ImageNet (a famous ML dataset). Images that look nothing alike to us humans.
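A toy way to see the difference: a simple "average hash" (a stand-in here; NeuralHash is a learned model, far more sophisticated) stays stable when an image changes slightly, while a cryptographic hash changes completely on any change at all.

```python
import hashlib

def average_hash(pixels):
    """Toy perceptual hash: one bit per pixel, thresholded at the mean.
    Real perceptual hashes (pHash, NeuralHash) are much more complex,
    but share the property that similar inputs give the same hash."""
    flat = [p for row in pixels for p in row]
    mean = sum(flat) / len(flat)
    return "".join("1" if p > mean else "0" for p in flat)

img = [[10, 200], [220, 30]]       # tiny 2x2 grayscale "image"
brighter = [[12, 202], [222, 32]]  # same image, slightly brightened

# Perceptual hashes collide on near-identical images *by design*...
assert average_hash(img) == average_hash(brighter)

# ...while cryptographic hashes differ on any change whatsoever.
assert hashlib.sha256(bytes([10, 200, 220, 30])).hexdigest() != \
       hashlib.sha256(bytes([12, 202, 222, 32])).hexdigest()
```

That tolerance for similarity is exactly what makes unintended collisions (like the ImageNet ones above) possible: two unrelated images can land on the same hash.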
Apple already does virus scanning on device and has for a decade. I fail to see how you trust one proprietary binary file scanner to stay in its lane and not spy on you, but fear a different one with a different stated purpose.
Not really - there are a ton of ways to profile and monitor a Mac to determine which processes are calling into what, and spotting something activating this code would be trivial if it were happening.
You say that, but there are also tons of ways to hide such things (there is an entire field dedicated to side-channel leakage). The point is you don't know unless you have the source code (and can compile it yourself; I'm ignoring trust in compilers for the moment). You can try to reverse engineer and analyse all you want, but that doesn't mean you know what the system is really doing...
It may be trivial to look under a given rock, but when there are hundreds of thousands of rocks, the likelihood of someone noticing something left under a rock is low.
https://www.reddit.com/r/MachineLearning/comments/p6hsoh/p_a...