raccoondad Thank you for bringing this up! I work with a chat service that uses PhotoDNA to scan user-uploaded photos. As far as I understand, because of false positives, any hit from our image scanner has to be verified by hand before a report is sent to NCMEC.
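To make that concrete, here is a very rough sketch of what a pipeline like ours looks like on the server side. The function names and the threshold are made up for illustration (PhotoDNA itself is a proprietary Microsoft service, so this is not its actual API); the point is just that a hash match only queues the upload for human review, it never files a report on its own.

```python
# Illustrative sketch only: the hash function, threshold, and queue below are
# hypothetical stand-ins, not PhotoDNA's real API.

import hashlib
from dataclasses import dataclass

HAMMING_THRESHOLD = 10  # hypothetical tolerance for "close enough" hash matches


@dataclass
class Upload:
    upload_id: str
    image_bytes: bytes


def perceptual_hash(image_bytes: bytes) -> int:
    # Placeholder so the sketch runs; a real deployment would call a perceptual
    # hashing library (PhotoDNA, PDQ, etc.) that tolerates resizing/re-encoding.
    return int.from_bytes(hashlib.sha256(image_bytes).digest()[:8], "big")


def hamming_distance(a: int, b: int) -> int:
    return bin(a ^ b).count("1")


def scan_upload(upload: Upload, known_hashes: set[int], review_queue: list[str]) -> None:
    h = perceptual_hash(upload.image_bytes)
    if any(hamming_distance(h, known) <= HAMMING_THRESHOLD for known in known_hashes):
        # A match never triggers an automatic report: because of false positives
        # it only lands in a queue for human verification, and only a confirmed
        # match is reported to NCMEC.
        review_queue.append(upload.upload_id)
```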
It makes no sense for NCMEC to push PhotoDNA onto everyone's devices for offline scanning and reporting.
NCMEC is fairly closed off, all things considered. As I said, working with them requires me to go through a separate child abuse prevention organization. I really doubt they would want millions of reports from offline devices through an API that could easily be exploited.
This is why I imagine reports only start coming in AFTER content has been uploaded to a Google service like Drive or YouTube.
I am not sure what you are trying to say here, but if you are an EU citizen you are probably familiar with the mandatory chat control provisions that legislators have repeatedly tried to add to the EU's child sexual abuse law, which would mandate AI-based client-side scanning with automatic reporting for all end-to-end encrypted communication apps. Apple also voluntarily attempted to add something similar to iOS a few years ago, which included scanning photos and videos stored locally on your device.
This is what @GrapheneOS was referring to as greatly violating people's privacy. It is very different from a website using PhotoDNA or similar. PhotoDNA is basically just a moderation tool that helps automate the moderation all websites are legally and ethically obligated to carry out anyway. PhotoDNA does not pose any threat to privacy, just as a human moderator manually reviewing all publicly uploaded content instead wouldn't either.
But governments and even individual companies are trying to push these AI-based CSAM scanners out to individual devices and start scanning content that actually is private, including content in private end-to-end encrypted chats, private end-to-end encrypted cloud storage, or even files stored locally on your device. These AI-based scanning technologies are no longer going to be used just to scan content uploaded publicly to websites. This is a huge threat to privacy, and a very real threat right now, which is why everyone is on their toes about it. Ironically, it is especially a threat to the privacy of the very children these laws and technologies are supposed to protect, who now risk having their private sexual pictures sent to some random adult at some random government agency. There is also a huge worry about where it will stop. Governments were quick to start suggesting other uses for this client-side scanning, far beyond trying to detect online child abuse.
In the end, and I think this is what the GrapheneOS account was also getting at, the problem isn't the AI-based scanning, or even that it happens on your device; the problem is the automatic reporting of illegal content. Blurring unwanted nudity in messages sent to you is a feature I think many would want to have. And Apple changed their mind and instead chose to use their AI scanning to warn children about the risks when they are about to send a naked picture of themselves to someone, so they can make a better-informed choice. These applications pose zero privacy risk and do not risk violating the rights of any group, including children's.