grayway2 I might be wrong, but the scanning for CSAM on iCloud or iPhone was abandoned and never applied.
The plan for on-device scanning (client-side scanning) on iOS (iPhones and iPads) was abandoned. But unencrypted content uploaded to iCloud, regardless of how it was uploaded, was being scanned long before the on-device plans existed, and I doubt they have stopped doing that. They don't want CSAM on their servers, and may be legally liable if they host it.
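For context, server-side scanning of this kind is usually described as perceptual-hash matching against a database of hashes of known images (Microsoft's PhotoDNA is the commonly cited system). Here is a toy sketch of the idea in Python, using the open-source pHash algorithm from the imagehash library as a stand-in for the proprietary hash; the database entry, threshold, and file path are all made up for illustration:

```python
# Toy sketch of server-side perceptual-hash matching.
# Real systems use proprietary hashes (e.g. PhotoDNA); pHash is a
# stand-in here, and the database, threshold and path are hypothetical.
from PIL import Image
import imagehash

# Hypothetical database of perceptual hashes of known illegal images.
KNOWN_HASHES = [
    imagehash.hex_to_hash("d879f8f8f0e0c0c0"),  # placeholder entry
]

MAX_DISTANCE = 5  # Hamming-distance threshold; tuning is system-specific.

def matches_known_image(path: str) -> bool:
    """Return True if the uploaded file is near a known hash."""
    h = imagehash.phash(Image.open(path))
    # imagehash overloads '-' to give the Hamming distance between hashes,
    # so near-duplicates (recompressed, resized) still match.
    return any(h - known <= MAX_DISTANCE for known in KNOWN_HASHES)

if matches_known_image("uploaded_photo.jpg"):
    print("flag upload for review")
```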
Eumenia Is this AI open source?
It doesn't sound like that is what they are planning. It sounds like they are planning an EU-provided online service that does the scanning. Existing online scanners of this kind specifically do not release their data, on the grounds that doing so would make it easier for people to avoid getting caught by the scanner. I don't know how the people behind Chat Control are reasoning, but since they were, at least originally, more interested in automatic reporting and arrests than in blocking the spread of CSAM, I assume they too want to keep the data secret and won't release it.
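To make the architectural difference concrete, with a centralized service the detection data never leaves the server: the client sends the material (or a digest of it) and gets back only a verdict. A purely hypothetical sketch of that flow, where the endpoint URL, request format, and response field are all invented, since no such EU API exists:

```python
# Hypothetical client flow for a centralized scanning service.
# The endpoint and JSON shape are invented for illustration only.
import hashlib
import requests

SCAN_ENDPOINT = "https://scanner.example.eu/v1/check"  # made-up URL

def check_attachment(data: bytes) -> bool:
    """Ask the central service whether the attachment should be blocked."""
    # A cryptographic hash only catches exact copies; a real service
    # would likely use perceptual hashes or receive the image itself.
    digest = hashlib.sha256(data).hexdigest()
    resp = requests.post(SCAN_ENDPOINT, json={"sha256": digest}, timeout=10)
    resp.raise_for_status()
    # The detection database stays on the server, so the client only
    # learns the verdict. That is exactly why outsiders can't audit
    # what the service actually matches on.
    return resp.json().get("blocked", False)
```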
Eumenia So could someone independent verify anything?
If it is a local image-hash database plus a local AI scanning model loaded onto the device, it would be trivial to audit how effective it is at detecting and blocking all kinds of CSAM, while not falsely triggering on and blocking things that may look like CSAM but aren't intended to be blocked (like vacation photos with naked children, legal porn with young-looking actors, age-play porn, loli/shota, etc.).
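As a sketch of what such an audit could look like if the model and hash list were actually published: run the detector over labeled test corpora and measure both the detection rate and the false-positive rate. The scan function and the directory layout here are assumptions for illustration, not a real interface:

```python
# Sketch of an audit harness for a published on-device scanner.
# 'scan' and the corpus directories are hypothetical stand-ins.
from pathlib import Path

def scan(path: Path) -> bool:
    """Stand-in for the real detector: True means 'block'.
    Replace with the actual published model; this version blocks nothing."""
    return False

def rate_flagged(corpus: Path) -> float:
    """Fraction of images in a labeled corpus that the scanner blocks."""
    files = sorted(corpus.glob("*.jpg"))
    if not files:
        raise ValueError(f"no test images in {corpus}")
    return sum(scan(f) for f in files) / len(files)

# Detection rate on material that should be blocked (a corpus that only
# vetted auditors, not the public, could legally hold).
print("detection rate:", rate_flagged(Path("corpus/should_block")))

# False-positive rate on lawful look-alikes: family photos, legal porn
# with young-looking actors, drawings, etc. Should be near zero.
print("false-positive rate:", rate_flagged(Path("corpus/lawful_lookalikes")))
```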
I personally doubt we will be allowed to audit that, unless they suddenly come to their senses and realize that the only way they can get mandatory blocking accepted is by preserving people's privacy.
(One can at least hope they publish information about how they trained the AI model, so we know they aren't doing something illegal, like possessing CSAM images.)