dln949 Here is my poor layman's understanding of a scenario I've imagined in the near future that worries me:
Devices like Pixels have a chip that can do AI-type analyses of what goes on within the device: analyses of pictures, videos, audio, documents, text messages, keystrokes, apps opened, sites visited, etc.
Sure! Already true. According to "Turing equivalence" (roughly), most computers can compute the same things that most other computers can compute. Anyway, the processor in every Pixel that has ever shipped can do those things.
dln949 This is all done prior to any encryption being applied.
Yes. Some people believe it will eventually be practical to do arbitrary computations on encrypted data (fully homomorphic encryption), but that's not practical now.
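For what it's worth, *limited* computation on encrypted data already works: partially homomorphic schemes like Paillier let you add two numbers while they stay encrypted. What remains impractical is fully homomorphic encryption, i.e. running arbitrary programs on ciphertext. Here's a toy Paillier sketch with deliberately tiny, insecure parameters, just to show the additive property; real deployments use 2048-bit-plus keys:

```python
# Toy Paillier cryptosystem (tiny, INSECURE parameters) showing
# additively homomorphic encryption: two numbers can be added
# while encrypted, without anyone seeing the plaintexts.
import random
from math import gcd

p, q = 293, 433           # toy primes; real keys are enormous
n = p * q                 # public modulus
n2 = n * n
lam = (p - 1) * (q - 1)   # phi(n); works as lambda with g = n + 1
g = n + 1                 # standard generator choice

def L(u):
    return (u - 1) // n

mu = pow(L(pow(g, lam, n2)), -1, n)   # precomputed decryption constant

def encrypt(m):
    r = random.randrange(1, n)
    while gcd(r, n) != 1:
        r = random.randrange(1, n)
    return (pow(g, m, n2) * pow(r, n, n2)) % n2

def decrypt(c):
    return (L(pow(c, lam, n2)) * mu) % n

# Homomorphic property: multiplying ciphertexts adds plaintexts.
a, b = 17, 25
c = (encrypt(a) * encrypt(b)) % n2
assert decrypt(c) == a + b    # 42, computed on encrypted values
```

This only supports addition (and scalar multiplication); anything like "run an AI model over encrypted photos" would need fully homomorphic encryption, which is orders of magnitude too slow for that today.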
dln949 So basically there's a record of all that is done on the device, along with the AI's evaluation of it.
That does not happen without somebody writing a lot of code and then somebody installing that code. It's not transmissible like a virus - it can't sneak into your phone.
dln949 Then, at various times this data is sent to the headquarters of our benevolent overlords.
That does not happen without somebody writing a lot of code and then somebody installing it. It's not transmissible like a virus.
dln949 Or: You wish to send a photo to someone in which you've recorded officials abusing their authority in your nation, and you use an encryption app like Signal. But, before Signal encrypts the photo, the AI has already "grabbed" it and studied it and will be capable of independently reporting it to whoever.
Indeed the E.U. wants to mandate that. Also the UK. Probably lots of politicians everywhere. Call them up and tell them you will donate to and vote for their opponents if they do. It's a political problem, not an AI problem.
dln949 Even if GrapheneOS could defeat this threat, you still don't know what kind of device the person you're communicating with is using. Perhaps on his end, his phone with an AI-capable chip will, once the encrypted photo you sent is decrypted, send that photo to the authorities, along with information about who it came from.
That can happen already. This is not an AI problem; it's an opsec problem.
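The underlying point can be made concrete: end-to-end encryption only protects data *in transit*. The plaintext necessarily exists on the sender's device before encryption and on the recipient's device after decryption, and those endpoints are exactly where a compromised OS (AI-assisted or not) would read it. A minimal sketch using a toy XOR cipher (for illustration only; Signal uses real cryptography):

```python
# Sketch: E2EE protects the wire, not the endpoints.
# Toy XOR one-time pad; the cipher's strength is irrelevant to the point.
import secrets

def xor(data: bytes, key: bytes) -> bytes:
    return bytes(d ^ k for d, k in zip(data, key))

photo = b"evidence.jpg bytes"          # plaintext exists on sender's device
key = secrets.token_bytes(len(photo))  # shared key, stands in for Signal's session keys

ciphertext = xor(photo, key)   # opaque to anyone watching the network
received = xor(ciphertext, key)  # plaintext exists again on receiver's device

assert received == photo
# A compromised OS on either endpoint can read `photo` or `received`
# directly, before encryption or after decryption.
```

That is why this is an opsec problem: you have to trust both endpoints, and no encryption app can fix an untrusted recipient device.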