• Off Topic
  • Microsoft's Recall recording everything you do

What do you guys think about Microsoft Recall?

"Recall uses Copilot+ PC advanced processing capabilities to take images of your active screen every few seconds

To make it work, Recall records everything users do on their PC, including activities in apps, communications in live meetings, and websites visited for research."

For me this is the ultimate definition of client-side scanning. Microsoft claims that all the processing will be done locally, and while that may be perfectly true, it doesn't mean anything privacy-wise.

For example:
Someone puts documents on their computer, Recall/Copilot scans them locally, and it finds something it considers harmful. It can then send that information to Microsoft, while still having done all the processing locally.
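
To make the point concrete, here is a purely hypothetical Kotlin sketch of that pattern. It is not taken from Recall or any Microsoft code, and the endpoint is made up; it just shows that "processed locally" and "reported remotely" are not mutually exclusive:

```kotlin
// Hypothetical illustration only: local classification followed by remote
// reporting. Nothing here is taken from Recall or any real vendor code.
import java.net.HttpURLConnection
import java.net.URL

data class ScanResult(val file: String, val flagged: Boolean, val label: String)

// "Local processing": an on-device classifier decides whether a file is "harmful".
// The string match stands in for whatever model the vendor actually runs.
fun classifyLocally(file: String, contents: String): ScanResult {
    val flagged = contents.contains("example-banned-term")
    return ScanResult(file, flagged, if (flagged) "policy-violation" else "ok")
}

// The privacy problem: local processing does not stop the verdict (or the data)
// from being uploaded afterwards. The URL is invented for this sketch.
fun reportToVendor(result: ScanResult) {
    if (!result.flagged) return
    val conn = URL("https://telemetry.example.com/report").openConnection() as HttpURLConnection
    conn.requestMethod = "POST"
    conn.doOutput = true
    conn.outputStream.use {
        it.write("""{"file":"${result.file}","label":"${result.label}"}""".toByteArray())
    }
    conn.inputStream.close()
}

fun main() {
    val result = classifyLocally("taxes.pdf", "document text goes here")
    reportToVendor(result) // every byte was analyzed locally, yet the result still left the device
}
```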

    25 days later

    CatPaw

    If it isn't completely under my control, I think it is awful.

    I fear this will not remain a Windows problem only. Here is my poor layman's understanding of a scenario I've imagined in the near future that worries me:

    Devices like Pixels have a chip that can do AI-type analyses of what goes on within the device: analyses of pictures, videos, audio, documents, text messages, keystrokes, apps opened, sites visited, etc. This is all done prior to any encryption being applied. So basically there's a record of all that is done on the device, along with the AI's evaluation of it. Then, at various times this data is sent to the headquarters of our benevolent overlords. Or: You wish to send someone a photo in which you've recorded officials abusing their authority in your nation, and you use an encryption app like Signal. But, before Signal encrypts the photo, the AI has already "grabbed" it and studied it and will be capable of independently reporting it to whoever.

    I admit, this is purely just from my imagination, but in the past couple weeks I've read about Microsoft Recall, and then about how the EU is likely to push forward with its chat control legislation, which will require that videos and pictures are analyzed and reported prior to being encrypted. So, I don't know, maybe my imagination isn't so wild?

    I don't have the expertise to know if what I'm describing is realistic or science fiction. Nor do I know to what degree GrapheneOS could deliver users from such a scenario. But even if GrapheneOS could defeat this threat, you still don't know what kind of device the person you're communicating with is using. Perhaps the AI-capable phone on his end, once it decrypts the photo you sent, will send that photo to the authorities along with information about who it came from.

    • de0u replied to this.

      dln949 Here is my poor layman's understanding of a scenario I've imagined in the near future that worries me:

      Devices like Pixels have a chip that can do AI-type analyses of what goes on within the device: analyses of pictures, videos, audio, documents, text messages, keystrokes, apps opened, sites visited, etc.

      Sure! Already true. According to "Turing equivalence" (roughly), most computers can compute the same things that most other computers can compute. Anyway, the processor in every Pixel that has ever shipped can do those things.

      dln949 This is all done prior to any encryption being applied.

      Yes. There are some people who believe it will eventually be practical to do computations on encrypted data (homomorphic encryption), but it's not practical now.

      dln949 So basically there's a record of all that is done on the device, along with the AI's evaluation of it.

      That does not happen without somebody writing a lot of code and then somebody installing that code. It's not transmissible like a virus - it can't sneak into your phone.

      dln949 Then, at various times this data is sent to the headquarters of our benevolent overlords.

      That does not happen without somebody writing a lot of code and then somebody installing it. It's not transmissible like a virus.

      dln949 Or: You wish to send someone a photo in which you've recorded officials abusing their authority in your nation, and you use an encryption app like Signal. But, before Signal encrypts the photo, the AI has already "grabbed" it and studied it and will be capable of independently reporting it to whoever.

      Indeed the E.U. wants to mandate that. Also the UK. Probably lots of politicians everywhere. Call them up and tell them you will donate to and vote for their opponents if they do. It's a political problem, not an AI problem.

      dln949 Even if GrapheneOS could defeat this threat, you still don't know what kind of device the person you're communicating with is using. Perhaps the AI-capable phone on his end, once it decrypts the photo you sent, will send that photo to the authorities along with information about who it came from.

      That can happen already. This is not an AI problem; it's an opsec problem.

      Said differently: you know that horror movie "Christine", about the Chevy mysteriously inhabited by an evil spirit that killed people? That's exactly how AI doesn't work in 2024. 2029, maybe? But if so it's probably not the phones that will rise up against us. While there are a lot of them, the battery life while running "AI" code is really short!

      Personally I recommend watching out for the legislators, not the AI chips.

        de0u Well, the actual chips themselves are neither good nor bad, right? But, my understanding is that the more powerful chips enable the OS to do more and new things, things the legislators might want - and those more and new things are what can be good or bad.

        So, my thinking is that we need to watch out for the legislators and the programmers - more specifically, we need to watch out for the legislators and the tech industry (which to a significant degree appear to have fused together and greatly influence each other).

        • de0u replied to this.

          dln949 My understanding is that the more powerful chips enable the OS to do more and new things, things the legislators might want - and those more and new things are what can be good or bad.

          Yes. But that is true every year, for the past 50-plus years. Before there was an Internet (I remember that), people couldn't use the Internet to snoop on people's buying habits or political thoughts.

          dln949 So, my thinking is that we need to watch out for the legislators and the programmers - more specifically, we need to watch out for the legislators and the tech industry (which to a significant degree appear to have fused together and greatly influence each other).

          What if the legislators say companies can't use AI to snoop on people and the companies do? It will at least be painful. But what if the legislators say that every company must snoop?

            de0u

            "What if the legislators say companies can't use AI to snoop on companies and the companies do? It will at least be painful. But what if the legislators say that every company must snoop?"

            Yep, understand your point, very valid. In the case you propose the legislators are the main problem.

            I put to you a different case: What if many in the tech industry say to the legislators, Hey, look at this power we can share with you, see all you can do with it.... and all we ask for is appropriate "compensation" and "considerations" and "privileges" and "participation" as your partner.

            • de0u replied to this.
              • [deleted]

              CatPaw

              In the factory Google Pixel firmware, Google Assistant has access to both the screen as an image and the screen as text. This is turned on by default.
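
              Roughly, this is the plumbing involved: when you invoke the selected assistant, the OS can hand it both the foreground app's view hierarchy (the screen as text) and a screenshot (the screen as an image). Below is a minimal Kotlin sketch of the public VoiceInteractionSession hooks an assistant app implements; it is not Google's actual code, just an illustration of what those callbacks deliver.

              ```kotlin
              // Sketch of the hooks the user-selected assistant app receives.
              // Not Google's code; it only shows what the public API hands over.
              import android.app.assist.AssistStructure
              import android.content.Context
              import android.graphics.Bitmap
              import android.service.voice.VoiceInteractionSession
              import android.util.Log

              class ExampleAssistSession(context: Context) : VoiceInteractionSession(context) {

                  // The foreground activity's view hierarchy: "the screen as text".
                  override fun onHandleAssist(state: AssistState) {
                      val structure: AssistStructure = state.assistStructure ?: return
                      for (i in 0 until structure.windowNodeCount) {
                          dumpText(structure.getWindowNodeAt(i).rootViewNode)
                      }
                  }

                  // A bitmap of the foreground activity: "the screen as an image".
                  override fun onHandleScreenshot(screenshot: Bitmap?) {
                      Log.d("Assist", "screenshot: ${screenshot?.width}x${screenshot?.height}")
                  }

                  private fun dumpText(node: AssistStructure.ViewNode) {
                      node.text?.let { Log.d("Assist", "visible text: $it") }
                      for (i in 0 until node.childCount) dumpText(node.getChildAt(i))
                  }
              }
              ```

              As far as I know, windows marked with FLAG_SECURE are excluded from the assist screenshot, and the "Use screen context" toggle in Assistant settings controls whether this is used, but the plumbing itself ships in the stock firmware.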

              • zzz replied to this.

                [deleted]
                Do you have a good link/source with more info?
                I'd love to read the documentation.

                  dln949 What if many in the tech industry say to the legislators, Hey, look at this power we can share with you, see all you can do with it.... and all we ask for is appropriate "compensation" and "considerations" and "privileges" and "participation" as your partner.

                  That could happen. What options are there?

                  1. Call up politicians and tell them you will donate to and vote for their opponents if they do.
                  2. Call up companies and ask them not to make AI chips?
                  3. Call up companies and ask them not to ask politicians to mandate snooping?
                  4. ?

                  A separate app with no net connection, perhaps in its own profile, could encrypt the file before sending it from another profile using an appropriate app.
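
                  For example, here is a minimal Kotlin sketch of that "encrypt first, then hand the ciphertext to whatever messenger you like" step, using only standard JDK crypto. The file names, passphrase, and KDF parameters are placeholders, not a vetted design.

                  ```kotlin
                  // Encrypt a file locally (e.g. in an offline profile) before sharing
                  // the resulting .enc file through any other app or profile.
                  import java.io.File
                  import java.security.SecureRandom
                  import javax.crypto.Cipher
                  import javax.crypto.SecretKeyFactory
                  import javax.crypto.spec.GCMParameterSpec
                  import javax.crypto.spec.PBEKeySpec
                  import javax.crypto.spec.SecretKeySpec

                  fun encryptFile(input: File, output: File, password: CharArray) {
                      val random = SecureRandom()
                      val salt = ByteArray(16).also(random::nextBytes)
                      val iv = ByteArray(12).also(random::nextBytes)

                      // Derive an AES-256 key from the passphrase (PBKDF2-HMAC-SHA256).
                      val keyBytes = SecretKeyFactory.getInstance("PBKDF2WithHmacSHA256")
                          .generateSecret(PBEKeySpec(password, salt, 200_000, 256)).encoded
                      val key = SecretKeySpec(keyBytes, "AES")

                      // AES-GCM provides confidentiality plus integrity.
                      val cipher = Cipher.getInstance("AES/GCM/NoPadding")
                      cipher.init(Cipher.ENCRYPT_MODE, key, GCMParameterSpec(128, iv))

                      // Store salt + IV + ciphertext together so the recipient can decrypt.
                      output.writeBytes(salt + iv + cipher.doFinal(input.readBytes()))
                  }

                  fun main() {
                      // Placeholder paths and passphrase for illustration only.
                      encryptFile(File("photo.jpg"), File("photo.jpg.enc"), "long passphrase".toCharArray())
                  }
                  ```

                  The recipient still needs the passphrase (shared over some other channel) and the same KDF parameters to decrypt, and of course their device could still leak the plaintext after decryption, which is the opsec point made above.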

                  • [deleted]

                  What is a 'Recall' anyway? Right now it just sounds terrible. Glad that I don't use Windows as my main OS.

                  CatPaw
                  I would rather be poked in the undercarriage with a sharp stick than submit to this Orwellian nightmare. Even assuming the best-case scenario, where Microsoft has your best interests at heart (lol), never accesses this information, and protects it properly: is the juice really worth the squeeze, given the potential for disaster? I think not.