• Announcements
  • Claims made by forensics companies, their capabilities, and how GrapheneOS fares

  • [deleted]

GrapheneOS The OS implements the lockscreen, and no secure element exploit is needed for data that persists in the OS while locked to be obtained via an OS exploit.

I agree that it's conceivable that an AP exploit by itself enables this. However, on iOS 15-15.8.x, A11-A13 have no IPR while on the same exact OS version, A14-A15 do. Perhaps it's more of an engineering constraint than a technical limitation.

    [deleted]

    I'm talking specifically about iOS here. iOS, in contrast to Android, has NSFileProtectionComplete: "Shortly after the user locks a device (10 seconds, if the Require Password setting is Immediately), the decrypted class key is discarded, rendering all data in this class inaccessible until the user enters the passcode again or unlocks (logs in to) the device using Face ID or Touch ID." https://support.apple.com/guide/security/data-protection-classes-secb010e978a/web

    Android currently provides this feature via the keystore API. It is possible for apps to keep data at rest while the device is locked, just like iOS, and if they use StrongBox it's based on the secure element. Whether the keystore keeps keys marked as requiring user authentication and an unlocked device at rest rather than just disallowing usage is a quality of implementation issue specific to devices which we don't know much about at this point.
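
    For concreteness, here's a minimal sketch of what that keystore usage could look like from an app, assuming API 28+ and a device with a secure element; the key alias is just an example name:

    ```kotlin
    import android.security.keystore.KeyGenParameterSpec
    import android.security.keystore.KeyProperties
    import javax.crypto.KeyGenerator

    // Sketch: an AES key in the hardware keystore that cannot be used while
    // the device is locked. The alias is hypothetical.
    fun generateLockedAtRestKey() {
        val spec = KeyGenParameterSpec.Builder(
            "example_db_key",
            KeyProperties.PURPOSE_ENCRYPT or KeyProperties.PURPOSE_DECRYPT
        )
            .setBlockModes(KeyProperties.BLOCK_MODE_GCM)
            .setEncryptionPaddings(KeyProperties.ENCRYPTION_PADDING_NONE)
            // Key is unusable while the device is locked (API 28+).
            .setUnlockedDeviceRequired(true)
            // Back the key with the secure element (StrongBox, API 28+).
            .setIsStrongBoxBacked(true)
            .build()
        KeyGenerator.getInstance(KeyProperties.KEY_ALGORITHM_AES, "AndroidKeyStore")
            .apply { init(spec) }
            .generateKey()
    }
    ```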

    Android is adding a similar data class for data that's kept at rest while locked, to avoid apps needing to use another layer of encryption via the keystore as they currently do. This DOES NOT mean that more iOS apps keep data at rest while locked. Signal doesn't do it on either Android or iOS, but Molly on Android, in addition to its app passphrase, uses the StrongBox keystore with a key requiring authentication and an unlocked device. Apps could transparently use the keystore that way without an app passphrase too.

    What you're referring to is an AFU extraction. The difference between an AFU extraction and an FFS extraction is that data protected by NSFileProtectionComplete, such as keychain data, is not available in an AFU extraction on iOS. On Android, AFU and FFS are the same due to the lack of NSFileProtectionComplete-style class keys, and no secure element exploit is needed for an AFU or FFS extraction.

    That's not correct. Android supports keeping data at rest while locked. Those are not the same thing. Neither OS has much data kept at rest while locked in practice. iOS makes this easier for app developers, but easier does not mean more apps actually use it. Molly and several Android TOTP apps are counterexamples to that assumption.

    Furthermore, even with Secure Enclave code execution, the plaintext passcode shouldn't be recoverable if Apple had implemented it correctly, because only the passcode verifier value, i.e. a hash of (passcode + salt), is saved by the Secure Storage Component. However, Cellebrite specifically claims they can get the iPhone plaintext passcode, suggesting the Secure Enclave is not clearing the passcode from RAM properly after user input. Please see Cellebrite's explanation of IPR (attachment: IPR.jpg).
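
    As a toy illustration of the verifier-only design described above (not Apple's actual Secure Storage Component code; names are invented), only a salted hash is ever persisted, so the plaintext can only leak if it lingers in RAM:

    ```kotlin
    import java.security.MessageDigest
    import java.security.SecureRandom

    // Toy verifier-only storage: persist hash(salt || passcode), never the
    // passcode itself. Compromising the store enables offline guessing at
    // most; recovering the plaintext requires it to linger in RAM.
    class PasscodeVerifier {
        private val salt = ByteArray(16).also { SecureRandom().nextBytes(it) }
        private var storedVerifier: ByteArray? = null

        private fun verifierOf(passcode: CharArray): ByteArray {
            val md = MessageDigest.getInstance("SHA-256")
            md.update(salt)
            passcode.forEach { md.update(it.code.toByte()) }
            return md.digest()
        }

        fun enroll(passcode: CharArray) {
            storedVerifier = verifierOf(passcode)
            passcode.fill('\u0000') // wipe the plaintext copy immediately after use
        }

        fun check(attempt: CharArray): Boolean {
            val ok = verifierOf(attempt).contentEquals(storedVerifier)
            attempt.fill('\u0000')
            return ok
        }
    }
    ```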

    The info you're quoting isn't correct since it ignores the hardware keystore on Android. Not everything they claim is correct.

    Please forgive my ignorance about how key derivation works on Android. All iOS data protection key derivation happens in the Secure Enclave. You're saying that on Android, the first part of key derivation happens on the AP, and then involves the TEE and the Titan M2?

    Android does the first part of key derivation in the OS via scrypt. It derives keys with a simple personalized hash approach for each purpose from the scrypt key derivation, which is then passed along to the different use cases. One of those is the Weaver feature implemented by the secure element. Another is the initial authentication of the Owner user with the secure element which authorizes signed updates with a newer version of secure element firmware. Another is unlocking the keystore.
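
    A rough sketch of that layered structure, using the Bouncy Castle scrypt implementation for illustration; the cost parameters and personalization labels here are made up, not the exact AOSP values:

    ```kotlin
    import org.bouncycastle.crypto.generators.SCrypt
    import java.security.MessageDigest

    // Sketch of the layered derivation described above. Parameters and
    // personalization labels are illustrative, not the actual AOSP ones.
    fun deriveCredentialKeys(password: ByteArray, salt: ByteArray): Map<String, ByteArray> {
        // First stage: memory-hard scrypt in the OS.
        val root = SCrypt.generate(password, salt, /* N = */ 16384, /* r = */ 8, /* p = */ 1, /* dkLen = */ 32)

        // Second stage: a simple personalized hash per consumer of the credential.
        fun personalized(label: String): ByteArray {
            val md = MessageDigest.getInstance("SHA-512")
            md.update(label.toByteArray(Charsets.US_ASCII))
            md.update(root)
            return md.digest()
        }

        return mapOf(
            "weaver-key" to personalized("weaver-key"),          // sent to the secure element's Weaver slot
            "se-auth" to personalized("secure-element-auth"),    // Owner authentication with the secure element
            "keystore-unlock" to personalized("keystore-unlock") // unlocking the hardware keystore
        )
    }
    ```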

    The Weaver token is how Android implements time-based throttling via the secure element. The secure element has no direct involvement in key derivation. It's not super high performance hardware and is a poor place to do that for protecting against attackers able to bypass all the hardware-based security features.
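
    A toy model of the Weaver slot semantics to make the throttling role concrete; the backoff schedule here is invented, as real firmware defines its own:

    ```kotlin
    // Toy model: the secure element stores (key, value) pairs and only
    // releases the value for a matching key, enforcing increasing delays
    // between failed attempts. Not real Weaver firmware.
    class WeaverSlot {
        private var key: ByteArray? = null
        private var value: ByteArray? = null
        private var failedAttempts = 0
        private var lockedUntilMs = 0L

        fun write(newKey: ByteArray, newValue: ByteArray) {
            key = newKey.copyOf()
            value = newValue.copyOf()
            failedAttempts = 0
        }

        fun read(attemptKey: ByteArray, nowMs: Long): ByteArray? {
            if (nowMs < lockedUntilMs) return null // still throttled
            return if (key != null && attemptKey.contentEquals(key)) {
                failedAttempts = 0
                value?.copyOf()
            } else {
                failedAttempts++
                // Example schedule: exponential backoff after repeated failures.
                lockedUntilMs = nowMs + (1000L shl minOf(failedAttempts, 16))
                null
            }
        }
    }
    ```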

    The final key derivation is done in the Trusted Execution Environment (TEE) on the main SoC, which means in TrustZone. It uses an SoC cryptographic acceleration feature providing hardware-bound key derivation as one of the features. The hardware-bound aspect means that exploiting the TEE shouldn't provide any way either direct or indirect via side channels to get access to the key used in the hardware-bound key derivation algorithm. On Snapdragon, the Qualcomm Crypto Engine provides these features and is meant to be used in the TEE applet for this.
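
    Abstractly, the hardware-bound step can be pictured as mixing a device secret that software can never read into the derivation. Modeling it as HMAC below is purely illustrative; the actual algorithm used by the Qualcomm Crypto Engine or Tensor isn't public in this form:

    ```kotlin
    import javax.crypto.Mac
    import javax.crypto.spec.SecretKeySpec

    // Abstract model of hardware-bound key derivation: the final key mixes in
    // a device secret held by the crypto engine. In real hardware the secret
    // is not readable by software at all, even by the TEE applet invoking it.
    fun hardwareBoundDerive(hardwareSecret: ByteArray, intermediateKey: ByteArray): ByteArray {
        val mac = Mac.getInstance("HmacSHA256")
        mac.init(SecretKeySpec(hardwareSecret, "HmacSHA256"))
        return mac.doFinal(intermediateKey)
    }
    ```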

    On Pixels and most modern Snapdragon devices, the OS does not receive the decrypted disk encryption keys but rather handles for using them. This is called the wrapped key feature. It prevents a kernel information leak, etc. from leaking the encryption keys. They're also presumably not meant to be available via a memory dump. We haven't verified this, but the information from XRY, Cellebrite, etc. appears to indicate it works properly. Otherwise, XRY would not have had to use a leftover hash or something like that in memory to brute force the lock method. They did not appear capable of doing this with GrapheneOS, but further work has been done on GrapheneOS to rule out other ways this could happen: it runs a full compacting GC, which zeroes the old heap, for SystemUI / system_server when the screen is locked, rather than only the standard approach of doing it a bit after unlocking to wipe leftovers of the lock method. We also made some other changes and began auditing this.
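
    At the app level, the analogous hygiene is wiping cached key material as soon as the screen locks rather than waiting for garbage collection. A sketch, assuming a hypothetical in-memory copy of a key:

    ```kotlin
    import android.content.BroadcastReceiver
    import android.content.Context
    import android.content.Intent
    import android.content.IntentFilter

    // App-level analog of the hygiene described above: zero cached key
    // material as soon as the device locks. `cachedDbKey` is a hypothetical
    // in-memory copy of sensitive material.
    class KeyWipeReceiver(private val cachedDbKey: ByteArray) : BroadcastReceiver() {
        override fun onReceive(context: Context, intent: Intent) {
            if (intent.action == Intent.ACTION_SCREEN_OFF) {
                cachedDbKey.fill(0) // zero the plaintext copy immediately
            }
        }
    }

    // ACTION_SCREEN_OFF is a protected system broadcast and must be
    // registered dynamically rather than in the manifest.
    fun register(context: Context, key: ByteArray) {
        context.registerReceiver(KeyWipeReceiver(key), IntentFilter(Intent.ACTION_SCREEN_OFF))
    }
    ```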

    You're right. Cellebrite claimed they can do FFS on the Pixel 8. They must have at least gained code execution on the AP in AFU mode and used the AP to communicate with the Titan M2 to decrypt data, if not code execution on the Titan M2 itself, correct?

    FFS is when they already have the user's lock method, so all it has to involve is exploiting the device from the ADB shell after using the lock method to enable developer options, enabling ADB and authorizing their access. It's not any kind of fancy exploitation.

    Thank you for informing me that Android is already implementing something similar.

    The Android hardware keystore is what to look into if you want to know more. There are 2 hardware keystores on Pixels: the traditional TEE (TrustZone) keystore, which encrypts data and stores it in regular storage, and the StrongBox keystore provided through the secure element since the Pixel 3. The Pixel 2 had a secure element with insider attack protection (Owner authentication needed to update firmware via valid signed updates with a newer version) and Weaver (covered in https://grapheneos.org/faq#encryption) but predates StrongBox being available.
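
    A sketch of how an app can prefer the StrongBox keystore and fall back to the TEE keystore on devices without a secure element; the alias is an example name:

    ```kotlin
    import android.security.keystore.KeyGenParameterSpec
    import android.security.keystore.KeyProperties
    import android.security.keystore.StrongBoxUnavailableException
    import javax.crypto.KeyGenerator
    import javax.crypto.SecretKey

    // Prefer the secure-element-backed StrongBox keystore; fall back to the
    // TrustZone TEE keystore where no secure element is present.
    fun generateKeyPreferStrongBox(alias: String): SecretKey {
        fun generate(strongBox: Boolean): SecretKey {
            val spec = KeyGenParameterSpec.Builder(
                alias, KeyProperties.PURPOSE_ENCRYPT or KeyProperties.PURPOSE_DECRYPT
            )
                .setBlockModes(KeyProperties.BLOCK_MODE_GCM)
                .setEncryptionPaddings(KeyProperties.ENCRYPTION_PADDING_NONE)
                .setIsStrongBoxBacked(strongBox)
                .build()
            return KeyGenerator.getInstance(KeyProperties.KEY_ALGORITHM_AES, "AndroidKeyStore")
                .apply { init(spec) }
                .generateKey()
        }
        return try {
            generate(strongBox = true)   // secure element (e.g. Titan M2)
        } catch (e: StrongBoxUnavailableException) {
            generate(strongBox = false)  // traditional TEE keystore
        }
    }
    ```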

    According to you, even without setting any additional database password in Molly, an AFU extraction will not yield the plaintext database? https://github.com/mollyim/mollyim-android/wiki/Data-Encryption-At-Rest doesn't say anything about this. Possible for @Nuttso to clarify?

    It uses the StrongBox keystore and should be setting the key as requiring an unlocked device. You can check the code to see if it does that.

    My question is: if the Secure Enclave / Titan M2 is totally compromised with full code execution, and a brute force attack is performed on the device, how long does it take to brute force a six-character alphanumeric passcode with lowercase letters and numbers?

    On Android, this is based on the combination of the standard scrypt key derivation which can be offloaded elsewhere without an exploit and the final hardware-bound key derivation done within the TEE (TrustZone). There cannot be a specific answer for how long it takes and an attacker could extract the key from the hardware to offload the hardware-bound key derivation part. The amount of time spent on hardware-bound key derivation varies by device and we don't know specifically how long it's configured to take in each Pixel generation.
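
    A back-of-the-envelope calculation only, borrowing Apple's published 80 ms Secure Enclave figure as a stand-in per-guess cost, since the actual Pixel TEE timing isn't public:

    ```kotlin
    // Worst-case and average brute force time for a 6-character lowercase
    // alphanumeric passcode under an assumed fixed per-guess KDF cost.
    fun main() {
        val alphabet = 26 + 10                  // lowercase letters + digits
        val length = 6
        var keyspace = 1.0
        repeat(length) { keyspace *= alphabet } // 36^6 = 2,176,782,336 combinations

        val secondsPerGuess = 0.080             // assumed hardware-bound KDF cost
        val worstCaseYears = keyspace * secondsPerGuess / (3600 * 24 * 365)
        println("worst case: %.1f years, average: %.1f years".format(worstCaseYears, worstCaseYears / 2))
        // Prints roughly: worst case: 5.5 years, average: 2.8 years
    }
    ```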

    How long does it take on a Pixel phone with the stock OS and with GrapheneOS, respectively?

    It's currently the same and we can only increase the scrypt part. The hardware-bound key derivation part is in the TEE and we can't change that firmware beyond requesting improvements as we've done successfully in several areas.

    Keep in mind that the delay mentioned in the FAQ, reproduced below, is all bypassed due to code execution on the Titan M2.

    The FAQ explains that there's scrypt key derivation in the OS and then at the end there's hardware-bound key derivation in the TEE similar to the secure enclave key derivation on iOS. The details of the hardware-bound part vary by device and are drastically different on Snapdragon vs. Tensor along with evolving significantly over time on Snapdragon. We know more about how it works on Snapdragon than Tensor.

      [deleted]

      I agree that it's conceivable that an AP exploit by itself enables this. However, on iOS 15-15.8.x, A11-A13 have no IPR while on the same exact OS version, A14-A15 do. Perhaps it's more of an engineering constraint than a technical limitation.

      As can be seen from the Android table, they have limited resources to deal with older generations. That also explains why they haven't developed an exploit for a several-year-old Titan M2 firmware version. They want something working against current and future versions, not only increasingly irrelevant past versions. If they only had to develop an exploit against the earliest Titan M2 firmware, before major improvements happened, it wouldn't be nearly as difficult.

      [deleted] According to you, even without setting any additional database password in Molly, an AFU extraction will not yield the plaintext database? https://github.com/mollyim/mollyim-android/wiki/Data-Encryption-At-Rest doesn't say anything about this. Possible for @Nuttso to clarify?

      GrapheneOS It uses the StrongBox keystore and should be setting the key as requiring an unlocked device. You can check the code to see if it does that.

      Molly doesn't enforce the unlock state for its keystore key. Otherwise Molly wouldn't be able to start in the background when the phone is locked and wouldn't be able to show notifications. Same with Signal.

      When the database isn't protected by a passphrase, Molly should be able to open its database even if the device is locked.

      So it can't set up its keystore key with the authentication protection (aka require the unlock state).

      With a phone in the AFU state, screen locked, app unlocked, Molly and Signal are in the same situation. There's one difference that makes some exploits work in Signal's case but not Molly's: Molly enforces the StrongBox keystore.

      They'd need the phone in the AFU state and to exploit some vulnerability, typically an LPE.

      The perfect scenario is one where an app does its encryption work once the user is authenticated (device is unlocked). But for messaging apps that need to show notifications in the background while the device is locked, it's not that simple.
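
      One way to picture the compromise is a two-key split: a background-usable key for queuing incoming messages plus an unlock-required key for the main database. This sketch is illustrative only, not Molly's or Signal's actual design; aliases and helpers are invented:

      ```kotlin
      import java.security.KeyStore
      import javax.crypto.Cipher
      import javax.crypto.SecretKey

      // Assumes two keys were generated earlier: "pending_messages_key" without
      // setUnlockedDeviceRequired, and "main_database_key" with it set to true.
      private val keyStore = KeyStore.getInstance("AndroidKeyStore").apply { load(null) }

      private fun encryptWith(alias: String, plaintext: ByteArray): Pair<ByteArray, ByteArray> {
          val key = keyStore.getKey(alias, null) as SecretKey
          val cipher = Cipher.getInstance("AES/GCM/NoPadding")
          cipher.init(Cipher.ENCRYPT_MODE, key) // keystore picks a random IV
          return cipher.iv to cipher.doFinal(plaintext)
      }

      fun onPushWhileLocked(message: ByteArray) {
          // Works while locked: this key has no unlock requirement.
          val (iv, ciphertext) = encryptWith("pending_messages_key", message)
          appendToPendingStore(iv, ciphertext) // hypothetical persistence helper
      }

      fun onDeviceUnlocked() {
          // Only now is "main_database_key" usable: drain the pending store and
          // re-encrypt its contents into the main database (details elided).
      }

      private fun appendToPendingStore(iv: ByteArray, ciphertext: ByteArray) { /* elided */ }
      ```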

        • [deleted]

        GrapheneOS Signal doesn't do it on either Android or iOS, but Molly on Android, in addition to its app passphrase, uses the StrongBox keystore with a key requiring authentication and an unlocked device. Apps could transparently use the keystore that way without an app passphrase too.

        That's incorrect regarding Signal on iOS. The Signal chat database on iOS uses NSFileProtectionComplete. Only FFS yields the Signal database on iOS, not AFU extraction.

        [this is not correct, it doesn't keep data at rest after first unlock on either Android or iOS even though it could]

        File system: Signal database is encrypted. The encryption key is stored in the keychain with the highest protection class. The only way to extract Signal conversations requires extracting the file system images and decrypting the keychain.
        https://blog.elcomsoft.com/2020/04/forensic-guide-to-imessage-whatsapp-telegram-signal-and-skype-data-acquisition/

        GrapheneOS That's not correct. Android supports keeping data at rest while locked.

        GrapheneOS The info you're quoting isn't correct since it ignores the hardware keystore on Android. Not everything they claim is correct.

        I agree. They're probably only talking about the most common situations.

        GrapheneOS Android does the first part of key derivation in the OS via scrypt. It derives keys with a simple personalized hash approach for each purpose from the scrypt key derivation, which is then passed along to the different use cases. One of those is the Weaver feature implemented by the secure element. Another is the initial authentication of the Owner user with the secure element which authorizes signed updates with a newer version of secure element firmware. Another is unlocking the keystore.

        The Weaver token is how Android implements time-based throttling via the secure element. The secure element has no direct involvement in key derivation. It's not super high performance hardware and is a poor place to do that for protecting against attackers able to bypass all the hardware-based security features.

        The final key derivation is done in the Trusted Execution Environment (TEE) on the main SoC, which means in TrustZone. It uses an SoC cryptographic acceleration feature providing hardware-bound key derivation as one of the features. The hardware-bound aspect means that exploiting the TEE shouldn't provide any way either direct or indirect via side channels to get access to the key used in the hardware-bound key derivation algorithm. On Snapdragon, the Qualcomm Crypto Engine provides these features and is meant to be used in the TEE applet for this.

        Thank you for the thorough explanation. Could you point me to more resources to learn more about how the Weaver token does time-based throttling or how hardware-bound key derivation works in the TEE on Pixels?
        Apple has a compiled page on how all the key derivation works in different subsystems: https://support.apple.com/guide/security/secure-enclave-sec59b0b31ff/1/web/1

        GrapheneOS The amount of time spent on hardware-bound key derivation varies by device and we don't know specifically how long it's configured to take in each Pixel generation.

        GrapheneOS The hardware-bound key derivation part is in the TEE and we can't change that firmware beyond requesting improvements as we've done successfully in several areas.

        I feel this is a critical last line of defense, and it is the only timing delay enforced by cryptography rather than code integrity, albeit on processors with a smaller attack surface. The Supersonic BF on iPhone runs at almost the same speed as the 80ms theoretical hardware key derivation time quoted by Apple. This means all the other Secure Enclave based mitigations have been bypassed, and the only line of defense against a truly unlimited-speed brute force is this cryptographically enforced iteration count. Perhaps the @GrapheneOS team can ask the Pixel team to publish this data and increase the timing delay to at least Apple's 80ms standard, which has negligible user impact.

          • [deleted]

          Nuttso Molly doesn't enforce the unlock state for its keystore key.

          On iOS, the Signal chat database uses NSFileProtectionComplete. Only FFS yields the Signal database on iOS, not AFU extraction.

          [this is not correct, it doesn't keep data at rest after first unlock on either Android or iOS even though it could]

          File system: Signal database is encrypted. The encryption key is stored in the keychain with the highest protection class. The only way to extract Signal conversations requires extracting the file system images and decrypting the keychain.
          https://blog.elcomsoft.com/2020/04/forensic-guide-to-imessage-whatsapp-telegram-signal-and-skype-data-acquisition/

          [this is not correct, it doesn't keep data at rest after first unlock on either Android or iOS even though it could]

          Nuttso Otherwise Molly wouldn't be able to start in the background when the phone is locked and wouldn't be able to show notifications. Same with Signal.

          Why wouldn't notifications work? The push token can be saved with NSFileProtectionCompleteUntilFirstUserAuthentication while the main database key is saved with NSFileProtectionComplete.

          To show a preview of the notification, however, I speculate that some ephemeral private key could be saved temporarily with NSFileProtectionCompleteUntilFirstUserAuthentication and incoming messages saved to a separate pending database. But to only notify the user that a new message has been received, without any preview, almost everything can be secured with NSFileProtectionComplete.

          Nuttso So it can't set up its keystore key with the authentication protection (aka require the unlock state).

          But Signal on iOS works exactly like this. See above.

          [this is not correct, it doesn't keep data at rest after first unlock on either Android or iOS even though it could]

            If a strong passphrase is used for the Owner profile, a 6-digit PIN is used for a secondary user profile, and the device is in BFU mode, would the secondary user profile remain secure as time goes by and new exploits against the secure element and the OS are developed?

            • de0u replied to this.

              evalda I do not recall where I saw this (I believe I did!), but I think the situation is:

              1. At present the only way to use the phone to access the secondary profile is to unlock the owner profile and then switch to the secondary profile, but
              2. If somebody disassembles the device (etc.), the secondary profile's storage is encrypted with keys that are not derived from the owner profile's PIN/passphrase.

              So I think if it is desired for Profile X's storage to be highly secure then Profile X needs a long random passphrase.

              If I'm wrong, I'm sure I'll be corrected!

                de0u Thank you for your reply!

                Reading https://grapheneos.org/faq#encryption

                The owner profile is special and is used to store sensitive system-wide operating system data. This is why the owner profile needs to be logged in after a reboot before other user profiles can be used.

                I was hoping that some key material for secondary user profiles would be stored in the Owner profile and encrypted with the Owner passphrase. I guess that is not really the case; it would be good to get an official confirmation from the GrapheneOS team.

                Suppose a secondary profile's encryption key is based solely on 1) the user's lock method (a 6-digit PIN in my example) and 2) a secret in the secure element that it won't release unless supplied with the correct 1) or compromised. If this is how it works and no further secret from the Owner profile is needed, it does sound like a compromised secure element would lead to brute-forcing the PIN and decrypting the secondary profile data. Now the question is whether the device has to be disassembled to perform this attack?

                • de0u replied to this.

                  evalda Now the question is whether device has to be disassembled to perform this attack?

                  I don't believe so (which is why I wrote "etc."). Compromising hardware security via an exploit is also theoretically possible.

                  evalda I was hoping that some key material for secondary user profiles would be stored in the Owner profile and encrypted with the Owner passphrase. I guess that is not really the case, would be good to get an official confirmation from GrapheneOS team.

                  Ok, I went and searched harder.

                  @evalda, I believe you received an official answer to this question, or to a very similar question, a year ago: https://discuss.grapheneos.org/d/5274-is-second-user-profile-encrypted-also-by-first-user/9 That is, I believe the answer you received then, to the two-part question you posed, works out to "Both #1 and #2 are false".

                  If a question still remains in light of that answer, might it be possible to phrase it differently so it is clear which part(s) are not yet addressed?

                    de0u I believe you received an official answer to this question, or to a very similar question

                    Thank you for digging it up! I do remember that discussion, but the rationale given there included:

                    While it is true that currently Owner has to be unlocked before attempts on secondary user profiles can be made, it isn't out of the question for AOSP to change that behavior in the future if they regard it as a limitation, which they likely do (from a UX standpoint and considering the fact that multiple users are meant to be used by individual people, having to have the owner present before you can unlock your own profile after a reboot isn't great).

                    It's understood that it's safer not to rely on the Owner profile for protecting data on secondary profiles. But I am curious whether the Owner passphrase adds any additional protection to secondary profiles as of now, not taking into account any possible future change in AOSP.

                    For context, the reason I am asking it is remembering a second (or more) random passphrases for secondary user profiles is a lot of cognitive load for an older lady like me lol.

                    • de0u replied to this.

                      evalda It's understood that it's safer not to rely on the Owner profile for protecting data on secondary profiles. But I am curious whether the Owner passphrase adds any additional protection to secondary profiles as of now, not taking into account any possible future change in AOSP.

                      Based on the previous official statement, I believe that as of now in order to unlock a secondary profile's data it is necessary to first unlock the owner profile or else to brute-force the secondary PIN/passphrase given an image of the storage, and/or to compromise some part of the hardware security.

                      But I think it's pretty clear that as of now a strong owner passphrase does not increase the strength of a weak secondary-profile PIN if one assumes a well-resourced attacker who has an exploit or is willing to disassemble a device. Thus I believe the answer to your core question is "no".

                      Some people might wish it were "yes" (e.g., you) and some people are glad it's "no" (people who hope someday there will be a way to boot straight to a non-admin profile, perhaps for a child). But it does appear that the present answer is "no".

                      evalda For context, the reason I am asking it is remembering a second (or more) random passphrases for secondary user profiles is a lot of cognitive load for an older lady like me lol.

                      How necessary this might be depends on one's goals and threat model. If one has genuinely important data (perhaps multiple banking apps) in a secondary profile, a strong key may be necessary. How strong depends on one's presumed attackers and how much it's worth to keep the secondary profile secure. If the secondary profile has access to just one bank account containing one month's shopping money, auto-filled monthly by a different bank, maybe a medium-length PIN is enough.

                      evalda Or, TL;DR: if one assumes the attacker is weak (restricted to treating the device as a black box and interacting with it via the regular login windows), then as of now a strong owner passphrase probably provides substantial protection to a secondary profile in the BFU state even if the secondary profile credential is weak. But if one assumes a well-resourced attacker, a strong owner passphrase provides close to zero added protection for a weak secondary-profile PIN.

                      Since you have mentioned scenarios involving the secure element being compromised, that sounds like the "well-resourced attacker" part of the space, thus I think the "close to zero added protection" answer applies.

                      @de0u Thank you for your thorough responses, it all makes sense 🌷

                      For Pixel 6-8 in Table 3, AFU support says FFS YES as well as BF NO. Does this table mean that even if I don't provide the unlock code to the law enforcement agency, if the device is in the AFU locked state, they can obtain and analyse the complete system image through Cellebrite and extract the application information from it?

                        taiyi Not after the 2022 SPL, meaning all Pixels since the Pixel 6 are not affected even in the AFU state. This is only accurate as of the time of their report (April 2024).

                          evalda

                          According to the dev team, GrapheneOS is a modification built on top of AOSP, and if the original system can be breached, I don't think a modification built on top of the original system is safe either.