Claims made by forensics companies, their capabilities, and how GrapheneOS fares

Turtle12345 Look at the iPhone/iPad tables from the screenshots of the documentation we've provided:

https://grapheneos.social/system/media_attachments/files/112/462/760/076/651/069/original/abb6bfdb2d3cbc6a.png

It's only the latest device generation and OS versions which aren't fully supported yet.

17.4 was released in March and this documentation is from April, meaning it doesn't reflect improvements they made in April and May. Less than a month is not enough time to conclude that they have any major issues with 17.4. You can see that they have support for earlier releases shipping soon for everything but the iPhone 15, which has been out long enough to give the impression that it must be at least somewhat harder for them to deal with, or that it simply changed a lot of things they need to adapt to.

  • [deleted]

GrapheneOS
I'd like to point out that Cellebrite has been able to extract the plaintext iPhone passcode (Instant Passcode Retrieval) from AFU iPhones since the A11.

It means that:

  1. Secure Enclave RAM is not clearing the iPhone passcode properly after user input. Theoretically, as soon as the Secure Enclave derives the KEK, the plaintext iPhone passcode should be wiped from Secure Enclave RAM immediately, rendering IPR cryptographically impossible. Can someone file a security report with Apple security?
  2. Cellebrite can dump Secure Enclave RAM since A11.

Titan M2 on Pixel seems to prevent such code execution or memory dumps for now. However, can we be sure that the Pixel passcode is cleared from Titan M2 RAM after authentication?

Furthermore, the Android security model exposes all cryptographic keys in AP RAM even in AFU mode. This renders a memory dump of Titan M2 RAM unnecessary, because an AP RAM dump is enough to get all user data. Is it possible to implement NSFileProtectionComplete-style keys in GrapheneOS such that specific applications can opt in to using such keys to make data cryptographically inaccessible in AFU mode?

While IPR makes this point almost irrelevant, iPhone Supersonic BF is still limited by the Secure Enclave Processor's power: "The iteration count is calibrated so that one attempt takes approximately 80 milliseconds." Such an 80ms delay is enforced by cryptography; it cannot be bypassed by gaining code execution on the Secure Enclave and can only be bypassed by physically extracting the UID from the fuses of the SoC.
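
For scale, here's a quick check of Apple's arithmetic: a sketch assuming exactly 36^6 equally likely six-character passcodes (lowercase letters and digits) at the quoted 80ms per attempt.

```kotlin
fun main() {
    val combinations = Math.pow(36.0, 6.0)  // 6 characters drawn from a-z and 0-9
    val secondsPerAttempt = 0.080           // Apple's quoted per-attempt cost
    val years = combinations * secondsPerAttempt / (365.25 * 24 * 3600)
    println("%.2f years to exhaust the space".format(years))  // prints ~5.52
}
```

That matches Apple's "more than five and one-half years" for the full space; the expected time for a random passcode is about half that.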

What's the iteration count and the cryptographic delay of passcode derivation on the Titan M2 used by GrapheneOS? The standard delays (5 failed attempts: 30 second delay, etc.) are enforced by the Titan M2 and can be bypassed once code execution is obtained on the Titan M2.

    [deleted] Delays between failed PIN/password attempts are listed in this section: https://grapheneos.org/faq#encryption

    Also, this bullet point from the 2024020500 release might be informative:

    run full compacting garbage collection purging all regular Java heaps of dead objects in SystemUI and system_server after locking (this is already done after unlocking to purge data tied to the lock method and derived data, but it makes sense to do it after locking too)

    [deleted]

    Secure Enclave RAM is not clearing the iPhone passcode properly after user input. Theoretically, as soon as the Secure Enclave derives the KEK, the plaintext iPhone passcode should be wiped from Secure Enclave RAM immediately, rendering IPR cryptographically impossible. Can someone file a security report with Apple security?

    The conclusion you're drawing from this information is wrong. The OS implements the lockscreen and no issues with the secure element are needed for data to persist in the OS and be obtained via an OS exploit. They do appear to have secure enclave exploits but that's not an inherent requirement for this capability.

    Cellebrite can dump Secure Enclave RAM since A11.

    They can likely get code execution on it rather than only dumping the memory, but it's not implied by that capability. It would be unusual if they could dump the memory without getting code execution.

    Titan M2 on Pixel seems to prevent such code execution or memory dumps for now. However, can we be sure that the Pixel passcode is cleared from Titan M2 RAM after authentication?

    That's not how encryption works on Pixels and isn't how the secure element is integrated. It doesn't have access to the lock method but rather receives a key derived by the OS from the initial key derived via scrypt from the lock method. The final key derivation happens in the TEE via a hardware-bound algorithm.

    Furthermore, the Android security model exposes all cryptographic keys in AP RAM even in AFU mode. This renders a memory dump of Titan M2 RAM unnecessary, because an AP RAM dump is enough to get all user data. Is it possible to implement NSFileProtectionComplete-style keys in GrapheneOS such that specific applications can opt in to using such keys to make data cryptographically inaccessible in AFU mode?

    That's not how encryption works on Pixels. The keys aren't available to the OS or in regular memory. You're missing that the SoC provides hardware encryption features, not only the secure element. Wrapped keys are used for disk encryption rather than them being in regular memory or accessible to the OS at any point.

    Android already does provide apps with the ability to keep data at rest while the device is locked. You've been misinformed about this. Android does not yet have a data class for keeping data at rest when locked but that doesn't mean it isn't a supported feature already. It can be done via the hardware keystore. Android is in the process of adding a data class for this purpose to make it more efficient and easier to implement. It would serve no purpose for GrapheneOS to add an API that's not actually going to be used by apps, and we'd be stuck maintaining that forever instead of using the upcoming standard implementation. It's already possible for apps to keep data at rest while the device is locked via the keystore APIs across Android, so it's highly unlikely they would use a GrapheneOS specific API. It's not a realistic approach to improving things. If app developers cared about this, they'd already be keeping data at rest while locked but they aren't doing it on Android or iOS in general. That iOS feature is hardly used and the harder to use Android feature is actually used more broadly due to apps like Molly... showing that making this easier and more efficient is not everything.
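
    For reference, a minimal sketch of how an app can already do this with the standard keystore API (the alias is hypothetical; fallback handling for devices without StrongBox, e.g. catching StrongBoxUnavailableException, is omitted):

    ```kotlin
    import android.security.keystore.KeyGenParameterSpec
    import android.security.keystore.KeyProperties
    import javax.crypto.KeyGenerator

    // Generate an AES key in the StrongBox keystore that is unusable while the
    // device is locked; data encrypted under it is at rest whenever the screen locks.
    fun generateLockedAtRestKey() {
        val spec = KeyGenParameterSpec.Builder(
            "locked-at-rest-db-key",  // hypothetical alias
            KeyProperties.PURPOSE_ENCRYPT or KeyProperties.PURPOSE_DECRYPT
        )
            .setBlockModes(KeyProperties.BLOCK_MODE_GCM)
            .setEncryptionPaddings(KeyProperties.ENCRYPTION_PADDING_NONE)
            .setKeySize(256)
            .setUnlockedDeviceRequired(true)  // key rejects use while the device is locked
            .setIsStrongBoxBacked(true)       // back the key with the secure element
            .build()
        KeyGenerator.getInstance(KeyProperties.KEY_ALGORITHM_AES, "AndroidKeyStore").run {
            init(spec)
            generateKey()
        }
    }
    ```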

    While IPR makes this point almost irrelevant, iPhone Supersonic BF is still limited by the Secure Enclave Processor's power: "The iteration count is calibrated so that one attempt takes approximately 80 milliseconds." Such an 80ms delay is enforced by cryptography; it cannot be bypassed by gaining code execution on the Secure Enclave and can only be bypassed by physically extracting the UID from the fuses of the SoC.

    Android has hardware-bound key derivation too. You should really read our encryption FAQ. Such a short delay makes no difference with the typical lock methods that are being used. It only helps with a decent passphrase, which few people use, and the few people that do are likely using fingerprint unlock to make it more convenient which is a major weakness.

    What's the iteration count and the cryptographic delay of passcode derivation on the Titan M2 used by GrapheneOS? The standard delays (5 failed attempts: 30 second delay, etc.) are enforced by the Titan M2 and can be bypassed once code execution is obtained on the Titan M2.

    That's not how the encryption is implemented. Hardware-bound key derivation happens on the SoC from the TEE. With a random 6-8 digit PIN, the key derivation work factor from scrypt in the OS and hardware-bound key derivation in the TEE doesn't make it secure. It requires a passphrase with more entropy to truly work. The hardware-bound aspect can only be bypassed through extracting the key from the hardware if it's correctly implemented and cannot be obtained through TEE code execution.
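
    To make the entropy point concrete, a rough sketch assuming a hypothetical 80ms of hardware-bound derivation per attempt (the real per-device cost isn't public, as noted later in this thread):

    ```kotlin
    fun main() {
        val secondsPerAttempt = 0.080  // hypothetical; the real Pixel TEE cost is not public
        val pin6 = Math.pow(10.0, 6.0)
        val pin8 = Math.pow(10.0, 8.0)
        println("6-digit PIN: %.1f hours worst case".format(pin6 * secondsPerAttempt / 3600))  // ~22.2
        println("8-digit PIN: %.1f days worst case".format(pin8 * secondsPerAttempt / 86400))  // ~92.6
    }
    ```

    Only a high-entropy passphrase pushes the worst case into infeasible territory, which is the point being made above.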

      • [deleted]

      GrapheneOS The conclusion you're drawing from this information is wrong. The OS implements the lockscreen and no issues with the secure element are needed for data to persist in the OS and be obtained via an OS exploit. They do appear to have secure enclave exploits but that's not an inherent requirement for this capability.

      I'm talking specifically about iOS here. iOS in contrast to Android has NSFileProtectionComplete. " Shortly after the user locks a device (10 seconds, if the Require Password setting is Immediately), the decrypted class key is discarded, rendering all data in this class inaccessible until the user enters the passcode again or unlocks (logs in to) the device using Face ID or Touch ID." https://support.apple.com/guide/security/data-protection-classes-secb010e978a/web
      What you're referring to is an AFU extraction. The difference between an AFU extraction and FFS extraction is that data protected by NSFileProtectionComplete such as keychain data is not available in AFU extraction on iOS. On Android, AFU and FFS are the same due to lack of NSFileProtectionComplete class keys and no secure element exploit is needed for AFU or FFS extraction.

      Full File System (FFS) Extraction:

      The most comprehensive type of extractions you can get on these devices.
      Required to gain access to deeper information like health, Keychain data (on iOS), and location/breadcrumb data that shows where the device has been.
      AFU Extraction:

      On Android: Get the same data as a full file system extraction.
      On iOS: Different levels of access depending on the device state can limit the information you can extract. (For example, Keychain, location data, and email accounts that may require passcode access)
      https://cellebrite.com/en/episode-23-i-beg-to-dfir-data-extractions-explained-ffs-afu-bfu-advanced-logical-digital-forensics-webinar/

      Furthermore, even with Secure Enclave code execution, the plaintext passcode shouldn't be recoverable if Apple had implemented it correctly, because only the passcode verifier value, i.e. a hash of (passcode + salt), is saved by the Secure Storage Component. However, Cellebrite specifically claims they can get the iPhone plaintext passcode, suggesting Secure Enclave RAM is not clearing the iPhone passcode properly after user input. Please see Cellebrite's explanation of IPR (attachment: IPR.jpg).

      GrapheneOS That's not how encryption works on Pixels and isn't how the secure element is integrated. It doesn't have access to the lock method but rather receives a key derived by the OS from the initial key derived via scrypt from the lock method. The final key derivation happens in the TEE via a hardware-bound algorithm.

      Please forgive my ignorance about how key derivation works on Android. All iOS data protection key derivation happens in the Secure Enclave on iOS. You're saying that on Android, the first part of key derivation happens on the AP, then in the TEE in the Titan M2?

      GrapheneOS The keys aren't available to the OS or in regular memory. You're missing that the SoC provides hardware encryption features, not only the secure element. Wrapped keys are used for disk encryption rather than them being in regular memory or accessible to the OS at any point.

      You're right. Cellebrite claimed they can do FFS on the Pixel 8. They must have at least gained code execution on the AP in AFU mode and used the AP to communicate with the Titan M2 to decrypt data, if not code execution on the Titan M2 itself, correct?

      GrapheneOS Android already does provide apps with the ability to keep data at rest while the device is locked. You've been misinformed about this. Android does not yet have a data class for keeping data at rest when locked but that doesn't mean it isn't a supported feature already. It can be done via the hardware keystore. Android is in the process of adding a data class for this purpose to make it more efficient and easier to implement. It would serve no purpose for GrapheneOS to add an API that's not actually going to be used by apps, and we'd be stuck maintaining that forever instead of using the upcoming standard implementation. It's already possible for apps to keep data at rest while the device is locked via the keystore APIs across Android, so it's highly unlikely they would use a GrapheneOS specific API. It's not a realistic approach to improving things. If app developers cared about this, they'd already be keeping data at rest while locked but they aren't doing it on Android or iOS in general.

      Thank you for informing me that Android is already implementing something similar.

      GrapheneOS That iOS feature is hardly used and the harder to use Android feature is actually used more broadly due to apps like Molly... showing that making this easier and more efficient is not everything.

      According to you, even without setting any additional database password on Molly, AFU extraction will not yield the plaintext database? https://github.com/mollyim/mollyim-android/wiki/Data-Encryption-At-Rest doesn't say anything about this. Possible for @Nuttso to clarify?

      GrapheneOS Android has hardware-bound key derivation too. You should really read our encryption FAQ. Such a short delay makes no difference with the typical lock methods that are being used.

      GrapheneOS That's not how the encryption is implemented. Hardware-bound key derivation happens on the SoC from the TEE. With a random 6-8 digit PIN, the key derivation work factor from scrypt in the OS and hardware-bound key derivation in the TEE doesn't make it secure. It requires a passphrase with more entropy to truly work. The hardware-bound aspect can only be bypassed through extracting the key from the hardware if it's correctly implemented and cannot be obtained through TEE code execution.

      I've read the FAQ multiple times before posting. In fact, I specifically referred to the "5 failed attempts: 30 second delay" in my original posting.
      Let me rephrase my question to make it more concrete.

      My question is: if the Secure Enclave/Titan M2 is totally compromised with full code execution, and a brute force attack is performed on the device, how long does it take to brute force a six-character alphanumeric passcode with lowercase letters and numbers?

      Apple claims that "The passcode or password is entangled with the device's UID, so brute-force attempts must be performed on the device under attack. A large iteration count is used to make each attempt slower. The iteration count is calibrated so that one attempt takes approximately 80 milliseconds. In fact, it would take more than five and one-half years to try all combinations of a six-character alphanumeric passcode with lowercase letters and numbers."

      How long does it take for a Pixel phone with the stock OS and GrapheneOS respectively?
      Keep in mind that the delays mentioned in the FAQ, reproduced below and sketched in code after the list, are all bypassed given code execution on the Titan M2:

      Standard delays for encryption key derivation enforced by the secure element:
      0 to 4 failed attempts: no delay
      5 failed attempts: 30 second delay
      6 to 9 failed attempts: no delay
      10 to 29 failed attempts: 30 second delay
      30 to 139 failed attempts: 30 × 2^⌊(n − 30) ÷ 10⌋ seconds, where n is the number of failed attempts. This means the delay doubles after every 10 attempts. There's a 30 second delay after 30 failed attempts, 60s after 40, 120s after 50, 240s after 60, 480s after 70, 960s after 80, 1920s after 90, 3840s after 100, 7680s after 110, 15360s after 120 and 30720s after 130
      140 or more failed attempts: 86400 second delay (1 day)
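
      For readability, here's the quoted schedule expressed as a function (a sketch; the real enforcement happens inside the secure element, not in OS code):

      ```kotlin
      // Delay in seconds before the next unlock attempt after n failures,
      // following the schedule quoted above.
      fun delaySeconds(n: Int): Long = when {
          n < 5 -> 0L
          n == 5 -> 30L
          n in 6..9 -> 0L
          n in 10..29 -> 30L
          n in 30..139 -> 30L shl ((n - 30) / 10)  // 30 × 2^⌊(n − 30) ÷ 10⌋
          else -> 86_400L  // 1 day
      }
      ```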

        • [deleted]

        GrapheneOS The OS implements the lockscreen and no issues with the secure element are needed for data to persist in the OS and be obtained via an OS exploit.

        I agree that it's conceivable that an AP exploit by itself enables this. However, on iOS 15-15.8.x, A11-A13 have no IPR while on the same exact OS versions, A14-A15 do. Perhaps it's more of an engineering constraint than a technical limitation.

          [deleted]

          I'm talking specifically about iOS here. iOS in contrast to Android has NSFileProtectionComplete. " Shortly after the user locks a device (10 seconds, if the Require Password setting is Immediately), the decrypted class key is discarded, rendering all data in this class inaccessible until the user enters the passcode again or unlocks (logs in to) the device using Face ID or Touch ID." https://support.apple.com/guide/security/data-protection-classes-secb010e978a/web

          Android currently provides this feature via the keystore API. It is possible for apps to keep data at rest while the device is locked, just like iOS, and if they use StrongBox it's based on the secure element. Whether the keystore keeps keys marked as requiring user authentication and an unlocked device at rest rather than just disallowing usage is a quality of implementation issue specific to devices which we don't know much about at this point.

          Android is adding a similar data class for data that's at rest while locked to avoid apps needing to use another layer of encryption via the keystore as they currently do. This DOES NOT mean that more iOS apps keep data at rest while locked. Signal doesn't do it on either Android or iOS, but Molly exists on Android which in addition to their app passphrase uses the StrongBox keystore with a key requiring authentication and an unlocked device. Apps could transparently use the keystore that way without an app passphrase too.

          What you're referring to is an AFU extraction. The difference between an AFU extraction and FFS extraction is that data protected by NSFileProtectionComplete such as keychain data is not available in AFU extraction on iOS. On Android, AFU and FFS are the same due to lack of NSFileProtectionComplete class keys and no secure element exploit is needed for AFU or FFS extraction.

          That's not correct. Android supports keeping data at rest while locked. Those are not the same thing. Neither OS has much data kept at rest while locked in practice. iOS makes this easier for app developers but easier does not mean more apps actually use it. Molly and several Android TOTP apps are counterexamples to that assumption.

          Furthermore, even with Secure Enclave code execution, the plaintext passcode shouldn't be recoverable if Apple had implemented it correctly, because only the passcode verifier value, i.e. a hash of (passcode + salt), is saved by the Secure Storage Component. However, Cellebrite specifically claims they can get the iPhone plaintext passcode, suggesting Secure Enclave RAM is not clearing the iPhone passcode properly after user input. Please see Cellebrite's explanation of IPR (attachment: IPR.jpg).

          The info you're quoting isn't correct since it ignores the hardware keystore on Android. Not everything they claim is correct.

          Please forgive my ignorance about how key derivation works on Android. All iOS data protection key derivation happens in the Secure Enclave on iOS. You're saying that on Android, the first part of key derivation happens on the AP, then in the TEE in the Titan M2?

          Android does the first part of key derivation in the OS via scrypt. It derives keys with a simple personalized hash approach for each purpose from the scrypt key derivation, which is then passed along to the different use cases. One of those is the Weaver feature implemented by the secure element. Another is the initial authentication of the Owner user with the secure element which authorizes signed updates with a newer version of secure element firmware. Another is unlocking the keystore.

          The Weaver token is how Android implements time-based throttling via the secure element. The secure element has no direct involvement in key derivation. It's not super high performance hardware and is a poor place to do that for protecting against attackers able to bypass all the hardware-based security features.

          The final key derivation is done in the Trusted Execution Environment (TEE) on the main SoC, which means in TrustZone. It uses an SoC cryptographic acceleration feature providing hardware-bound key derivation as one of the features. The hardware-bound aspect means that exploiting the TEE shouldn't provide any way either direct or indirect via side channels to get access to the key used in the hardware-bound key derivation algorithm. On Snapdragon, the Qualcomm Crypto Engine provides these features and is meant to be used in the TEE applet for this.

          On Pixels and most modern Snapdragon devices, the OS does not receive the decrypted disk encryption keys but rather handles for using them. This is called the wrapped key feature. This prevents a kernel information leak, etc. from leaking the encryption keys. They're also presumably not meant to be available via a memory dump. We haven't verified this but the information from XRY, Cellebrite, etc. appears to indicate that works properly. Otherwise, XRY would not have had to use a leftover hash or something like that in memory to brute force the lock method. They did not appear capable of doing this with GrapheneOS, but further work has been done on GrapheneOS to rule out other ways this could happen by running full compacting GC which zeroes the old heap for SystemUI / system_server when the screen is locked, not just the standard of doing it a bit after unlocking to wipe leftovers of the lock method, etc. We also made some other changes and began auditing this.
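
          To summarize that flow, here's a toy model in code (all names and the stub hashing are made up for illustration; the real steps live in the OS, the Weaver HAL on the secure element and a TEE applet):

          ```kotlin
          import java.security.MessageDigest

          // Stub "personalized hash" standing in for the real KDF primitives.
          fun personalize(key: ByteArray, purpose: String): ByteArray =
              MessageDigest.getInstance("SHA-256").digest(purpose.toByteArray() + key)

          fun main() {
              // 1. OS-side stretching of the lock method (really scrypt, stubbed here).
              val stretched = personalize("123456".toByteArray() + ByteArray(16), "scrypt-stub")
              // 2. Per-purpose subkeys via personalized hashing.
              val weaverKey = personalize(stretched, "weaver")      // sent to the secure element
              val keystoreKey = personalize(stretched, "keystore")  // unlocks the hardware keystore
              // 3. Secure element: throttled Weaver slot lookup releases a stored secret
              //    only for the correct weaverKey (stand-in value here).
              val weaverValue = ByteArray(32)
              // 4. TEE: hardware-bound KDF; the bound key never leaves the SoC hardware.
              val wrapped = personalize(personalize(stretched, "disk") + weaverValue, "hw-kdf-stub")
              // 5. The kernel only ever receives a wrapped key / handle, not the raw disk key.
              println("weaver key ${weaverKey.size}B, keystore key ${keystoreKey.size}B, wrapped key ${wrapped.size}B")
          }
          ```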

          You're right. Cellebrite claimed they can do FFS on the Pixel 8. They must have at least gained code execution on the AP in AFU mode and used the AP to communicate with the Titan M2 to decrypt data, if not code execution on the Titan M2 itself, correct?

          FFS is when they have the user's lock method already, so all it has to involve is exploiting the device from ADB shell after enabling developer options with the lock method, enabling ADB and authorizing their access. It's not any kind of fancy exploitation.

          Thank you for informing me that Android is already implementing something similar.

          The Android hardware keystore is what to look into if you want to know more. There are 2 hardware keystores on Pixels: the traditional TEE (TrustZone) keystore which encrypts data and stores it in regular storage and the StrongBox keystore provided through the secure element since the Pixel 3. Pixel 2 had a secure element with insider attack protection (Owner authentication needed to update firmware via valid signed updates with newer version) and Weaver (covered in https://grapheneos.org/faq#encryption) but predates StrongBox being available.

          According to you, even without setting any additional database password on Molly, AFU extraction will not yield the plaintext database? https://github.com/mollyim/mollyim-android/wiki/Data-Encryption-At-Rest doesn't say anything about this. Possible for @Nuttso to clarify?

          It uses the StrongBox keystore and should be setting the key as requiring an unlocked device. You can check the code to see if it does that.

          My question is: if the Secure Enclave/Titan M2 is totally compromised with full code execution, and a brute force attack is performed on the device, how long does it take to brute force a six-character alphanumeric passcode with lowercase letters and numbers?

          On Android, this is based on the combination of the standard scrypt key derivation which can be offloaded elsewhere without an exploit and the final hardware-bound key derivation done within the TEE (TrustZone). There cannot be a specific answer for how long it takes and an attacker could extract the key from the hardware to offload the hardware-bound key derivation part. The amount of time spent on hardware-bound key derivation varies by device and we don't know specifically how long it's configured to take in each Pixel generation.

          How long does it take for a Pixel phone with the stock OS and GrapheneOS respectively?

          It's currently the same and we can only increase the scrypt part. The hardware-bound key derivation part is in the TEE and we can't change that firmware beyond requesting improvements as we've done successfully in several areas.

          Keep in mind that the delays mentioned in the FAQ, reproduced below, are all bypassed given code execution on the Titan M2

          The FAQ explains that there's scrypt key derivation in the OS and then at the end there's hardware-bound key derivation in the TEE similar to the secure enclave key derivation on iOS. The details of the hardware-bound part vary by device and are drastically different on Snapdragon vs. Tensor along with evolving significantly over time on Snapdragon. We know more about how it works on Snapdragon than Tensor.

            [deleted]

            I agree that it's conceivable that an AP exploit by itself enables this. However, on iOS 15-15.8.x, A11-A13 have no IPR while on the same exact OS versions, A14-A15 do. Perhaps it's more of an engineering constraint than a technical limitation.

            As can be seen from the Android table, they have limited resources to deal with older generations. That also explains why they haven't developed an exploit for a several year old Titan M2 firmware version. They want something working against current and future versions, not only increasingly irrelevant past versions. If they only had to develop an exploit against the earliest Titan M2 firmware before major improvements happened it wouldn't be nearly as difficult.

            [deleted] According to you, even without setting any additional database password on Molly, AFU extraction will not yield the plaintext database? https://github.com/mollyim/mollyim-android/wiki/Data-Encryption-At-Rest doesn't say anything about this. Possible for @Nuttso to clarify?

            GrapheneOS It uses the StrongBox keystore and should be setting the key as requiring an unlocked device. You can check the code to see if it does that.

            Molly doesn't enforce the unlock state for its keystore key. Otherwise Molly wouldn't be able to start in the background when the phone is locked and wouldn't be able to show notifications. Same with Signal.

            When the database isn't protected by a passphrase, Molly should be able to open its database even if the device is locked.

            So it can't set up its keystore key with the authentication protection (aka requiring the unlocked state).

            A phone in the AFU state, screen locked, app unlocked: Molly and Signal are the same thing. There's one difference that makes some exploits work for Signal but not for Molly: Molly enforces the StrongBox keystore.

            They'd need the phone in the AFU state and to exploit some vulnerability, typically an LPE.

            The perfect scenario is one where an app does its encryption work once the user is authenticated (the device is unlocked). But for messaging apps that need to show notifications in the background while the device is locked, it's not that simple.

              • [deleted]

              GrapheneOS Signal doesn't do it on either Android or iOS, but Molly exists on Android which in addition to their app passphrase uses the StrongBox keystore with a key requiring authentication and an unlocked device. Apps could transparently use the keystore that way without an app passphrase too.

              That's incorrect regarding Signal on iOS. The Signal chat database on iOS uses NSFileProtectionComplete. Only FFS yields the Signal database on iOS, not AFU extraction.

              [this is not correct, it doesn't keep data at rest after first unlock on either Android or iOS even though it could]

              File system: Signal database is encrypted. The encryption key is stored in the keychain with the highest protection class. The only way to extract Signal conversations requires extracting the file system images and decrypting the keychain.
              https://blog.elcomsoft.com/2020/04/forensic-guide-to-imessage-whatsapp-telegram-signal-and-skype-data-acquisition/

              GrapheneOS That's not correct. Android supports keeping data at rest while locked.

              GrapheneOS The info you're quoting isn't correct since it ignores the hardware keystore on Android. Not everything they claim is correct.

              I agree. They're only talking about most situations probably.

              GrapheneOS Android does the first part of key derivation in the OS via scrypt. It derives keys with a simple personalized hash approach for each purpose from the scrypt key derivation, which is then passed along to the different use cases. One of those is the Weaver feature implemented by the secure element. Another is the initial authentication of the Owner user with the secure element which authorizes signed updates with a newer version of secure element firmware. Another is unlocking the keystore.

              The Weaver token is how Android implements time-based throttling via the secure element. The secure element has no direct involvement in key derivation. It's not super high performance hardware and is a poor place to do that for protecting against attackers able to bypass all the hardware-based security features.

              The final key derivation is done in the Trusted Execution Environment (TEE) on the main SoC, which means in TrustZone. It uses an SoC cryptographic acceleration feature providing hardware-bound key derivation as one of the features. The hardware-bound aspect means that exploiting the TEE shouldn't provide any way either direct or indirect via side channels to get access to the key used in the hardware-bound key derivation algorithm. On Snapdragon, the Qualcomm Crypto Engine provides these features and is meant to be used in the TEE applet for this.

              Thank you for the thorough explanation. Could you point me to more resources to learn more about how the Weaver token does time-based throttling or how hardware-bound key derivation works in the TEE on Pixels?
              Apple has a compiled page on how all the key derivations works in different subsystems https://support.apple.com/guide/security/secure-enclave-sec59b0b31ff/1/web/1

              GrapheneOS The amount of time spent on hardware-bound key derivation varies by device and we don't know specifically how long it's configured to take in each Pixel generation.

              GrapheneOS The hardware-bound key derivation part is in the TEE and we can't change that firmware beyond requesting improvements as we've done successfully in several areas.

              I feel this is a critical last line of defense, and it is the only timing delay enforced by cryptography instead of code integrity, albeit on processors with a smaller attack surface. The Supersonic BF on iPhone runs at almost the same speed as the 80ms theoretical hardware key derivation speed quoted by Apple. This means all other Secure Enclave based mitigations have been bypassed, and the only line of defense against a truly unlimited speed brute force is this cryptographically enforced iteration count. Perhaps the @GrapheneOS team can ask the Pixel team to publish this data and increase the timing delay to at least Apple's 80ms standard, which would have negligible user impact.

                • [deleted]

                Nuttso Molly doesn't enforce the unlock state for its keystore key.

                On iOS, the Signal chat database uses NSFileProtectionComplete. Only FFS yields the Signal database on iOS, not AFU extraction.

                [this is not correct, it doesn't keep data at rest after first unlock on either Android or iOS even though it could]

                File system: Signal database is encrypted. The encryption key is stored in the keychain with the highest protection class. The only way to extract Signal conversations requires extracting the file system images and decrypting the keychain.
                https://blog.elcomsoft.com/2020/04/forensic-guide-to-imessage-whatsapp-telegram-signal-and-skype-data-acquisition/

                [this is not correct, it doesn't keep data at rest after first unlock on either Android or iOS even though it could]

                Nuttso Otherwise Molly wouldn't be able to start in the background when the phone is locked and wouldn't be able to show notifications. Same with Signal.

                Why wouldn't notifications work? The push token can be saved with NSFileProtectionCompleteUntilFirstUserAuthentication while the main database key is saved with NSFileProtectionComplete.

                To show a preview of the notification, however, I speculate that some ephemeral private key could be saved temporarily with NSFileProtectionCompleteUntilFirstUserAuthentication and incoming messages saved to a separate pending database. But to merely notify the user that a new message was received, without any preview, almost everything can be secured with NSFileProtectionComplete.
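
                For what it's worth, the Android analog of that split can be sketched with the keystore API (aliases are hypothetical): one key usable only while unlocked for the main database, another usable after first unlock for a pending-message queue.

                ```kotlin
                import android.security.keystore.KeyGenParameterSpec
                import android.security.keystore.KeyProperties
                import javax.crypto.KeyGenerator

                fun aesKey(alias: String, requireUnlocked: Boolean) {
                    val spec = KeyGenParameterSpec.Builder(
                        alias, KeyProperties.PURPOSE_ENCRYPT or KeyProperties.PURPOSE_DECRYPT)
                        .setBlockModes(KeyProperties.BLOCK_MODE_GCM)
                        .setEncryptionPaddings(KeyProperties.ENCRYPTION_PADDING_NONE)
                        .setUnlockedDeviceRequired(requireUnlocked)
                        .build()
                    KeyGenerator.getInstance(KeyProperties.KEY_ALGORITHM_AES, "AndroidKeyStore")
                        .apply { init(spec) }.generateKey()
                }

                fun setUpKeys() {
                    aesKey("main-db", requireUnlocked = true)        // at rest whenever the screen locks
                    aesKey("pending-queue", requireUnlocked = false) // AFU-available for notifications,
                                                                     // drained into main-db after unlock
                }
                ```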

                Nuttso So it can't set up its keystore key with the authentication protection (aka requiring the unlocked state).

                But Signal on iOS works exactly like this. See above

                [this is not correct, it doesn't keep data at rest after first unlock on either Android or iOS even though it could]

                  If a strong passphrase is used for the Owner profile, a 6-digit PIN is used for a secondary user profile, and the device is in BFU mode, would the secondary user profile remain secure as time goes by and new exploits against the secure element and the OS are developed?

                    evalda I do not recall where (I believe!) I saw this, but I think the situation is:

                    1. At present the only way to use the phone to access the secondary profile is to unlock the owner profile and then switch to the secondary profile, but
                    2. If somebody disassembles the device (etc.), the secondary profile's storage is encrypted with keys that are not derived from the owner profile's PIN/passphrase.

                    So I think if it is desired for Profile X's storage to be highly secure then Profile X needs a long random passphrase.

                    If I'm wrong, I'm sure I'll be corrected!

                      de0u Thank you for your reply!

                      Reading https://grapheneos.org/faq#encryption

                      The owner profile is special and is used to store sensitive system-wide operating system data. This is why the owner profile needs to be logged in after a reboot before other user profiles can be used.

                      I was hoping that some key material for secondary user profiles would be stored in the Owner profile and encrypted with the Owner passphrase. I guess that is not really the case; it would be good to get an official confirmation from the GrapheneOS team.

                      If we assume that a secondary profile's encryption key is based solely on 1) the user's lock method (a 6-digit PIN in my example) and 2) a secret in the secure element that won't be released unless the correct 1) is supplied or the element is compromised, and no further secret from the Owner profile is needed, then it does sound like a compromised secure element would allow brute-forcing the PIN and decrypting the secondary profile data. Now the question is whether the device has to be disassembled to perform this attack?

                        evalda Now the question is whether the device has to be disassembled to perform this attack?

                        I don't believe so (which is why I wrote "etc."). Compromising hardware security via an exploit is also theoretically possible.

                        evalda I was hoping that some key material for secondary user profiles would be stored in the Owner profile and encrypted with the Owner passphrase. I guess that is not really the case; it would be good to get an official confirmation from the GrapheneOS team.

                        Ok, I went and searched harder.

                        @evalda, I believe you received an official answer to this question, or to a very similar question, a year ago: https://discuss.grapheneos.org/d/5274-is-second-user-profile-encrypted-also-by-first-user/9 That is, I believe the answer received then to the two-part question you posed then works out to "Both #1 and #2 are false".

                        If a question still remains in light of that answer, might it be possible to phrase it differently so it is clear which part(s) are not yet addressed?

                          de0u I believe you received an official answer to this question, or to a very similar question

                          Thank you for digging it up! I do remember that discussion, but the rationale given there included:

                          While it is true that currently Owner has to be unlocked before attempts on secondary user profiles can be made, it isn't out of the question for AOSP to change that behavior in the future if they regard it as a limitation, which they likely do (from a UX standpoint and considering the fact that multiple users are meant to be used by individual people, having to have the owner present before you can unlock your own profile after a reboot isn't great).

                          It's understood that it's safer not to rely on the Owner profile for protecting data on secondary profiles. But I am curious if the Owner passphrase adds any additional protection to secondary profiles as of now, not taking into account any possible change in AOSP in the future.

                          For context, the reason I am asking it is remembering a second (or more) random passphrases for secondary user profiles is a lot of cognitive load for an older lady like me lol.

                            evalda It's understood that it's safer not to rely on the Owner profile for protecting data on secondary profiles. But I am curious if the Owner passphrase adds any additional protection to secondary profiles as of now, not taking into account any possible change in AOSP in the future.

                            Based on the previous official statement, I believe that as of now in order to unlock a secondary profile's data it is necessary to first unlock the owner profile or else to brute-force the secondary PIN/passphrase given an image of the storage, and/or to compromise some part of the hardware security.

                            But I think it's pretty clear that as of now a strong owner passphrase does not increase the strength of a weak secondary-profile PIN if one assumes a well-resourced attacker who has an exploit or is willing to disassemble a device. Thus I believe the answer to your core question is "no".
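
                            To illustrate that 'no' (with a made-up derivation; see the FAQ for the real scheme): each profile's key depends only on its own credential plus hardware-held secrets, so the Owner passphrase never enters a secondary profile's derivation.

                            ```kotlin
                            import java.security.MessageDigest

                            // Made-up derivation: each profile's key depends only on its own credential
                            // plus hardware-held secrets; the Owner passphrase plays no part in it.
                            fun profileKey(credential: String, hardwareSecret: String): ByteArray =
                                MessageDigest.getInstance("SHA-256").digest("$credential|$hardwareSecret".toByteArray())

                            fun main() {
                                val ownerKey = profileKey("long random owner passphrase", "SE+TEE secrets")
                                val secondaryKey = profileKey("123456", "SE+TEE secrets")
                                // If the hardware secrets are extracted or bypassed, only the 10^6 PIN space
                                // protects the secondary profile, however strong the owner passphrase is.
                                println(ownerKey.contentEquals(secondaryKey))  // false: independent keys
                            }
                            ```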

                            Some people might wish it were "yes" (e.g., you) and some people are glad it's "no" (people who hope someday there will be a way to boot straight to a non-admin profile, perhaps for a child). But it does appear that the present answer is "no".

                            evalda For context, the reason I am asking it is remembering a second (or more) random passphrases for secondary user profiles is a lot of cognitive load for an older lady like me lol.

                            How necessary this might be depends on one's goals and threat model. If one has genuinely important data (perhaps multiple banking apps) in a secondary profile, a strong key may be necessary. How strong depends on one's presumed attackers and how much it's worth to keep the secondary profile secure. If the secondary profile has access to just one bank account containing one month's shopping money, auto-filled monthly by a different bank, maybe a medium-length PIN is enough.