What is current GrapheneOS's security strength compared to the iPhone circa 2016?

What is current GrapheneOS's security strength compared to the iPhone circa 2016? The FBI hacked the iPhone in 2016.

The iPhone used by a terrorist in the San Bernardino shooting was unlocked by a small Australian hacking firm in 2016, ending a momentous standoff between the U.S. government and the tech titan Apple.

https://bgr.com/tech/san-bernardino-iphone-hack-how-azimuth-broke-encryption/

Two Azimuth hackers teamed up to break into the San Bernardino iPhone, according to the people familiar with the matter, who like others quoted in this article, spoke on the condition of anonymity to discuss sensitive matters. Founder Mark Dowd, 41, is an Australian coder who runs marathons and who, one colleague said, ‘can pretty much look at a computer and break into it.’ One of his researchers was David Wang, who first set hands on a keyboard at age 8, dropped out of Yale, and by 27 had won a prestigious Pwnie Award — an Oscar for hackers — for ‘jailbreaking’ or removing the software restrictions of an iPhone.

Dowd had found a bug in open-source code from Mozilla even before the San Bernardino events. Apple relied on Mozilla’s software to allow accessories to be plugged into the iPhone’s Lightning port.

Wang used the Mozilla bug to create an exploit that allowed access to the phone. A different bug was then used for “greater maneuverability.” A final exploit gave them complete control over the phone’s processor. A piece of brute force software was then used to try all possible password combinations, bypassing the security feature that would erase the device’s storage after 10 failed attempts. The exploit was named Condor.
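The final stage of such a chain, exhaustive PIN guessing, is conceptually simple once the 10-attempt wipe is out of the way. As a rough illustration only (the actual Condor tooling is not public, and the `check` callback here is a hypothetical stand-in for the device's credential check), a 4-digit space is just 10,000 candidates:

```python
from itertools import product

def brute_force_pin(check, length=4, digits="0123456789"):
    """Try every PIN of the given length until `check` accepts one.

    With the wipe-after-10-failures protection bypassed, a 4-digit
    space (10,000 candidates) can be exhausted almost instantly.
    """
    for combo in product(digits, repeat=length):
        pin = "".join(combo)
        if check(pin):
            return pin
    return None  # not found in the search space

# Hypothetical stand-in for the device's credential check.
secret = "7319"
assert brute_force_pin(lambda p: p == secret) == secret
```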

The researchers tested the tool on a dozen iPhone 5C devices, including phones that were bought on eBay. They then showed Condor to the FBI, and agency experts tested Condor on other devices to ensure it would work. Every test was successful, and that’s how Condor netted Azimuth a $900,000 payout.

The report notes that FBI officials were relieved but disappointed that they could not advance the encryption backdoor fight. Separately, Apple might be unhappy with security experts building tools that could be used to break into its devices. But the Post explains Azimuth’s success helped Apple, as the company never had to face a court order to build a backdoor into that particular iPhone 5C, which would have set a dangerous precedent.

Mozilla never knew a security bug in its software was used to advance the iPhone 5C hack. The company patched the problem about a month after the FBI unlocked the iPhone 5C, rendering the flaw useless. Without that bug, the whole chain of exploits would not have worked.


    https://www.unchainedinnovations.io/privacy-phone-grapheneos-vs-iphone

    Although quite a bit has changed since then, and the degoogle/F-Droid part no longer seems to be the case. As of late, even GrapheneOS seems to be giving in to the Google ecosystem with its sandboxed Google Play, under the pretext of increased app compatibility (all in order to make the platform available to a wider audience).

    With GrapheneOS, while data is at rest, you currently don't get anything meaningfully better in terms of security. But while data is in use, even with GrapheneOS, your device is only as secure as the weakest-link app you choose to install. Therefore, choose carefully; there are always alternatives. Some people seem comforted by the fact that they can isolate certain "evil" but necessary apps in different profiles, and then they go on to use the same internet access point. It is really naive to think that Google won't be able to figure that out, link the two separate identities together, and put a face and an address to them.

    I admit that I am running several proprietary apps with known trackers and network access at the same time. But I really am nobody, and the worst thing that concerns me is third parties getting access to my identity or payment information. The rest, like where I have been and what I have done and with whom, doesn't bother me (I don't engage in illegal activity, and by the way, who is really to say what is illegal with so much harmful content freely available), especially knowing that there are people out there who do much worse and walk free.


    Nuttso That case kind of renders the device unusable. But not if you use fingerprint unlock, and/or if your app ecosystem actively steals data while the device is in use. In that case you don't even need to get past the lockscreen.

    Hacking the device with a whole chain of vulnerabilities is something totally different. The San Bernardino phone was accessed through forensics. Your threat model seems to be hacking the device. GrapheneOS provides a few enhancements to counter this: hardened_malloc, etc. But a sophisticated attacker could still bypass these. There is no 100% working solution for this, at least not with current hardware.

    You're mixing up two totally different attack types.

    Intellectual2 Back around October people became aware that GrapheneOS suffered from a really bad lockscreen bug that an attacker with physical access could quickly exploit. That's the bad news.

    The really bad news is that some large undetermined number of non-GrapheneOS Android devices also had that bug, which was co-discovered by GrapheneOS and responsibly disclosed to Google. I suspect millions of people are still carrying unpatched phones.

    At present, security is genuinely hard. GrapheneOS is trying genuinely harder than other people. I'm not sure who can provide much more assurance. There are government "secure" phones, but honestly it's unclear how hard people try to break into them, and regardless you and I can't have them.

    Be careful out there!

      de0u

      Back around October people became aware that GrapheneOS suffered from a really bad lockscreen bug that an attacker with physical access could quickly exploit.

      That's not accurate. We discovered it in June. It was fixed for Android as a whole in the November security patches. It was a High, not Critical, severity vulnerability because it was not remotely exploitable. It did not in any way bypass encryption or allow obtaining data from a device that's at rest. Any OS vulnerability able to be used to exploit the OS while logged in (not at rest) and locked can do the same thing as that lockscreen bypass bug. Remote code execution vulnerabilities are regularly patched, so one that requires physical access and achieves much less is lower severity. You're misinterpreting media coverage and exaggerations as reflecting the actual severity of the issue. This is not an area where Android is doing worse than iOS...

      The really bad news is that some large undetermined number of non-GrapheneOS Android devices also had that bug, which was co-discovered by GrapheneOS and responsibly disclosed to Google. I suspect millions of people are still carrying unpatched phones.

      It impacted every Android device not substantially changing the lockscreen code, which almost no one would be doing.

      At present, security is genuinely hard. GrapheneOS is trying genuinely harder than other people. I'm not sure who can provide much more assurance. There are government "secure" phones, but honestly it's unclear how hard people try to break into them, and regardless you and I can't have them.

      That bug was not relevant to a device that's at rest which is what the question was about. If the device is not at rest then any exploit giving control over the OS / application processor gives access to the data since it's not at rest and the OS can access it. It's not the topic.

      Intellectual2 The question in your thread title is overly vague and doesn't really make sense. Thread has been unlisted because it started getting answers about different things than what you actually seem to want to know.

      Based on the post itself, what you're asking is about the security of encryption for a device that's at rest not security of the OS itself. GrapheneOS provides an auto-reboot feature for getting the device at rest automatically before an attacker has time to exploit it, especially without taking the risk of causing a reboot with a failed/detected exploit.

      Please read https://grapheneos.org/faq#encryption which covers how the encryption is implemented. You don't need to worry about an attacker breaking the encryption itself. An attacker has to brute force the PIN/passphrase. For the iPhone case you bring up, the person had a low entropy PIN as their credential, which is not secure against a brute force itself but rather relies on anti-brute-force features provided by the device. Pixels and iPhones provide this with a secure element, requiring a sophisticated exploit to brute force even just a random 6 digit PIN. This is why a random 6 digit PIN provides secure encryption with Pixels and iPhones.
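      To put rough numbers on why the secure element matters, here is a back-of-the-envelope sketch (the one-day-per-attempt figure is the steady-state Weaver throttling described in this thread, not an exact specification):

```python
def expected_days(pin_space: int, days_per_attempt: float = 1.0) -> float:
    """Average time to find a uniformly random PIN: an attacker
    succeeds, on average, halfway through the search space."""
    return pin_space / 2 * days_per_attempt

# A random 6-digit PIN behind secure-element throttling:
print(expected_days(10 ** 6))        # 500000.0 days
print(expected_days(10 ** 6) / 365)  # about 1370 years
```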

      You can do better with a random passphrase. There's also hardware bound key derivation as the final phase of key derivation to prevent easily offloading brute forcing to a server farm instead of only doing it on the phone itself. If an attacker can exploit the secure element to bypass Weaver (aggressive throttling forcing 1 day between attempts after the initial ramp up) and can extract the hardware bound key from the SoC hardware, which is meant to be burned into silicon, then they can do a brute force on a server farm, in which case you want a strong random passphrase. Make it strong enough and it can't be brute forced. If you go as far as using 7 diceware words or 18 lowercase letters + numbers, then that's secure against any brute force itself. You can also go for something more convenient but less secure which relies on the key derivation work factor.
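      The entropy figures above can be checked directly: entropy is just length times log2 of the alphabet (or wordlist) size. A standard diceware list has 7776 words, and lowercase letters plus digits give a 36-symbol alphabet:

```python
import math

def entropy_bits(alphabet_size: int, length: int) -> float:
    """Bits of entropy in a uniformly random string of `length`
    symbols drawn from an alphabet of `alphabet_size` symbols."""
    return length * math.log2(alphabet_size)

print(entropy_bits(7776, 7))  # 7 diceware words: ~90.5 bits
print(entropy_bits(36, 18))   # 18 lowercase letters/digits: ~93.1 bits
```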

      Worth noting encryption keys (derived key encryption keys and random disk encryption keys) are per-user-profile and encrypt the data within the profile. Owner (initial user) is special and sensitive OS data is encrypted inside the Owner profile data. This is the reason you must log into Owner before other users.

        Nuttso 128-bit entropy is the standard value for extreme overkill that's secure far into the future. It doesn't really make sense to generate a random passphrase with more than 128-bit entropy. The amount of entropy is a different thing from the size of the derived key encryption key. The useful reason for security levels above 128-bit is that algorithms get broken in ways that substantially reduce their security. A 256-bit cipher that experiences quite an extreme break can still end up providing more than 128-bit security, thus still preserving indefinitely overkill security in practice. This is why using AES-256 instead of AES-128 makes sense, but generating a 256-bit entropy passphrase / seed phrase which just gets fed into a key derivation algorithm to derive arbitrary-length keys really doesn't make sense.

        Significantly lower than 128-bit security is still considered secure and is just below the standard for extreme overkill that most people have settled on as making sense. Since passphrases go through key derivation adding a substantial work factor, passphrases with 90-bit entropy such as 7 diceware words or 18 lowercase letters/numbers are still highly secure and don't depend on hardware security features to prevent brute forcing. If you really want, you can raise that to 128-bit for extreme overkill, but it is substantially more inconvenient and not really giving you real-world benefits.

        On the other hand, something like a 64-bit entropy passphrase is not secure against brute forcing if the attacker can bypass the hardware security features. Passphrases below around 90-bit entropy can still be secure if the secure element is compromised but if both the secure element and hardware bound key derivation are bypassed, there's a serious problem.
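        A rough sense of scale for these thresholds (the per-guess cost and core count below are assumed, illustrative figures for an offline attack where both the secure element and hardware-bound key derivation are bypassed, leaving only a software KDF):

```python
def years_to_exhaust(bits: float, seconds_per_guess: float = 1e-4,
                     cores: int = 1_000_000) -> float:
    """Years to try every candidate at the assumed guess cost."""
    return (2 ** bits) * seconds_per_guess / cores / (3600 * 24 * 365)

print(years_to_exhaust(64))  # decades: within reach of a determined attacker
print(years_to_exhaust(90))  # billions of years: out of reach
# Each additional bit doubles the work: 2**(90 - 64) is a factor of ~67 million.
```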

        If the secure element isn't compromised, a random 6 digit PIN is secure against brute forcing. This is the kind of scenario in the original post where they exploit the OS and then need to exploit the secure element to bypass the brute forcing, leaving them needing a rare/sophisticated kind of exploit for the secure element firmware.

          GrapheneOS Based on the post itself, what you're asking is about the security of encryption for a device that's at rest not security of the OS itself

          Yeah I assumed this is the actual question also.

          GrapheneOS 128-bit entropy is the standard value for extreme overkill that's secure far into the future.

          Good to know.

          The security posture against law enforcement seizing the phone (the OP's attack scenario) hinges on a key feature that GrapheneOS has natively and no other OS does: the ability to reboot the phone if there has been no successful login after X hours.

          This puts the phone "at rest", as the developers above state.
          This means no user apps are running to be exploited (so nothing like that Mozilla exploit could even be possible), plus all of the full encryption and secure element protections discussed above.

          A seizure of the phone is likely to happen when the phone is not at rest, giving law enforcement a certain amount of time. They will probably fail to exploit it if the user has set a short timeout for the automatic reboot.

          My question is this....
          How is this timeout coded? Is it based on system time that can be manipulated with network time (NTP)?
          Can law enforcement simply stand up a fake mobile network and spoof the time so that the OS always believes the timeout has not been reached?

            Graphite How is this timeout coded? Is it based on system time that can be manipulated with network time (NTP)?
            Can law enforcement simply stand up a fake mobile network and spoof the time so that the OS always believes the timeout has not been reached?

            Very intriguing question, so I checked it out.

            Short answer, without citing the code and stuff: no, I don't believe they can spoof time from anywhere to make the reboot not happen. The system has an internal timer that starts at boot. The auto-reboot feature basically tells the phone to reboot when that internal timer reaches n milliseconds, i.e. milliseconds since boot + the auto-reboot interval in milliseconds.

            Anyway, here's the stuff I found if you're curious:

            The method to reboot uses elapsedRealtime() (doc) and sets an alarm using setExactAndAllowWhileIdle() (doc) with the type ELAPSED_REALTIME_WAKEUP (doc), which basically means the alarm uses the internal system timer, not the normal seconds since the Unix epoch.
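            A rough Python analogue of the same pattern (a sketch of the concept, not the actual GrapheneOS code): the deadline is anchored to a monotonic clock that only moves forward from boot, so changing the wall-clock date cannot postpone it.

```python
import time

def schedule_reboot(interval_s: float) -> float:
    """Return the monotonic deadline at which the auto-reboot fires."""
    return time.monotonic() + interval_s

def should_reboot(deadline: float) -> bool:
    """True once the monotonic clock has passed the deadline.

    time.monotonic() is unaffected by system date changes, mirroring
    Android's elapsedRealtime()-based ELAPSED_REALTIME_WAKEUP alarms.
    """
    return time.monotonic() >= deadline

deadline = schedule_reboot(72 * 3600)  # e.g. a 72-hour reboot window
```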

              Graphite unwat

              How is this timeout coded? Is it based on system time that can be manipulated with network time (NTP)?
              Can law enforcement simply stand up a fake mobile network and spoof the time so that the OS always believes the timeout has not been reached?

              It's based on a proper monotonic timer, not real time. Timers for intervals of time should never be based on real time. GrapheneOS also doesn't trust time from the cellular network and doesn't use unauthenticated NTP. One of the features provided by GrapheneOS is that we use HTTPS-based network time which provides authentication and also more precision than the AOSP approach to fetching time despite being HTTPS-based instead of NTP.

              https://grapheneos.org/features#other-features

                GrapheneOS
                Thank you. Found this based on your explanation.
                https://developer.android.com/reference/android/os/SystemClock

                elapsedRealtime() and elapsedRealtimeNanos() return the time since the system was booted, and include deep sleep. This clock is guaranteed to be monotonic, and continues to tick even when the CPU is in power saving modes, so is the recommend basis for general purpose interval timing

                I assume this is the clock used.

                Thank you everyone for commenting :) Very interesting.

                GrapheneOS, you said:

                "This is the reason you must log into Owner before other users."

                Are you saying that if a user wanted to show someone only their decoy profile, they would have to log into their main Owner profile first? Wouldn't that be dangerous/compromising?


                  Intellectual2
                  The owner profile would be locked the same way as it would be if you used only one profile.

                  So if you showed anyone your "decoy" profile, then to actually get into the Owner profile, you or the person you are showing it to would need to switch users first and then provide the password to the Owner profile.

                  So in this thread we talked about lengthy passwords.
                  I believe someone else also commented about how troublesome it would be to remember and enter a lengthy 90- to 128-bit passphrase every single time one logs in. Here is an idea I've had for a long time: create an encrypted peripheral login shell GUI that you log into with a one-character or even five-character passphrase. The second layer that you then log into contains countless files of random code, many gigabytes of it, maybe some decoy PDFs, text files, JPEG photos, program files, etc.
                  You either use a file itself as the passphrase or key (drag and drop it into the real 90- to 128-bit password box), or alternatively you search those random code files for a search phrase, like "swordfish7". That brings you to a section buried in the millions of lines of random code, and right after the word swordfish7, indented and easily selectable/copy-pastable, is a 90- to 128-bit passphrase that looks just like the millions of lines of code surrounding it. It's not hiding a needle in a haystack; it's hiding a needle in a stack of other needles. This is by far the best way to store passwords that I know of, but I am not nearly as trained or experienced as any of the experts commenting here.

                  I also think fingerprint recognition is a bad idea, as this method is particularly vulnerable to some governments' legal right to compel you to provide your fingerprint to unlock, and a non-government adversary may use your fingerprint by force. A password, by contrast, can be forgotten (plausible deniability), or compelled disclosure can be considered a violation of the Fifth Amendment right against self-incrimination. Different courts across the US have disagreed with each other about this, but it's much better than a fingerprint. The only thing better is the plausible deniability that a hidden/decoy LUKS or VeraCrypt OS, or a Linux-based phone, can provide.

                  Or maybe GrapheneOS's decoy profiles offer plausible deniability? Is there a way to hide the Owner/real profile and only show the decoy profile?

                    Intellectual2 https://github.com/GrapheneOS/os-issue-tracker/issues/28 This will help with adding an extra layer to fingerprint unlock as a second factor.

                    You could have your secure passphrase as the primary unlock method (will need to be used after a reboot etc.) but use a combo of PIN + Fingerprint as a second factor, instead of just fingerprint.