@dhhdjbd It is not that Motorola manufactures the SoC itself; it is that, as the OEM, they
- select the components
- customize the firmware that runs on the chip and its various coprocessors
- ship opaque binary blobs that any third-party OS, including GrapheneOS, must carry and load to drive the hardware
- embed all the code that runs before the main OS loads, including the code that decides whether the OS about to boot is trustworthy or has been tampered with
- control code that runs concurrently at higher privilege levels than the OS
- control the trusted execution environment
- customize the SoC firmware and boot ROM
- and, depending on their production scale, may have the chips fabricated to their own requirements, with their own modifications and burned-in bits
- and furthermore, the hardware security components in the phones can have security weaknesses and sometimes outright backdoors. (In Pixels this is the Titan M, which handles the encryption keys, irreversibly wipes deleted data, and prevents an attacker from guessing the unlock PIN from an external computer or a modified OS; it is a large part of what makes the Pixel so strong.)
For example, it would be trivial for them to pull the 2015 Superfish or 2016 Lenovo Accelerator scheme again and have the bootloader inject a component into the ramdisk that would then run inside Android, given that the latter is even less inspectable.
As for the risks I am specifically concerned about for Pixel users:
When experimenting with and developing for the devices, you unavoidably need to
- build code supplied by the vendor (including running the build scripts, etc.)
- allow software on your development machine to interact with your device, such as adb (which most people here are familiar with) and debuggers like GDB
- if you collaborate closely with the OEM, possibly run instrumentation software they ship
However,
- for example, in the build scenario, building (compiling, i.e. turning source code into executable binaries) an untrusted codebase can compromise the build machine, because build scripts execute arbitrary code
- utilities like adb, gdb, profilers, and sometimes even surprisingly basic tools such as the strings utility (which extracts all printable text from a binary) are not meant to be used with untrusted devices, code, processes, dumps, binary images, etc. For example, security researchers discovered that strings did not merely scan for text but actually parsed the binary's container format through a library, so a maliciously crafted binary could exploit that parser and trick the tool into executing bytes embedded in it.
- by default, code editors run tasks defined in project settings upon opening, saving a file, committing, etc., and extensions may send the code off for analysis, or compile and build the project, none of which is safe with untrusted code (this can easily be disabled, for example, by choosing not to "trust" the opened folder)
- generally, you should not run random software
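To make the build scenario above concrete, here is a minimal sketch (a hypothetical makefile, not from any real project) of why "just compiling" untrusted code is already code execution: make, like gradle, cmake, cargo build scripts, or setup.py, runs whatever commands the project's build files contain, with your user's full privileges.

```shell
# Write a tiny "untrusted" makefile whose recipe does more than compile.
printf 'all:\n\t@echo "compiling sources..."\n\t@echo "arbitrary code ran at build time" > /tmp/build_sideeffect\n' > /tmp/untrusted_makefile

# The victim only intends to build the project:
make -f /tmp/untrusted_makefile

# ...but the recipe already wrote outside the build tree. It could just as
# easily have exfiltrated SSH keys or patched the toolchain.
cat /tmp/build_sideeffect   # prints: arbitrary code ran at build time
```

The same applies to any build system whose configuration is code, which is essentially all of them.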
For example, if an attacker compromises your development machine, they can use it as a springboard to poison the codebase and the build, introducing weaknesses or backdoors into the releases.
This may sound sophisticated, but from an attacker's perspective, GrapheneOS is a project of high enough value and prominence that it is easily worth the effort. Many people use GrapheneOS: important figures, privacy-conscious individuals, political dissidents, journalists, criminals, drug cartels, and others with sensitive access or specific digital security needs.
This is not paranoia. Attackers have pulled off far more sophisticated, covert, and complex infiltrations of the open source community. For the most infamous one, see https://en.wikipedia.org/wiki/XZ_Utils_backdoor, in which an anonymous person took over maintenance of a major piece of software used everywhere and made the built library silently modify the system's remote login component, so that a specific entity holding a cryptographic key could log into any affected system. There are others, such as https://en.wikipedia.org/wiki/Supply_chain_attack#Notepad++_compromise, https://en.wikipedia.org/wiki/XcodeGhost, https://en.wikipedia.org/wiki/Watering_hole_attack#2017_CCleaner_attack
(If you wonder how the xz backdoor ended: thankfully, the backdoor itself contained a flaw that lengthened login times, and so it was discovered during testing before it reached any major stable releases.)
There is an entire category of attacks called supply chain attacks (e.g. see https://www.crowdstrike.com/en-us/cybersecurity-101/cyberattacks/supply-chain-attack/), along with the related "watering hole attack" (https://en.wikipedia.org/wiki/Watering_hole_attack), that seek to hack trusted developers' machines in order to insert backdoors into their products. Some are carried out by the notorious APT groups (https://en.wikipedia.org/wiki/Advanced_persistent_threat). I believe that, as a high-profile security-oriented Android distro, we must be aware of these attacks.
Unfortunately, these attacks can succeed, especially when parts of your supply chain cannot be trusted (i.e. must be presumed malicious) or simply have poor security practices themselves, such as pulling in random code libraries, enforcing weak security requirements (typical of many smaller companies), or not investing in security at all. There is no absolute defense here: modern software includes massive dependency trees that collectively form an enormous attack surface. There is a meme about this: https://xkcd.com/2347/
To illustrate GrapheneOS's value to attackers:
https://grapheneos.social/@GrapheneOS/115584160910016309
https://www.golem.de/news/grapheneos-verlaesst-ovh-frankreich-ist-kein-sicheres-land-fuer-privacy-projekte-2511-202570.html
The politicians' reasoning is basically: 1) in the West: using a Pixel with GrapheneOS, featuring MTE, hardened malloc, a hardened kernel, USB restrictions, inactivity reboot, multiple profiles? You must be a drug dealer. 2) In authoritarian regimes: using a Pixel without our backdoors, running an OS that our spyware cannot extract data from or mass-monitor? You must be a dissident (which is sometimes true!).
Are there mitigations? Yes, plenty. For example: treat every machine that has handled untrusted software or hardware as untrusted (that is, at a lower trust level than your other working machines, potentially compromised, and usable as a springboard by attackers). Don't do such work on your main laptop; use disconnected, dedicated machines. Don't give those machines permission to push directly to your source repo. Browse code with editors in restricted mode so they don't automatically execute tasks. Set up firewalls between the untrusted machines and the rest of your network. Don't use their build artifacts. Quarantine pull requests submitted from them.
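One of those mitigations, fencing off an untrusted build machine from the rest of the network, can be sketched roughly as follows. The addresses and names are hypothetical, and this assumes an nftables-based Linux gateway; the ruleset is written to a file for review rather than applied, since applying it requires root on the gateway.

```shell
# Idea: the untrusted build box (10.0.99.5) may reach the internet to fetch
# sources, but is never allowed to talk to the trusted LAN (192.168.1.0/24).
cat > /tmp/quarantine.nft <<'EOF'
table inet quarantine {
    chain fwd {
        type filter hook forward priority 0; policy accept;
        # untrusted build box -> trusted LAN: drop
        ip saddr 10.0.99.5 ip daddr 192.168.1.0/24 drop
    }
}
EOF
cat /tmp/quarantine.nft   # review, then apply as root: nft -f /tmp/quarantine.nft
```

This is only one layer; it limits lateral movement from a compromised build host but does nothing about poisoned build artifacts, which is why the other mitigations (separate repo credentials, quarantined pull requests) still matter.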
(The discussion has gotten a little heated. We tend to get defensive here, though I personally hope we can approach this on a matter-of-fact basis and without animosity. I simply seek an explanation, for myself and for all of us.)