de0u: Great response, very informative. Here's what I found, and why I wrote this post to begin with (I want to emphasize that I'm still learning about Android security, but I do have a little experience with security practices from using things like PGP over the years).
The system is very complicated. It is possible that parts of it aren't fully documented. That said, it's not clear that documentation is scarce. What has been consulted so far?
I've mainly been using build guides and sources like the one at source.android.com you linked above. I've found that grapheneos.org and source.android.com are perhaps the only reliable sources of information; other sources (e.g. Stack Overflow) mostly just reference them. Lately I've been doing most of my reading at source.android.com. The documentation is good, but examples are scarce and some things aren't elaborated on. And honestly, if you ask an AI about low-level Android development and build signing, it will just tell you to do things that break your phone.
An important part of the Android security model is that system images are signed by a key for which the private half is not present on the device. That is, the system is explicitly designed so that builds happen off-device and are signed by keys that are very carefully protected. This limits the ability of malware compromising a device to hide itself. In this way (and others) the Android model is arguably superior to the standard Linux model.
This makes a lot of sense. Keeping the keys off-device is no doubt more secure than keeping them onboard, and it makes sense not to expose them whenever possible. Depending on your use cases, though, it can be useful to have your own set, as you mentioned here.
Not really. If you sign your own build and then later generate an OTA image signed by the same key, the security model would be 100% intact.
The only way to know what goes into your OS is to sign updates yourself. I understand that this isn't for everyone, but it lets you take more ownership of your software and makes you responsible for examining and shipping updates yourself. This ensures that a malicious actor (whether it's Google, Samsung, or anyone else holding the keys) cannot overwrite signed apps or partitions on the device without your consent. Maintaining your own keychain is also the only way to take control of package signing yourself, which can be useful for security in its own right: the less widely distributed your signing keys are, the harder your device is to compromise. I'd argue that an ideal security model requires not trusting any third parties at all.
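For anyone who wants to try this, here's a rough sketch of the standard AOSP flow, assuming a checked-out source tree with the release tools on your path. The key list, file names, and the ~/android-keys directory are just illustrative; the exact set of required keys varies by release, so check the documentation for your branch:

# Generate your own signing keys with AOSP's helper script
# (the certificate subject below is an arbitrary example).
subject='/C=US/ST=State/L=City/O=Me/OU=Me/CN=Me/emailAddress=me@example.com'
mkdir -p ~/android-keys
for key in releasekey platform shared media networkstack; do
    development/tools/make_key ~/android-keys/"$key" "$subject"
done

# Re-sign a built target-files package with your keys in place of the test keys.
sign_target_files_apks -o -d ~/android-keys \
    target-files.zip signed-target-files.zip

# Build an OTA package signed with the same release key, so the device
# only accepts future updates signed by you.
ota_from_target_files -k ~/android-keys/releasekey \
    signed-target-files.zip signed-ota.zip

The GrapheneOS build guide documents a similar key-generation and signing flow for its supported devices, so that's probably the best reference if you go down this road.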
As GrapheneOS users, many of us started using it because we didn't trust Google, who can push OTA updates whenever they want, which sometimes include spyware. A lot of what Google does is not outright malicious, but it can harm the privacy of end users. I sometimes do freelance work as an AI trainer for these companies, and I've come to realize that all of them are basically just the private branch of the surveillance state. A lot of people don't understand how valuable their data is; giving it away to Google for free is a bad move imo. I can elaborate on this, but I'm sure many people on this forum already share the same consensus.
In reality, I found a metric fuckload (hundreds of undocumented private keys) in the AOSP repo by using a grep command.
Run this command in the Android source repo:
ls -R | grep pk8
It should display hundreds of private keys. You can get an idea of what some of them are for from names and context (inbuilt APEX packages, system utilities, etc.), but for others the name doesn't say much, and even then a name means nothing without reading the underlying code yourself. It isn't exactly clear what all of these apps and keys do, but many of them have elevated permissions just by virtue of the package type. And since we didn't sign them, we can't tell any of these apps what to do. It might be possible to re-sign them, or maybe to compile Android with fake keys in their place (I haven't tried this yet, but assuming they're signed at build time it should work).
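To confirm that these really are live private keys, you can inspect one directly; the .pk8 files are unencrypted PKCS#8 keys in DER form. For example, with the well-known test key (the path may differ slightly between releases, and find . -name '*.pk8' will give you full paths for the rest):

# Prints the private key in PEM form; no passphrase is needed.
openssl pkcs8 -inform DER -nocrypt -in build/make/target/product/security/testkey.pk8
# The matching certificate sits next to it as an .x509.pem file.
openssl x509 -in build/make/target/product/security/testkey.x509.pem -noout -subject -enddate

As for replacing them at build time: I believe a product makefile can point the build at your own default certificate via PRODUCT_DEFAULT_DEV_CERTIFICATE (e.g. PRODUCT_DEFAULT_DEV_CERTIFICATE := vendor/example/security/releasekey, where the path is hypothetical), though I haven't verified how this interacts with every prebuilt APEX package.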
To be honest, it seems kind of crazy to me that people blindly trust Google with their private keys and see this as completely normal.
Linux has always allowed users to become their own root of trust. Perhaps I'm old-fashioned, but I don't know why we so willingly gave this up on Android and other mobile OSes.

Furthermore, I'm utterly terrified by what these companies are doing with our data. Under the harmless-sounding premise of "improving Microsoft/Google/OpenAI products by collecting usage data," they're taking as much as they can without sharing any details. Google Docs, Microsoft Teams, and daily computer usage on Windows all send these companies data for free; hardly anything is private anymore. I know firsthand that these companies pay millions of dollars for datasets that are small and insignificant compared to what they get for free, because we basically agree to give it to them. Meta already got in trouble for torrenting books, and the others have had similar issues. The individual has a lot less power than publishing companies (books, software, etc.), so it's even easier for your data to be stolen without any recourse. We don't ask, they don't tell; all we know is that they're "improving products".