Just a forewarning: a lot of what follows is opinion and anecdote.
riskingpilot99 As I said in my previous comments, you can't really guarantee the viability of such a setup. Plausible deniability, while good at hiding evidence, sadly isn't very good at hiding itself: there are ways to figure out that a deniable setup exists given physical access and good equipment. The device would essentially have to be tamper-resistant or have a destruction mechanism, which is not just rare but also undesirable to some people. I imagine this is harder to pull off against a mobile device, since you'd need to do a chip-off or otherwise get at the logic board instead of just unplugging a hard drive, but the possibility always remains.
For plausible deniability to really be effective, the OS would have to look completely identical to every other setup of the same OS when it comes to forensic artefacts. That often isn't the case: even if the artefacts don't reveal evidence, they may reveal signs that a hidden setup exists. At that point it is merely deniable, rather than plausibly deniable.
For example, you can detect VeraCrypt plausible deniability setups by calculating the entropy of the disk: https://www.researchgate.net/publication/318155607_Defeating_Plausible_Deniability_of_VeraCrypt_Hidden_Operating_Systems (and forensic tools already support hidden volumes: https://github.com/4144414D/pytruecrypt). If you have physical access to the disk, that's mostly all you need.
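To make the entropy idea concrete, here's a minimal sketch of that kind of scan. This is not the paper's actual method or pytruecrypt; the image path, block size, and threshold are all placeholder assumptions. The tell is large runs of near-maximal-entropy data in regions where you'd expect structured or zeroed free space.

```python
import math
from collections import Counter

BLOCK_SIZE = 1024 * 1024  # 1 MiB blocks; arbitrary choice for illustration

def shannon_entropy(data: bytes) -> float:
    """Shannon entropy in bits per byte (8.0 means indistinguishable from random)."""
    if not data:
        return 0.0
    total = len(data)
    return -sum((n / total) * math.log2(n / total) for n in Counter(data).values())

def scan_image(path: str, threshold: float = 7.9):
    """Yield (offset, entropy) for blocks that look like uniformly random data."""
    with open(path, "rb") as f:
        offset = 0
        while block := f.read(BLOCK_SIZE):
            ent = shannon_entropy(block)
            if ent >= threshold:
                yield offset, ent
            offset += len(block)

if __name__ == "__main__":
    # "disk.img" is a hypothetical raw image; in practice you'd image the drive first.
    hits = list(scan_image("disk.img"))
    print(f"{len(hits)} blocks look like encrypted/random data")
```

Normal free space usually has low entropy (zeros, leftover file fragments), so a big contiguous high-entropy region beyond the visible volumes is exactly the kind of "sign" I mean above.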
The cryptsetup/LUKS FAQ also has some good discussion of how effective plausible-deniability encryption really is: https://gitlab.com/cryptsetup/cryptsetup/-/wikis/FrequentlyAskedQuestions#5-security-aspects (see 5.18; the section headers on that page aren't great).
And there's also this paper co-authored by Bruce Schneier: https://www.schneier.com/academic/paperfiles/paper-truecrypt-dfs.pdf
To be clear, there'd still be no evidence to collect, since everything is encrypted and won't be cracked unless you set it up stupidly. The same goes for GrapheneOS. Overall it would be better to have an amnesic, immutable operating system or environment (TAILS, a disposable VM, an amnesic profile, etc.) that doesn't save any data at all. Plausible deniability setups are fine if your threat model doesn't include adversaries with very in-depth forensics knowledge.
A single application like LockUp seems good on paper, but as I said before, investigators could easily keep a spare phone, install apps like this on it, see what causes them to trigger, and then mitigate it. A total solution like wiping on USB connection would work better, but if something like that were a known OS feature, they'd simply never plug the device in and bypass it entirely. The whole system would need to be anti-forensic, rather than just one application. In my experience, Windows Hyper-V Encrypted VMs/Application Guard instances, Qubes DispVMs, and TAILS leave little to no artefacts even when access to the operating system(s) was possible. I'd argue that can be more plausibly deniable than any disk encryption feature.
LockUp is a proof-of-concept and hasn't been updated; it was made by a KoreLogic employee as a PoC detector for Cellebrite after they disclosed attacks they had performed against the UFED. I personally don't think it's still viable: Cellebrite has probably fixed whatever it detects, and if not, they easily could. I'll probably give it a try and see for myself.
Hb1hf
Erasing your secondary profile, or even a possible amnesic profile feature, would be deniable. Keeping stuff in a secondary profile and erasing it would make it completely unrecoverable. In my opinion that gets you much closer to plausible deniability, so yes, I think it would help.