It doesn't matter, really, because as I said earlier, it's a fundamental constraint: an LLM cannot reason about code and its security. It's just a text generator that guesses the next word in a sequence based on what it's been trained on.
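To make that concrete, here's a toy sketch of the principle, nothing like a real model (which has billions of learned weights rather than a hand-written table), but the generation loop is the same idea: given the words so far, pick the next word according to probabilities learned from past text. Nothing in the loop understands what the words mean.

```c
#include <stdio.h>
#include <stdlib.h>
#include <time.h>

/* Toy next-word generator. The "learned" bigram counts below are
 * invented for illustration; a real LLM learns its probabilities
 * from training data, but generation is still sampling, not reasoning. */

static const char *vocab[] = { "the", "buffer", "is", "safe", "overflows" };

/* Row = current word, column = next word. */
static const int counts[5][5] = {
    { 0, 5, 0, 0, 0 },  /* after "the": usually "buffer" */
    { 0, 0, 4, 0, 2 },  /* after "buffer": "is" or "overflows" */
    { 1, 0, 0, 3, 0 },  /* after "is": often "safe" */
    { 2, 0, 0, 0, 0 },  /* after "safe": back to "the" */
    { 2, 0, 0, 0, 0 },  /* after "overflows": back to "the" */
};

static int next_word(int cur) {
    int total = 0;
    for (int j = 0; j < 5; j++) total += counts[cur][j];
    int r = rand() % total;  /* sample proportionally to the counts */
    for (int j = 0; j < 5; j++) {
        r -= counts[cur][j];
        if (r < 0) return j;
    }
    return 0;
}

int main(void) {
    srand((unsigned)time(NULL));
    int w = 0;  /* start at "the" */
    for (int i = 0; i < 6; i++) {
        printf("%s ", vocab[w]);
        w = next_word(w);
    }
    printf("\n");
    return 0;
}
```

Whether the output says the buffer "is safe" or "overflows" depends on a dice roll weighted by the training counts, which is exactly the constraint at issue.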
So a model has to be trained, and the training material has to be picked*, which means this works best for vulnerabilities that are common: there is far more material to train on for, say, memory leaks in C code (the kind of pattern sketched after the footnote below) than for a super-specific flaw in a Bluetooth driver written in assembly. Not much code is available for the latter.
* mostly from the hellscape of the internet, scraped without any rights whatsoever.
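For reference, this is the kind of bread-and-butter bug there is endless training material for, a leak pattern any C programmer will recognize (the function name and details are made up for illustration):

```c
#include <stdlib.h>
#include <string.h>

/* Classic leak pattern: allocate, hit an early return, never free.
 * Variants of this appear in countless public codebases, which is
 * exactly why a model trained on that code can often pattern-match it. */
char *duplicate_nonempty(const char *s) {
    char *copy = malloc(strlen(s) + 1);
    if (copy == NULL)
        return NULL;
    strcpy(copy, s);
    if (copy[0] == '\0')
        return NULL;   /* leak: early return without free(copy) */
    return copy;
}
```

A one-off flaw in an assembly Bluetooth driver has no such well-worn template in the training data, so there is little for the statistics to latch onto.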