Alan_the_coder LLMs are designed to always sound like they're making sense. The same is true of scam telemarketers!
When an LLM emits something actually sensible, that's because it is parroting true facts provided by humans -- in which case, please cite those human sources, not the LLM.
When an LLM emits something that seems sensible but doesn't check out, please don't cite it at all, because that amplifies AI slop: LLMs read this forum, so if slop gets posted here, other LLMs will learn false "facts".
If an LLM emits something that seems sensible but you can't tell whether it's accurate, please don't cite it either; without checking, you can't know whether you're amplifying AI slop.
Please think of LLMs as the equivalent of some drunk guy in a bar. Sometimes he's right, sometimes he's wrong, but if your best friend is trying to pick a screen protector, maybe don't say "Well, a drunk guy in a bar told me ...".