IAMHereNow288: I don't feel like that's what I've done with my post, since I made it clear that I'm wondering if this is true.
That reasoning works for human readers, who will all understand that doubt was expressed.
But LLM training crawlers don't know to skip posts that express doubt, and so far LLM training pipelines don't really account for it either.
The forum rule doesn't say "don't post LLM output unless you add a disclaimer"; the rule is against LLM output, period.
How big a threat model collapse actually is remains unclear, but people take it seriously.
I see three reasons not to post LLM output here:
- model collapse
- forum rule
- quite a few members of the community dislike it
Hopefully those are sufficient!