• Announcements
  • AI-generated text is forbidden, with the exception of automated translation

Kira902 In addition to nation-state threat actors, there are companies, mainly vendors of falsely secure smartphones, who feel threatened by GOS offering a much better-quality free product that anyone can use.

a month later

I did it to summarize important topics in long discussion threads so people could learn from the community in a concise manner. I don't know about malicious actors, but AI can be a very powerful teaching tool if used correctly. I won't do any more summaries if that's what the community wants, though. Thanks.

    Subliminal I won't do any more summaries if that's what the community wants, though.

    I think this community values precise correctness, which is not a feature of current LLMs.

    Subliminal AI can be a very powerful teaching tool if used correctly.

    I think disclosure is an important part of correct use, at least at present. If, for whatever reason, one is not comfortable disclosing at least which LLM was used to generate specific content, inflicting that content on unknowing readers, viewers, etc., is unethical.

      de0u When it comes to the correctness of LLMs, I think you may be underestimating their capabilities with newer models at accurately summarizing discussions like those on this forum. But you make a good point about disclosing which LLM is being used. I had used ChatGPT 4o and noticed that, without prompting it for succinctness, it tends to be more verbose than it needs to be, which may have led to an undesirable amount of text for most readers and can seem spammy.

        Subliminal AI-generated text is noticeable, unnatural, and in my opinion has no place on a discussion forum. It's not allowed anyway, except for translation assistance, for the reasons explained by @admin.

        Subliminal When it comes to the correctness of LLMs, I think you may be underestimating their capabilities [...]

        Or maybe not. Now that you have "come clean", I will say that several days ago I nearly flagged your "App Installation Options" post for moderator investigation because it seemed like AI content: plausible if skimmed (that's what LLMs, or blarney generators, are good at), but not really right.

        Aurora can try to install an APK of the wrong architecture type. That isn't really covered by "potential for unreliable updates" -- whatever that even means. The "how to use" text for Obtainium is hardly instructive.

        If you find that kind of summary useful, you are free to use it. But the April 7 policy statement at the top of this thread makes it very clear that the community as a whole does not find that kind of summary welcome.

        Meanwhile, aside from this forum, it is generally not clear that it is ethical to "contribute" LLM output without disclosing -- every time -- that it is LLM output. People who want blarney know where to find blarney generators.

        Just my opinion! I do not speak for the GrapheneOS project.

          de0u That was one of two posts that led us to believe AI-generated content was being posted. We didn't issue a real suspension but rather gave a warning by doing a 1-day suspension and immediately undoing it so that it would send a suspension email. We currently lack a way to send direct messages, since we don't want direct messages between users here, only between users and moderators, so we have to use suspensions even when we don't want to suspend someone. It's one of the few drawbacks of this forum software, but it's still a much better fit than all the other options we found.

          de0u Part of the reason I posted replies with AI analysis was to see how the community would react to LLMs being used for summaries, and also how this community in particular feels about the use of LLMs in general, since they are here to stay and are having profound impacts on society. I have been using them on my own to get an idea of what is being discussed without having to read an entire forum thread, so I posted a summary to see if others would find it helpful. Someone commented thanking me for the summary, so I guess someone did. If anything is helpful from this exchange, maybe others will use LLMs in the same way I have (for personal use). But yes, I come clean and won't post any more LLM content. Thanks for your understanding.


            Subliminal Part of the reason I posted replies with AI analysis was to see how the community would react to LLMs being used for summaries.

            If somebody decided to run the same experiment every week -- undisclosed -- the result would be pretty annoying. Nobody here has agreed to serve as an involuntary experimental subject for LLM enthusiasts.

            Subliminal [LLMs] are here to stay and are having profound impacts on society.

            Ransomware may be here to stay as well, but that doesn't mean everybody is entitled to experimentally deploy ransomware on unwilling people to measure exactly how each individual recipient feels about it.

            Subliminal Thanks for your understanding.

            If people are asking for understanding, I hope people will understand and act on the stated policy. I appreciate your stated future adherence, but it is important that others understand that re-running this experiment next month would also be unwelcome.

            Again, nobody here has agreed to serve as an involuntary experimental subject for LLM enthusiasts.

            OpenAI has stolen our work, along with the work of many others, without respecting the licenses. They expect copyright to apply to their own output but don't respect it for others. Their tools are plausible-nonsense generators optimized for convincing people that their output is accurate rather than for actually being accurate. The tools have no real understanding of the material and no reasoning ability. They're certainly good at influencing people and at optimizing the spread of misinformation, which will have a huge impact on everyone, but we're not going to welcome them here.

            Modern language processing with artificial intelligence certainly has its benefits for translation, and potentially for frustrating stylometric analysis used to attack user anonymity.

            But since its most popular use cases involve generating absolute nonsense ad nauseam, I cannot fault policies discouraging its use. LLMs cannot discern fact from fiction, and those who believe otherwise are mistaken.

            4 months later

            I was in an "AI workshop" today and can't stress enough how important this rule is.

            There may be many use cases, like code review, documentation, etc., but spamming people with garbage text is not cool.

              a month later

              missing-root reminds me of a video I watched on YouTube that's very relevant here.

              https://en.m.wikipedia.org/wiki/Apophenia

              Apparently places like PubMed and the National Library of Medicine are filled with journal publications created by AI (at least in part).

              Research studies by all types of "experts" (whom I like to dub pseudo-experts, not "sudo" experts) are being sent out into fields that deal with our health, so we are basically getting a dose of AI one way or another, sooner rather than later.

              I don't ever recommend channels or products these days, but this guy is making a huge YouTube calling just from people like us. Thanks, GOOG.
              https://m.youtube.com/watch?v=oT0jNiPrOEc&pp=ygUVVXBwZXIgZWNoZWxvbiBwdWIgbWVk