cross-posted from: https://lemmings.world/post/21993947

Since I suggested that I'm willing to hook my computer up to an LLM and a Mastodon account, I've gotten vocal anti-AI sentiment. I'm wondering if the Fediverse has made a plugin to find bots larping as people. As of now I haven't made the bot, and I won't disclose when I do make it.

  • hendrik@palaver.p3x.de · 1 day ago

    Lemmy and Fediverse software have a box to tick in the profile settings that marks an account as a bot. Other people can then choose to filter those accounts out or read their posts. Usually we try to cooperate; open warfare between bots and counter-bots isn't really a thing. We do that for spam and ban evasion, though.
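
    The normal way is simply ticking that checkbox in the web UI, but since this thread is about running a bot, it's worth noting that the same flag can be set programmatically. A minimal sketch, assuming a Mastodon account and an access token with the write:accounts scope (the instance URL and token below are placeholders):

    ```python
    # Mark a Mastodon account as a bot via the API.
    # MASTODON_URL and ACCESS_TOKEN are placeholders.
    import requests

    MASTODON_URL = "https://example.social"   # your instance
    ACCESS_TOKEN = "..."                      # token with write:accounts scope

    resp = requests.patch(
        f"{MASTODON_URL}/api/v1/accounts/update_credentials",
        headers={"Authorization": f"Bearer {ACCESS_TOKEN}"},
        data={"bot": "true"},  # same flag as the checkbox in profile settings
    )
    resp.raise_for_status()
    print(resp.json().get("bot"))  # should now be True
    ```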

    • PixelPilgrim@lemmings.world (OP) · 23 hours ago

      I know, and it's up to the account's author to do that. I knew that when I made my bot on Mastodon. And countermeasures against AI should start.

      • hendrik@palaver.p3x.de · 21 hours ago

        I really don't think this place is about bot warfare. Usually our system works well. I've met one person who used ChatGPT as a kind of experiment on us, and I talked a bit with them. Most people come here to talk, share the daily news, or argue politics, with the regular Linux question in between. It's mostly genuine. Occasionally I have to remind someone to tick the correct boxes, mostly for NSFW, because the bot owners generally behave and set this correctly on their own. And for people who like bot content, we already have X, Reddit, and Facebook… I think those would be a good place for this, since they already have a good amount of synthetic content.

        • PixelPilgrim@lemmings.world (OP) · 20 hours ago

          Lol, not the place for bot warfare 😆. That's like saying America isn't a place for class warfare, and yet the rich have already mobilized. Plus, someone is probably doing the same thing as me and just hasn't disclosed it.

          • hendrik@palaver.p3x.de · 18 hours ago

            It is like I said. People on platforms like Reddit complain a lot about bots. This platform, on the other hand, is kind of supposed to be the better version of that, hence not about the same negative dynamics. And I can still tell ChatGPT's unique style and a human apart. Once you go into detail, you'll notice the quirks, or the intelligence, of your conversational partner. So yeah, some people use ChatGPT without disclosing it. You'll stumble across that when reading AI-generated article summaries and so on. You're definitely not the first person with that idea.

            • PixelPilgrim@lemmings.world (OP) · 18 hours ago

              Reddit is different from the Fediverse. They work on different principles, and I'd argue the Fediverse is very libertarian.

              Is there any way you can rule out survivorship bias? Plus, I'm already doing preliminary work: I'm looking into making responses shorter so there's less information to go on, and trying different models.

              • hendrik@palaver.p3x.de · 17 hours ago

                What kind of models are you planning to use? Some of the LLMs you run yourself? Or the usual ChatGPT/Grok/Claude?

                • PixelPilgrim@lemmings.world (OP) · 14 hours ago

                  So far I've experimented with Llama 3.2 via Ollama (I don't have enough RAM for 3.3) and DeepSeek-R1 7B (I discovered it's verbose and asks a lot of questions), and I'll try Phi-4 later. I could use the ChatGPT models since I have tokens. Ironically, I'm thinking about making a genetic algorithm of prompt templates and a confidence check. It's oddly meta.
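
                  For anyone curious what this kind of experiment can look like: a minimal sketch using Ollama's local HTTP API, with a hand-rolled scoring step standing in for the confidence check (just scoring and selection here, no actual genetic algorithm). The model name, templates, and heuristic are made up for illustration, not the OP's code.

                  ```python
                  # Try a few prompt templates against a local model via Ollama's
                  # HTTP API and keep whichever reply scores best on a crude
                  # "confidence" heuristic. Everything here is a placeholder.
                  import requests

                  OLLAMA = "http://localhost:11434/api/generate"
                  MODEL = "llama3.2"  # or "deepseek-r1:7b", "phi4", ...

                  TEMPLATES = [
                      "Reply briefly and casually to this comment:\n{c}",
                      "In one or two sentences, answer this comment:\n{c}",
                  ]

                  def generate(prompt):
                      r = requests.post(OLLAMA, json={
                          "model": MODEL,
                          "prompt": prompt,
                          "stream": False,
                          "options": {"num_predict": 80},  # keep replies short
                      })
                      r.raise_for_status()
                      return r.json()["response"].strip()

                  def confidence(reply):
                      # Placeholder: prefer short replies without giveaway phrases.
                      bad = sum(p in reply.lower() for p in ("as an ai", "i think"))
                      return 1.0 / (1 + len(reply.split()) + 10 * bad)

                  comment = "Anyone else find the new Lemmy UI faster?"
                  replies = [generate(t.format(c=comment)) for t in TEMPLATES]
                  print(max(replies, key=confidence))
                  ```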

                  • hendrik@palaver.p3x.de · 6 hours ago

                    I often recommend Mistral-Nemo-Instruct. I think that one strikes a good balance. But be careful with it, it's not censored, so given the right prompt it might yell at people, talk about reproductive organs, etc. All in all, it's a job that takes some effort. You need a good model, come up with a good prompt, and maybe also give it a persona. Then there's the entire framework to feed in the content and decide what to respond to, and if you want to do it right, an additional framework for safety and monitoring. I think those are the usual parts of an AI bot.
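
                    To make that concrete, here's a very rough outline of that kind of loop: pull in mentions, run a simple safety filter to decide what to respond to, generate a reply, and post it. The endpoints are Mastodon's standard API; generate_reply(), the blocklist, and the polling interval are placeholders for whatever model and monitoring you actually wire up.

                    ```python
                    # Rough outline of the bot loop described above.
                    # generate_reply() stands in for the model; the safety
                    # filter and monitoring are placeholders.
                    import time
                    import requests

                    MASTODON_URL = "https://example.social"
                    HEADERS = {"Authorization": "Bearer ..."}
                    BLOCKLIST = ("medical advice", "legal advice")  # placeholder filter

                    def generate_reply(text):
                        raise NotImplementedError  # plug in Ollama / Mistral-Nemo etc.

                    def should_reply(text):
                        return not any(w in text.lower() for w in BLOCKLIST)

                    while True:
                        notifs = requests.get(
                            f"{MASTODON_URL}/api/v1/notifications",
                            headers=HEADERS,
                            params={"types[]": "mention"},
                        ).json()
                        for n in notifs:
                            text = n["status"]["content"]
                            if should_reply(text):
                                requests.post(
                                    f"{MASTODON_URL}/api/v1/statuses",
                                    headers=HEADERS,
                                    data={
                                        "status": f"@{n['account']['acct']} " + generate_reply(text),
                                        "in_reply_to_id": n["status"]["id"],
                                    },
                                )
                            # Dismiss the notification so it isn't answered again.
                            requests.post(
                                f"{MASTODON_URL}/api/v1/notifications/{n['id']}/dismiss",
                                headers=HEADERS,
                            )
                        time.sleep(60)  # poll once a minute; rate limits/monitoring omitted
                    ```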

                    If you manage to do it, maybe write a blog post about the details and what you did. I think other people might be interested.