Well, this just got darker.

    • rottingleaf@lemmy.world · 5 months ago

      to cultures like Japan and Russia that don’t strongly condemn such things

      As someone from Russia - what?

      Unless you mean being attracted to post-puberty, but pre-legal girls. That, ahem, makes sense biologically.

      Girls of that age are sometimes kinda cruel to boys, though, so my personal teenage years trauma prevents me from dreaming of them. But if not for it, I think I would.

      Toddlers are a completely different issue.

  • Johanno@feddit.org · 5 months ago

    Ain’t that what the tools are there for? I mean, I don’t like CP and I don’t want to engage in any way with people who like it. But I use those LLMs to describe fantasies that I wouldn’t even talk about with other humans.

    As long as they don’t do it to real humans, nobody is hurt.

      • erwan@lemmy.ml · 5 months ago

      The problem with AI-generated CP is that if it’s legal, it opens a new line of defense for actual CP. You would need to prove the content is not AI-generated in order to convict real abusers. This is why it can’t be made legal; it needs to be prosecuted like real CP to be sure of convicting actual abusers.

        • daniskarma@lemmy.dbzer0.com · 5 months ago

        This is an incredibly touchy and complicated subject, so I will try not to go much further into it.

        But prosecuting what is essentially a work of fiction seems bad.

        This isn’t even a topic new to AI. CP has been widely represented in both written and graphical media, and the consensus in most free countries is not to prosecute those works, as they are fiction.

        I cannot think why AI-written CP fiction is different from human-written CP fiction.

        I suppose “AI big bad” justifies it for some. But for me, if we are going to start prosecuting works of fiction, there should be a logical explanation for why some are prosecuted and others are not. Especially when the one being prosecuted is just regurgitating the human-written stories about CP that are not being prosecuted today.

        I essentially think that a work of fiction should never be prosecuted to begin with, no matter the topic. And I also think that an AI writing about CP is no worse than an actual human doing the same thing.

          • vonbaronhans@midwest.social · 5 months ago

          I’m unfamiliar with the exact situation here, but when it comes to generative AI as I understand it, CP image output also means CP images in the training data.

          That may not strictly be true, but it is certainly worth investigating at minimum.

            • Cryophilia@lemmy.world · 5 months ago

            Common misconception. AI can take an image of a child and an image of a naked woman and produce an image of a naked child (who does not resemble either the child or the woman). There’s no need for actual CP in the dataset.

  • circuscritic@lemmy.ca · 5 months ago

    This is a weird one, because while fantasy is fantasy, and doesn’t necessarily indicate an intention to act on anything, these people were dumb enough to share these specific fantasies with some random AI porn site. That’s got to be an indicator of poor impulse control, right?

    That alone should probably warrant immediate FBI background checks, or whatever relevant agencies have jurisdiction for these types of criminal investigations in each user’s locality.

    Of course, I’m saying this without actually having read any of the chats. So it’s possible my opinion would change from “this should be investigated” to summary executions and burning the bodies for good measure… but no way am I reading those fucking chats.

      • fubo@lemmy.world · 5 months ago

      Just to be clear, are you saying that people should be investigated by the police for fictional stories that they read?

        • li10@feddit.uk · 5 months ago

        I mean, if those stories were made by their prompts and about having sex with children then maybe 🤷‍♂️

        I know we need to draw a line about what police can do with that sort of info so it’s not abused, but these people are still sick fucks.

          • Iceblade@lemmy.world · 5 months ago

          Now that devices are starting to have built in features with AI automatically combing through all information on them, the idea of this sort of stuff being logged in the first place is concerning.

          For instance, should someone prompting an AI to describe them beating up and torturing their boss be flagged for “potentially violent tendencies”? Who decides the “limit” where “privacy” no longer applies and stuff should be flagged, logged and sent off to authorities?

          As I see it, the real issue is people being hurt, not text or fictional material, however sickening it might be.

          If the resources invested in spying on people and building databases were instead directed toward funding robust, publicly available psychiatric care, I expect that would be more effective.

  • peanuts4life@lemmy.blahaj.zone · 5 months ago

    I actually don’t think this is shocking or something that needs to be “investigated.” Other than the sketchy website that doesn’t secure users’ data, that is.

    Actual child abuse and grooming happen on social media, chat services, and in local churches, not in a one-on-one between a user and an LLM.

  • DarkThoughts@fedia.io · 5 months ago

    Paywall. That site frankly does not even look legit and looking at the plethora of other AI sites I don’t know who would use this one. It’s not even displaying correctly and has like 0 information on anything. If I were to stumble upon that site I’d think it is shady as hell.

    • Zak@lemmy.world · 5 months ago

      There are better ways to assess the legitimacy of a media outlet than critiquing its web design. The Wikipedia page might be a good start.

      I don’t like the loginwall, but it doesn’t require payment.

        • otp@sh.itjust.works · 5 months ago

          I also think the issue was with your comment. It could’ve been written a bit more clearly.

          • DarkThoughts@fedia.io · 5 months ago

            I don’t know how my comment is unclear. Unless 404 is an AI site somehow, which I wouldn’t even know about.

            • I Cast Fist@programming.dev · 5 months ago

              Paywall. That site frankly does not even look legit and looking at the plethora of other AI sites I don’t know who would use this one. It’s not even displaying correctly and has like 0 information on anything. If I were to stumble upon that site I’d think it is shady as hell.

              The “Paywall” followed by “That site” makes most people (me included) think that you’re talking about the news outlet, 404media, not the AI site mentioned. Writing something like this:

              Paywall. The AI site they mention does not even look legit (…)

              would not leave such a wide margin for misinterpretation.

              • DarkThoughts@fedia.io · 5 months ago

                No. “Paywall” followed by a period (also known as a “full stop”), followed by a line break / new paragraph (which you conveniently removed), all indicates a separation, and is followed by comparing the AI site to “other AI sites”. You have to be willfully obtuse to assume that when I talk about AI sites I’m referring to the news site there.