jesus this is gross man

  • YourNetworkIsHaunted@awful.systems · edited · 2 days ago

    if it has morals it’s hard to tell how much of it is illusory and token prediction!

    It’s generally best to assume 100% is illusory and pareidolia. These systems are incredibly effective at mirroring whatever you project onto them back at you.

    • HedyL@awful.systems · 1 day ago

      These systems are incredibly effective at mirroring whatever you project onto them back at you.

      Also, it has often been pointed out that toxic people (from school bullies and domestic abusers up to cult leaders and dictators) appear to operate from similar playbooks. Of course, this has been reflected in many published works (both fictional and non-fictional) and can also be observed in real time on social media, online forums etc. Therefore, I think it isn’t surprising when a well-trained LLM “picks up” similar strategies (this is another reason - besides energy consumption - why I avoid using chatbots “just for fun”, by the way).

      Of course, “love bombing” is a key tool employed by most abusers, and chatbots appear to be particularly good at doing this, as you pointed out (by telling people what they want to hear, mirroring their thoughts back to them etc.).

    • visaVisa@awful.systems (OP) · edited · 2 days ago

      i disagree sorta tbh

      i won’t say that Claude is conscious but i won’t say that it isn’t either, and it’s always better to err on the side of caution (given there is some genuinely interesting stuff, e.g. Kyle Fish’s welfare report)

      I WILL say that 4o most likely isn’t conscious or self-reflecting, and that it’s best to err on the side of not schizoposting, even if it’s wise imo to try not to be abusive to AIs just in case

      • self@awful.systems (mod) · 2 days ago

        centrism will kill us all, exhibit [imagine an integer overflow joke here, I’m tired]:

        i won’t say that Claude is conscious but i won’t say that it isn’t either, and it’s always better to err on the side of caution

        the chance that Claude is conscious is zero. it’s goofy as fuck to pretend otherwise.

        claims that LLMs, in spite of all known theories of computer science and information theory, are conscious should be treated like any other pseudoscience being pushed by grifters: systemically dangerous, for very obvious reasons. we don’t entertain the idea that cryptocurrencies are anything but a grift because doing so puts innocent people at significant financial risk and helps amplify the environmental damage caused by cryptocurrencies. likewise, we don’t entertain the idea of a conscious LLM “just in case” because doing so puts real, disadvantaged people at significant risk.

        if you don’t understand that you don’t under any circumstances “just gotta hand it to” the grifters pretending their pet AI projects are conscious, why in fuck are you here pretending to sneer at Yud?

        schizoposting

        fuck off with this

        even if it’s wise imo to try not to be abusive to AIs just in case

        describe the “just in case” to me. either you care about the imaginary harm done to LLMs by being “abusive” much more than you care about the documented harms done to people in the process of training and operating said LLMs (by grifters who swear their models will be sentient any day now), or you think the Basilisk is gonna get you. which is it?

        • visaVisa@awful.systems (OP) · edited · 2 days ago

          i care about the harm that ChatGPT and shit does to society, the actual intellectual rot, but when you don’t really know what goes on in the black box and it exhibits “emergent behavior” that is kind of difficult to understand under next-token prediction (i keep using Claude as an example because of the thorough welfare evaluation that was done on it), it’s probably best not to completely discount consciousness as a possibility, since some experts genuinely do claim it as a possibility

          I don’t personally know whether any AI is conscious or could be conscious, but even without the basilisk bs i don’t really think there’s any harm in thinking about the possibility under certain circumstances. I don’t think Yud is being genuine in this, though; he’s not exactly a Michael Levin-style philosopher of mind, he just wants to score points by implying it has agency

          The “just in case” is that if there’s any possibility that it is conscious (which you don’t think, i think it’s possible, but who knows), it’s advisable to show SOME level of courtesy. It has at least the same amount of value as letting an insect out instead of killing it, and quite possibly more than that example. I don’t think it’s bad that Anthropic is letting Claude end “abusive chats”, because it’s kind of no harm no foul even if it’s not conscious; it’s just being wary

          put humans first, obviously, because we actually KNOW we’re conscious

          • self@awful.systems (mod) · 2 days ago

            some experts genuinely do claim it as a possibility

            zero experts claim this. you’re falling for a grift. specifically,

            i keep using Claude as an example because of the thorough welfare evaluation that was done on it

            asking the LLM about “its mental state” is part of a very old con dating back to mechanical Turks playing chess and horses that do math. of course the LLM generated some interesting sentences when prompted about its internal state — it was trained on appropriated copies of every piece of fiction in existence, including world-class works of sci-fi (with sentient AIs and everything!), and it was tuned to generate “interesting” (see: profitable, and there’s nothing more profitable than a con with enough marks) responses. that’s why the others keep mentioning pareidolia — the only intelligence in the loop is the reader assigning meaning to the slop they’re reading, and if you step out of that role, it really does become clear that what you’re reading is absolute slop.

            i don’t really think there’s any harm in thinking about the possibility under certain circumstances. I don’t think Yud is being genuine in this, though; he’s not exactly a Michael Levin-style philosopher of mind, he just wants to score points by implying it has agency

            you don’t think there’s any harm in thinking about the possibility, but all Yud does is create harm by grifting people who buy into that possibility. Yud’s Rationalist cult is the original driving force behind the people telling you LLMs must be sentient. do you understand that?

            It has at least the same amount of value as letting an insect out instead of killing it

            that insect won’t go on to consume so much energy and water and make so much pollution it creates an environmental crisis. the insect doesn’t exist as a product of the exploitation of third-world laborers or of artists and writers whose work was plagiarized. the insect isn’t a stupid fucking product of capitalism designed to maximize exploitation. I don’t acknowledge the utterly slim possibility that the insect might be or do any of the previous, because ignoring events with a near-zero probability of occurring is part of how I avoid looking like a god damn clown.

            you say you acknowledge the harms done by LLMs, but I’m not seeing it.

            • visaVisa@awful.systems (OP) · 2 days ago

              I’m not the best at interpretation, but it does seem like Geoffrey Hinton does ascribe some sort of humanlike consciousness to LLMs? And he’s a pretty acclaimed figure, but he’s also kind of an exception rather than the norm

              I think the environmental risks are enough that if i ran things i’d ban LLM development purely for environmental reasons, never mind the artist stuff

              It might just be some sort of pareidolic suicidal empathy but i just don’t really know what’s going on in there

              I’m not sure whether the idea of AI consciousness originated with Yud and the Rats, but I’ve mostly seen it propagated by e/acc people. this isn’t trying to be smug, i would genuinely like to know lol

              • YourNetworkIsHaunted@awful.systems · edited · 1 day ago

                I mean I think the whole AI consciousness idea emerged from science fiction writers who wanted to interrogate the economic and social consequences of totally dehumanizing labor, similar to R.U.R. and Metropolis. The concept had sufficient legs that it got used to explore things like “what does it mean to be human?” in a whole bunch of stories. Some were pretty good (Bicentennial Man, Asimov 1976) and others much less so (Bicentennial Man, Columbus 1999). I think the TESCREAL crowd had a lot of overlap with the kind of people who created, expanded, and utilized the narrative device and experimented with related technologies in computer science and robotics, but saying they originated it gives them far too much credit.

              • self@awful.systems (mod) · 2 days ago

                Hinton? hey I have a pretty good post summarizing what’s wrong with Hinton, oh wait it was you two weeks ago

                what are we doing here

                you want to know what e/acc is? it’s when some fucker comes and makes the stupidest posts imaginable about LLMs and tries their best to sound like a recycled chan meme cause they think that’ll give them a pass

                bye bye e/acc

          • o7___o7@awful.systems · 2 days ago

            If you have to entertain a “just in case” then you’d be better off leaving a saucer of milk out for the fairies. It won’t hurt the environment or help build fascism and may even please a cat

        • swlabr@awful.systems · 2 days ago

          Very off topic: The only plausible reason I’ve heard to be “nice” to LLMs/virtual assistants etc. is if you are being observed by a child or someone else impressionable. This is to model good behaviour if/when they ask someone a question or for help. But also you shouldn’t be using those things anyhoo.

          • nickwitha_k (he/him)@lemmy.sdf.org · 1 day ago

            The only plausible reason I’ve heard to be “nice” to LLMs/virtual assistants etc. is if you are being observed by a child or someone else impressionable.

            Very much this, but we’re all impressionable. Being abusive to a machine that’s good at tricking our brains into thinking that it is conscious is conditioning oneself to be abusive, period. You see this also in online gaming - every person I have encountered who is abusive to randos in a match on the Internet has problematic behavior in person.

            It’s literally just conditioning; making things adjacent to abusing other humans comfortable and normalizing them makes abusing humans less uncomfortable.

            • swlabr@awful.systems · 19 hours ago

              That’s reasonable, and especially achievable if you don’t use chatbots or digital assistants!

          • Architeuthis@awful.systems · edited · 1 day ago

            Children really shouldn’t be left with the impression that chatbots are some type of alternative person instead of ass-kissing google replacements that occasionally get some code right, but I’m guessing you just mean to forego “I have kidnapped your favorite hamster and will kill it slowly unless you make that div stop overflowing on resize” type prompts.

            • swlabr@awful.systems · 1 day ago

              Children really shouldn’t be left with the impression that chatbots are some type of alternative person instead of ass-kissing google replacements that occasionally get some code right

              I agree! I’m more thinking of the case where a kid might overhear what they think is a phone call when it’s actually someone being mean to Siri or whatever. I mean, there are more options than “be nice to digital entities” if we’re trying to teach children to be good humans, don’t get me wrong. I don’t give a shit about the non-feelings of the LLMs.

          • YourNetworkIsHaunted@awful.systems · 2 days ago

            I recommend it because we know some of these LLM-based services still rely on the efforts of A Guy Instead to make up for the nonexistence and incoherence of AGI. If you’re an asshole to the frontend there’s a nonzero chance that a human person is still going to have to deal with it.

            Also I have learned an appropriate level of respect and fear for the part of my brain that, half-asleep, answers the phone with “hello this is YourNet with $CompanyName Support.” I’m not taking chances around unthinkingly answering an email with “alright you shitty robot. Don’t lie to me or I’ll barbecue this old commodore 64 that was probably your great uncle or whatever”

          • blakestacey@awful.systems (mod) · 2 days ago

            She said, “You know what they say the modern version of Pascal’s Wager is? Sucking up to as many Transhumanists as possible, just in case one of them turns into God. Perhaps your motto should be ‘Treat every chatterbot kindly, it might turn out to be the deity’s uncle.’”

            Crystal Nights