jesus this is gross man

  • bitofhope@awful.systems · edited · 23 hours ago

    It’s just depressing. I don’t even think Yudkowsky is being cynical here, but expressing genuine and partially justified anger, while also being very wrong and filtering the event through his personal brainrot. This would be a reasonable statement to make if I believed in just one or two of the implausible things he believes in.

    He’s absolutely wrong in thinking the LLM “knew enough about humans” to know anything at all. His “alignment” angle is also a really bad way of talking about the harm that language model chatbot tech is capable of doing, though he’s correct in saying the ethics of language models aren’t a self-solving issue, even though he expresses it in critihype-laden terms.

    Not that I like “handing it” to Eliezer Yudkowsky, but he’s correct to be upset about a guy dying because of an unhealthy LLM obsession. Rhetorically, this isn’t that far from this forum’s reaction to children committing suicide because of Character.AI, just that most people on awful.systems have a more realistic conception of the capabilities and limitations of AI technology.

    • fullsquare@awful.systems · edited · 5 hours ago

      though he’s correct in saying the ethics of language models aren’t a self-solving issue, even though he expresses it in critihype-laden terms.

      the subtext is always that he also says he knows how to solve it and you should throw money at cfar pleaseeee or basilisk will torture your vending machine business for seven quintillion years