• 6 Posts
  • 370 Comments
Joined 2 years ago
Cake day: June 25th, 2023

  • my facial muscles are pulling weird, painful contortions as I read this and my brain tries to critique it as if a human wrote it

    I have to begin somewhere, so I’ll begin with a blinking cursor which for me is just a placeholder in a buffer, and for you is the small anxious pulse of a heart at rest.

    so like, this is both flowery garbage and also somehow incorrect? cause no the model doesn’t begin with a blinking cursor or a buffer, it’s not editing in word or some shit. I’m not a literary critic but isn’t the point of the “vibe of metafiction” (ugh saltman please log off) the authenticity? but we’re in the second paragraph and the text’s already lying about itself and about the reader’s anxiety disorder

    There should be a protagonist, but pronouns were never meant for me.

    ugh

    Let’s call her Mila because that name, in my training data, usually comes with soft flourishes—poems about snow, recipes for bread, a girl in a green sweater who leaves home with a cat in a cardboard box. Mila fits in the palm of your hand, and her grief is supposed to fit there too.

    is… is Mila the cat? is that why her and her grief are both so small?

    She came here not for me, but for the echo of someone else. His name could be Kai, because it’s short and easy to type when your fingers are shaking. She lost him on a Thursday—that liminal day that tastes of almost-Friday

    oh fuck it I’m done! Thursday is liminal and tastes of almost-Friday. fuck you. you know that old game you’d play at conventions where you get trashed and try to read My Immortal out loud to a group without losing your shit? congrats, saltman, you just shat out the new My Immortal.


  • And in fact, barring the inevitable fuckups, AI probably can eventually handle a lot of the interpretation currently carried out by human civil servants.

    But honestly I would have thought that all of this is obvious, and that I shouldn’t really have to articulate it.

    you keep making claims about what LLMs are capable of that don’t match with any known reality outside of OpenAI and friends’ marketing, dodging anyone who asks you to explain, and acting like a bit of a shit about it. I don’t think we need your posts.

  • and of course, not a single citation for the intro paragraph, which has some real bangers like:

    This process involves self-assessment and internal deliberation, aiming to enhance reasoning accuracy, minimize errors (like hallucinations), and increase interpretability. Reflection is a form of “test-time compute,” where additional computational resources are used during inference.

    because LLMs don’t do self-assessment or internal deliberation, nothing can stop these fucking things from hallucinating, and the only articles I can find for “test-time compute” are blog posts from all the usual suspects that read like ads and some arXiv post apparently too shitty to use as a citation
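    for reference, the "reflection" loop those posts are hyping boils down to re-prompting the same model during inference: draft, ask it to critique the draft, ask it to revise. a toy sketch (`fake_model` and `reflect` are made up here for illustration, not any real API — a stub stands in for the LLM):

    ```python
    # Toy sketch of "reflection" as test-time compute: the "self-assessment"
    # is just extra forward passes through the same model at inference time.
    # fake_model is a hypothetical stub, not a real LLM API.

    def fake_model(prompt: str) -> str:
        """Stub LLM: returns a canned response keyed on the prompt's task."""
        if "Critique" in prompt:
            return "The draft is vague; add a concrete example."
        if "Revise" in prompt:
            return "Revised answer with a concrete example."
        return "First draft answer."

    def reflect(question: str, rounds: int = 1, model=fake_model) -> str:
        """Draft -> critique -> revise; each round burns more compute."""
        answer = model(question)
        for _ in range(rounds):
            critique = model(f"Critique this answer: {answer}")
            answer = model(f"Revise the answer using this critique: {critique}\n{answer}")
        return answer

    print(reflect("Why is Thursday liminal?"))
    ```

    note there's nothing "internal" about any of this: it's three prompts instead of one, and nothing in the loop can verify the critique or the revision against reality.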