

I feel like calling it race pseudoscience inadvertently suggests the existence of legitimate race science.
Bistable multivibrator
Non-state actor
Tabs for AI indentation, spaces for AI alignment
410,757,864,530 DEAD COMPUTERS
This is why I absolutely cannot fucking stand creative work being referred to as “content”. “Content” is how you refer to the stuff on a website when you’re designing the layout and don’t know what actually goes on the page yet. “Content” is how you refer to the collection of odds and ends in your car’s trunk. “Content” is what marketers call the stuff that goes around the ads.
“Content”… is Whatever.
I was going to make a comment on the Stubsack thread about how it kind of ticks me off how “content creator” has permeated its way so deep into the vernacular. I can forgive it when it’s used as a clumsy term to talk about creative workers across multiple media, but something like a video essayist calling another video essayist a content creator just gives me the ick. Have some pride and solidarity in your art form, for fuck’s sake.
Also HP has so far had this kind of a reverse Midas touch where they turn every networking gear company they touch into HP.
HPE buys Juniper. Fuck.
Comic Book Guy energy
Our plagiarism machine is not a plagiarism machine. In fact, it’s so much not a plagiarism machine we added an optional feature that lets it detect when its plagiarism is too obvious and make its plagiarism less obvious.
I have a Kubernetes cluster running my AI agents for me so I don’t have to learn how to set up AI agents. The AI agents are running my Kubernetes cluster so that I don’t have to learn Kubernetes either. I’m paid $250k a year to lie to myself and others that I’m making a positive contribution to society. I don’t even know what OS I’m running and at this point I’m afraid to ask.
They quickly accelerated their way out.
Tech support workers who may or may not speak English as a second language or with a strong accent can sometimes be of actual help and sometimes not, much like people with possibly more familiar accents can. A call center AI has never been anywhere close to solving a problem I’ve had.
You’re no less fucked but instead of a human being getting underpaid for the trouble, a shitty tech corporation is overpaid.
A tool uses an LLM, the LLM uses a tool. What a beautiful ouroboros.
I don’t want to live in the world of The Very Hungry Caterpillar.
I don’t want to live in the world of The Giving Tree.
I don’t want to live in the world of Pippi Longstocking (Sweden).
I want to live in the world of Goosebumps, The Yellow Pages, and JBL Tune Beam Quick Start Guide.
Someone ask if those fucks wanna see how much of the modern world was actually built by China? Wanna let them run it instead?
Frankly yes. In a better world art would not be commodified and the economic barriers that hinder commissioning of art from skilled human artists in our capitalist system would not exist, and thus generative AI recombining existing art would likely be much less problematic and harmful to both artists and audiences alike.
But also that is not the world where we live, so fuck GenAI and its users and promoters lmao stay mad.
Yes, that is also the case.
Yea, what if the master owns a wrecking ball, a bulldozer, a heavy duty excavator and a bunch of dynamite?
Yes, this is a metaphor for C programming, how did you know?
You’re both incorrect. I am the least fascist programmer and I’m here to tell you programming is inherently fascist.
The simultaneous problem and benefit of the Stubsack thread is that a good chunk of the best posts of this community are contained within it.
It’s just depressing. I don’t even think Yudkowsky is being cynical here, but expressing genuine and partially justified anger, while also being very wrong and filtering the event through his personal brainrot. This would be a reasonable statement to make if I believed in just one or two of the implausible things he believes in.
He’s absolutely wrong in thinking the LLM “knew enough about humans” to know anything at all. His “alignment” angle is also a really bad way of talking about the harm that language model chatbot tech is capable of doing, though he’s correct in saying the ethics of language models aren’t a self-solving issue, even though he expresses it in critihype-laden terms.
Not that I like “handing it” to Eliezer Yudkowsky, but he’s correct to be upset about a guy dying because of an unhealthy LLM obsession. Rhetorically, this isn’t that far from this forum’s reaction to children committing suicide because of Character.AI, just that most people on awful.systems have a more realistic conception of the capabilities and limitations of AI technology.
I think you’re deliberately setting up for this response, so: “more like human sole”.
When you nearly miss the submission deadline for the performative misogyny olympics (genuine opinions only edition). Also, the problem with everything is that there exist things which are not gambling, or are insufficiently and unnecessarily high-stakes gambling.