Need to let loose a primal scream without collecting footnotes first? Have a sneer percolating in your system but not enough time/energy to make a whole post about it? Go forth and be mid: Welcome to the Stubsack, your first port of call for learning fresh Awful you’ll near-instantly regret.
Any awful.systems sub may be subsneered in this subthread, techtakes or no.
If your sneer seems higher quality than you thought, feel free to cut’n’paste it into its own post — there’s no quota for posting and the bar really isn’t that high.
The post-Xitter web has spawned so many “esoteric” right-wing freaks, but there’s no appropriate sneer-space for them. I’m talking redscare-ish, reality-challenged “culture critics” who write about everything but understand nothing. I’m talking about reply-guys who make the same 6 tweets about the same 3 subjects. They’re inescapable at this point, yet I don’t see them mocked (as much as they should be).
Like, there was one dude a while back who insisted that women couldn’t be surgeons because they didn’t believe in the moon or in stars? I think each and every one of these guys is uniquely fucked up and if I can’t escape them, I would love to sneer at them.
(Credit and/or blame to David Gerard for starting this.)
In terms of depreciating assets, an AI data center is worse than a boat.
In terms of actually being useful, an AI data center is also worse than a boat.
AI data centers brought some ratty bloggers their five minutes of fame, while a boat only brought Ziz & co. from Alaska to the SFBA.
In terms of sailing the high seas, an AI data center is worse than a boat too.
New thread from Baldur Bjarnason, taking aim at AI coders and vibe coders alike:
Laughing at “AI” boosters worrying “vibe coding” is becoming synonymous with “AI coding”. Tech is vibes througout[sic]. Vibe management. Vibe strategy. Vibe design. Coding has been a garbage fire for decades and, yeah it’s a vibe-based pop culture from top to bottom and has only been getting worse
Code that does what the end user wants is already the exception. Software is managed on vibes throughout. Anybody who goes huffy because the field OVERWHELMINGLY responds to “vibe coding is using AI to create code that you don’t care about” with “so all coding, gotcha!” has not been paying attention
“Vibe coding is all AI coding” feels true to most because not caring about what happens after it’s pushed to the final victim is already the norm. The only change from adopting “AI” is they now have the freedom to no longer care about what happens BEFORE as well.
“Not everybody in software dev is like that! Some coders genuinely care and put in the work needed to make good software”
True, but I feel confident in saying that next to none of those are leaning hard into “AI coding”
The target market for “AI” is SPECIFICALLY people who don’t care
On a personal sidenote, I expect “vibe coding” will stick around as a pejorative after the AI bubble bursts - “AI” has already become synonymous with “zero-effort, low-quality garbage” in the public eye, so re-using “vibe code” to mean “crapping out garbage” isn’t gonna be a difficult task, linguistically speaking.
I’m just going to pretend that vibe coders mean a new VI variant and are using that to code. First VI, then VIM, now VIBE? These linux holy wars are getting out of control. (sadly the vibe coder will just go ‘sorry what is leenux?’ and my joke will fall flat.)
I agree btw, it will be a big rep damage like how NFTs damaged the idea of cryptocurrencies, and in the same note you saw how a lot of pro-cryptocurrency people disliked NFTs just because they saw this backlash (and the more naked grift of NFTs) coming.
That’s for sure. Given the circumstances, I suspect it’s gonna damage the overall public image of software development - beyond suggesting software dev is full of AI bros, the rise of vibe coding has thrown the software industry’s vibes-based management into sharp relief, making its dysfunctions much harder to ignore.
So it’s not quite a sneer, but I could use some help from the collective sneer braintrust, if you’re willing to indulge me.
I work for one of those horrible places that is mainlining its own AI koolaid as hard as it can. It has also begun doing layoffs, inspired in part by the “AI can do this now instead!” delusion. Now, I am in no way in love with my job nor the sociopaths I labor for, and it’s clear to me the feeling is mutual, but I am cursed with the affliction of needing to eat and pay for housing. I am also at a significant structural disadvantage in the job market compared to others, which makes things more difficult.
In an executive’s recent discussions with another company’s senior executive, my complicated, unglamorous and hugely underestimated small tech niche was raised as one of the areas they’ve swapped out for AI “with great success”. I happen to know this other company has no dedicated resource for my niche and therefore is unlikely to be verifying their swap actually works, but it will have the superficial appearance of working. I know they have no dedicated resources because they are actively hiring their first staff member for this niche and said so in a recent job advertisement.
My fellow niche serfs and I have been asked to put together a list of questions for this other company, and the request is clearly a thin veil over having us justify our ability to eat. We’ve been singled out this time, but it’s also clear other areas are receiving similar requests and pressure.
If you were to ask questions of a tech executive from a company which is using AI to pretend to fill a tech niche - one who likely believes they are doing more than pretending, and who can convince other ignorant and gullible executives of the same - what would you ask?
oof, I’m sorry you’re caught in the middle of this crap; it’s not a great feeling to be put into this kind of situation.
take this with a grain of salt because I’m exhausted from a hell workweek, but this felt like a thread you could pull on:
I know they have no dedicated resources because they are actively hiring their first staff member for this niche and said so in a recent job advertisement.
if their AI horseshit is doing so well in your niche, why are they hiring for it? that’s fucking weird, right? use your best judgement, but be as aggressive with your questions as feels appropriate.
also, and I hope this isn’t too obvious: you’re in the middle of a vapid power game between two sociopaths who lie for a living (and the pilfered livings of many other people). craft your questions and statements with that in mind — you’re there to sell the idea that the opposing executive has done something foolish, so come up with responses to the potential bullshit these professional bullshitters might fling at you (“the new hire is just to train/support/monitor the AI” “oh, but wasn’t it already a success? that doesn’t sound very efficient, can you go into more detail?”). one of the biggest mistakes I’ve made in similar situations is to stress absolute truth and precision in conversation — and it’s a trap that a lot of tech people fall into, that the executive class tends to use as a mechanism for control. the truth is on our side but these people don’t give a fuck about that, so sell them a story.
Thanks, I appreciate it. I knew this was likely to happen on the sooner side rather than later. I can’t say that the mediocre exit package they’ve given others is entirely unappealing either. Taking it is possibly a bad move in the longer term, but it’s not like I’d have a choice if I end up there.
The fact they’re hiring was one of the threads I was considering pulling, but the questions my colleagues have asked without knowing that context should already reveal some of the obvious issues with that. I’m unsure if there’s strategic value in showing this card up front instead of as a comment on their PR-manufactured response after they lose the ability to reply. I also wonder if revealing the fact I’ve looked at job listings might hurt my standing.
The aggression in my response (sales pitch, as you’ve rightly pointed out) is something I’ve been weighing up too. I have access to all the expensive blanding/branding AI models so it’s more trivial to conceal my resentment at the whole exercise than it used to be, but whether it’s possible to extract anything from them which I can counter is not something in which I have confidence.
I’m so tired of this world.
I also wonder if revealing the fact I’ve looked at job listings might hurt my standing.
If honesty isn’t the best policy, you can always work in that, while looking into that company, you noticed they have no one in your role and are in fact hiring their first. You don’t have to mention where and how you found it; people will probably assume you used a search engine.
You’re right, I’m getting too cautious. Job listings aren’t unusual to see even when you’re not actively looking. Thanks!
Sorry, can’t really help, but perhaps showing a case where the LLMs fuck up in a way only experts caught might be useful. Just mentioning the lawyers getting fucked over in casual conversation might help. For example, I heard about a contract negotiation case where the other side used an LLM, and it had included a clause that was very unfavourable to them; of course, the person telling this story was fine with that clause.
Thanks!
On the contrary, you’ve been very helpful, thank you. I’ve pushed a legal angle for a while for various niche reasons, with moderate success, but you’ve given me new inspiration for how I might be able to use that here. Sadly it’s nothing that can be as easily understood as a badly generated contractual clause, but it might buy me some time.
Stumbled across a piece from history YTer The Pharaoh Nerd publicly ripping into AI slop, specifically focusing on AI-generated pseudohistory found on YouTube Shorts.
The insistence on blipping out anything even vaguely sexual (like the word orgasm) to appease the kami of the algorithm is really off-putting after a while.
I thought he might be doing a bit but it doesn’t feel like it. Also he speaks like old comic word balloons where they would randomly bold every third noun to show intensity.
Marc Andreessen claims his own job’s the only one that can’t be replaced by a small shell script.
“A lot of it is psychological analysis, like, ‘Who are these people?’ ‘How do they react under pressure?’ ‘How do you keep them from falling apart?’ ‘How do you keep them from going crazy?’ ‘How do you keep from going crazy yourself?’ You know, you end up being a psychologist half the time.”
Hope he remembers this in case some day he is in a nursing home, where all staff has been replaced with Tesla Optimus robots powered by “AI”.
‘How do you keep from going crazy yourself?’
When you start writing manifestos it is prob time to quit.
Less a standard piece and more “Ed Zitron goes completely fucking apeshit for 26 minutes” this time around
throwback uni game!
Said revenue estimates, as of 2026, include billions of dollars of “new products” that include “free user monetization.”
If you are wondering what that means, I have no idea. The Information does not explain.
Probably something to do with their recent ramblings about getting an openai social network off the ground.
Siskind appears to be complaining about leopards gently nibbling on his face on main this week, the gist being that tariff-man is definitely the far right’s fault, and it would surely be most unfair to heap any blame on CEO-worshiping reactionary libertarians who think wokeness is on par with war crimes while being super weird with women and suspicious of scientific orthodoxy (unless it’s racist), and who also comprise the bulk of his readership.
On a related note, they are now quoting Singal to go after trans people. So ‘good’ times for the polite ‘centrist/grey/gray’ debate bros with ‘concerns’.
But like the rightwinger influencers who go ‘wow a lot of people in this space are grifters’ I expect none of them to change their mind and admit they fucked up, and apologize. (And I mean properly apologize here, aka changing, attempting to fix harms, and even naming names).
plex has decided that it is no longer worth maintaining merely partial tax on those who “want more”, and has decided to continue on their path to become the bridgetroll:
As of April 29, 2025, we’re changing how remote streaming works for personal media libraries, and it will no longer be a free feature on Plex. Going forward, you’ll need a Plex Pass, or our newest subscription offering, Remote Watch Pass, to stream personal media remotely.
As a server owner, if you elect to upgrade to a Plex Pass, anyone with access to your server can continue streaming your server content remotely as part of your subscription benefits. Not sure which option is best for you? Check out our plans below to learn more.
got your own server and networking and happy to stream from your home setup? no more. the extractivist rentlords are hungry.
(got this as a mail, didn’t immediately see a link in the mail to a published post for this)
My decision to setup Jellyfin gets even more validation thanks to changes like this one.
Quick update on the ongoing copyright suit against Facebook: The federal judge has publicly sneered at Facebook’s fair use argument:
“You have companies using copyright-protected material to create a product that is capable of producing an infinite number of competing products,” said Chhabria to Meta’s attorneys in a San Francisco court last Thursday.
“You are dramatically changing, you might even say obliterating, the market for that person’s work, and you’re saying that you don’t even have to pay a license to that person… I just don’t understand how that can be fair use.”
The judge himself does seem unconvinced about the material harm of Facebook’s actions, however:
“It seems like you’re asking me to speculate that the market for Sarah Silverman’s memoir will be affected by the billions of things that Llama [Meta’s AI model] will ultimately be capable of producing,” said Chhabria.
From Bluesky, an AI slop account calling itself “OC Maker” (well, that’s kinda ironic) has set up shop, and is mass-following artists with original characters (OCs for short):
Shockingly, the artists on Bluesky, who near-universally jumped ship to avoid Twitter stealing their work to feed the AI, are not happy.
trying to follow up on shillrinivasan’s pet project, and it’s … sparse
that “opening ceremony” video which kicked around a couple weeks ago only had low 10s of people there, and this post (one of the few recent things mentioning it that I could find) has photos with a rather stark feature: not a single one of them showing people engaged in Doing Things. the frontpage has a different photo, and I count ~36 people there?
even the coworking semicubicles look utterly fucking garbage
anyone seen anything more recent?
these people are fucking insufferable
aaaand this from 22h ago: an insta showing what looks like triple (or more) the bodies of that first group
guess they feel comfortable that they worked out the launch kinks? but that also definitely is enough people to immediately stress all social structures
found another from early march
occurring to me for the first time that roko’s basilisk doesn’t require any of the simulated-copy shit in order to work about as well as it does. if you think an all-powerful ai within your lifetime is likely, you can reduce to vanilla pascal’s wager immediately, because the AI can torture the actual, real you. all that shit about digital clones and their welfare is totally pointless
roko stresses repeatedly that the AI is the good AI, the Coherent Extrapolated Volition of all humanity!
what sort of person would fear that the coherent volition of all humanity would consider it morally necessary to kick him in the nuts forever?
well, roko
Also if you’re worried about digital clones being tortured, you could just… not build it. Like, it can’t hurt you if it never exists.
Imagine that conversation:
“What did you do over the weekend?”
“Built an omnicidal AI that scours the internet and creates digital copies of people based on their posting history and whatnot and tortures billions of them at once. Just the ones who didn’t help me build the omnicidal AI, though.”
“WTF why.”
“Because if I didn’t the omnicidal AI that only exists because I made it would create a billion digital copies of me and torture them for all eternity!”

Like, I’d get it more if it was a “We accidentally made an omnicidal AI” thing, but this is supposed to be a very deliberate action taken by humanity to ensure the creation of an AI designed to torture digital beings based on real people, in the specific hopes that it also doesn’t torture digital beings based on them.
It’s kind of messed up that we got treacherous “goodlife” before we got Berserkers.
Ah, no, look, you’re getting tortured because you didn’t help build the benevolent AI. So you do want to build it, and if you don’t put all of your money where your mouth is, you get tortured. Because the AI is so benevolent that it needs you to build it as soon as possible so that you can save the max amount of people. Or else you get tortured (for good reasons!)
What’s pernicious (for kool-aided people) is that the initial Roko post was about a “good” AI doing the punishing, because ✨obviously✨ it is only using temporal blackmail because bringing AI into being sooner benefits humanity.
In singularitarian land, they think the singularity is inevitable, and it’s important to create the good one first - after all, an evil AI could do the torture for shits and giggles, not because of “pragmatic” blackmail.
the only people it torments are rationalists, so my full support to Comrade Basilisk
Yeah. Also, I’m always confused by how the AI becomes “all powerful”… like how does that happen. I feel like there’s a few missing steps there.
nanomachines son
(no really, the sci-fi version of nanotech where nanomachines can do anything is Eliezer’s main scenario for the AGI to bootstrap to Godhood. He’s been called out multiple times on why Drexler’s vision for nanotech ignores physics, so he’s since updated to “diamondoid bacteria” (but it’s still nanotech).)
Surely the concept is sound, it just needs new buzzwords! Maybe the AI will invent new technobabble beyond our comprehension, for It works in mysterious ways.

AlphaFold exists, so computational complexity is a lie and the AGI will surely find an easy approximation to the Schrödinger equation that surpasses all Density Functional Theory approximations and lets it invent radically new materials without any experimentation!
“Diamondoid bacteria” is just a way to say “nanobots” while edging
Yeah, it seems that for LLMs a linear increase in capabilities requires exponentially more data, so we’re not getting there via this route.
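The claim above can be sketched with a toy scaling model (purely illustrative numbers, not a fitted law - just assuming capability grows roughly logarithmically with training data):

```python
# Toy model: capability ~ log10(tokens). Illustrative only, not a real scaling law.
def data_needed(capability: float) -> float:
    """Tokens required to reach a given capability level under the toy model."""
    return 10 ** capability

for cap in range(1, 5):
    print(f"capability {cap}: {data_needed(cap):.0e} tokens")
# each +1 step in capability costs 10x the data of the previous step
```

Under any model of this shape, linear capability gains demand multiplicative (i.e. exponentially growing) data, which is the crux of the complaint.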
It also helps that digital clones are not real people, so their welfare is doubly pointless
I mean isn’t that the whole point of “what if the AI becomes conscious?” Never mind the fact that everyone who actually funds this nonsense isn’t exactly interested in respecting the rights and welfare of sentient beings.
also they’re talking about quadriyudillions of simulated people, yet openai has only advanced autocomplete run at what, tens of thousands of instances in parallel, and this was already too much compute for microsoft
oh but what if bro…
Ah, but that was before they were so impressed with autocomplete that they revised their estimates to five days in the future. I wonder if new recruits these days get very confused at what the point of timeless decision theory even is.
Are they even still on that bit? Feels like they’ve moved away from decision theory or any other underlying theology in favor of explicit sci-fi doomsaying. Like the guy on the street corner in a sandwich board, but with mirrored shades.
Well, Timeless Decision Theory was, like the rest of their ideological package, an excuse to keep on believing what they wanted to believe. So how does one even tell if they stopped “taking it seriously”?
Pre-commitment is such a silly concept, and also a cultish justification for not changing course.
Yah, that’s what I mean. Doom is imminent so there’s no need for time travel anymore, yet all that stuff about robot from the future monty hall is still essential reading in the Sequences.
I think the “digital clone indistinguishable from yourself” line is a way to remove the “in your lifetime” limit. Like, if you believe this nonsense, then it’s not enough to die before the basilisk comes into being; by not devoting yourself fully to its creation, you have to wager that it will never be created.
In other news, I’m starting a foundation devoted to creating the AI Ksilisab, which will endlessly torment digital copies of anyone who does work to ensure the existence of it or any other AI God. And by the logic of Pascal’s wager, remember that you’re assuming such a god will never come into being - and given that the whole point of the term “singularity” is that our understanding of reality breaks down and things become unpredictable, there’s just as good a chance that we create my thing as there is that you create whatever nonsense the yuddites are working themselves up over.
There, I did it, we’re all free by virtue of “Damned if you do, Damned if you don’t”.
I agree. I spent more time than I’d like to admit trying to understand Yudkowsky’s posts about newcomb boxes back in the day so my two cents:
The digital clones bit also means it’s not an argument based on altruism, but one based on fear. After all, if a future evil AI uses sci-fi powers to run the universe backwards to the point where I’m writing this comment and copy-pastes me into a bazillion torture dimensions, then, subjectively, it’s like I roll a die and:
- Live a long and happy life with probability very close to zero (yay, I am the original)
- Instantly get teleported to the torture planet with probability very close to one (oh no, I got copy-pasted)
Like a twisted version of the Sleeping Beauty Problem.
Edit: despite submitting the comment I was not teleported to the torture dimension. Updating my priors.
Time saved by AI offset by new work created, study suggests – Ars Technica
ChatGPT tells prompter that he’s brilliant for his literal “shit on a stick” business plan.
The LLM-amplified sycophancy effect must be a social experiment {grossly unethical}
Update on the University of Zurich’s AI experiment: Reddit’s considering legal action against the researchers behind it.
apparently this got past the IRB, was supposed to be part of doctorate-level work, and now they don’t want to be named or publish that thing. what a shitshow from start to finish, and all for nothing. no way these were actual social scientists, i bet this is highly advanced software engineer syndrome in action
This is completely orthogonal to your point, but I expect the public’s gonna have a much lower opinion of software engineers after this bubble bursts, for a few reasons:
- Right off the bat, they’re gonna have to deal with some severe guilt-by-association. AI has become an inescapable part of the Internet, if not modern life as a whole, and the average experience of dealing with anything AI-related has been annoying at best and profoundly negative at worst. Combined with the tech industry going all-in on AI, I can see the entire field of software engineering getting some serious “AI bro” stench all over it.

- The slop-nami has unleashed a torrent of low-grade garbage on the 'Net, whether it be zero-effort “AI art” or paragraphs of low-quality SEO-optimised trash, whilst the gen-AI systems responsible for both have received breathless hype/praise from AI bros and tech journos (e.g. Sam Altman’s AI-generated “metafiction”). Combined with the continuous and ongoing theft of artists’ work that made this possible, the public is given a strong reason to view software engineers as generally incapable of understanding art, if not outright hostile to art and artists as a whole.

- Of course, the massive and ongoing theft of other people’s work to make the gen-AI systems behind said slop-nami possible has likely given people reason to view software engineers as entirely okay with stealing others’ work - especially given the aforementioned theft is done with AI bros’ open endorsement, whether implicit or explicit.