Need to let loose a primal scream without collecting footnotes first? Have a sneer percolating in your system but not enough time/energy to make a whole post about it? Go forth and be mid: Welcome to the Stubsack, your first port of call for learning fresh Awful you’ll near-instantly regret.
Any awful.systems sub may be subsneered in this subthread, techtakes or no.
If your sneer seems higher quality than you thought, feel free to cut’n’paste it into its own post — there’s no quota for posting and the bar really isn’t that high.
The post-Xitter web has spawned so many “esoteric” right wing freaks, but there’s no appropriate sneer-space for them. I’m talking redscare-ish, reality-challenged “culture critics” who write about everything but understand nothing. I’m talking about reply-guys who make the same 6 tweets about the same 3 subjects. They’re inescapable at this point, yet I don’t see them mocked (as much as they should be)
Like, there was one dude a while back who insisted that women couldn’t be surgeons because they didn’t believe in the moon or in stars? I think each and every one of these guys is uniquely fucked up and if I can’t escape them, I would love to sneer at them.
(Semi-obligatory thanks to @dgerard for starting this.)
hackernews: We’re going to build utopia on Mars, reinvent money, and construct god.
also hackernews: moving off facebook is too hard :( :( :(
they will take facebook there with them. none of their space escapism will solve their problems because they take them along. these mfers will do anything but go to therapy
Xcancel is giving me issues so gimme a sec. That said: another killing. https://x.com/st_rev/status/1882852779582239053
Is there any rundown on this backstory for people who missed it happening live over the last few years that doesn’t get sidetracked into theological disputes with the murder cult?
I would appreciate this too, frankly. The rabbit hole is deep, and full of wankers.
I don’t know if I could do it.
https://www.courtlistener.com/docket/69573795/5/1/united-states-v-youngblut/
More details. The feds were watching them.
rat death squads are a thing now? this wasn’t on my 2025 bingo card
Related to the other killing by the border patrol people, Chad Loder noticed the reports on the shooting have some strange wording which might imply the cop who was shot was actually hit by another cop. (Assuming this is the same shooting.)
Had not considered the meth angle. Would explain a lot.
Wasn’t much ammo either.
Wasn’t much ammo either.
I’m just sitting here with a bit of European culture shock.
Sometimes I forget these things.
Same thing with trucks I imagine.
it seems like common commercial box size for this caliber is 50, and some were likely already used for practice. soviet pistol ammo for army used to come in cans (“spam cans”), 1200+ per
would be easy to figure out during autopsy as cops had 9mm and zizians had .380 and .40
aw man first he gets attacked during a perfectly polite threat of eviction, now this?
“A big part of this problem was this moratorium on rent, COVID. People took advantage of it,” Carl Lind said.
rest in peace little angel.
It’s a pity there are no Journalists left in America because 50 years ago this would be a Hot Scoop and eventually there’d be a movie about it
god st rev you’re such a dumb bitch
From the “flipping through LessWrong for entertainment” department:
What effect does LLM use have on the quality of people’s thinking / knowledge?
- I’d expect a large positive effect from just making people more informed / enabling them to interpret things correctly / pointing out fallacies etc.
You’d think the AI safety chuds would have more reservations about using GPT, which they believe has sapience, to learn things. They have the concept of an AI being a good convincer, which, hey, idiots, how have none of you considered that the great convincing has already started? Also, how have none of you realised that maybe you should be a little harder to convince in general???
It is a long-established truth that it’s significantly easier to con someone who thinks they’re smarter than you. Also, as I think about it a little, there seems to be a reasonable corollary of their approach to Bayesian thinking: never question anything that matches your expectations, which is exactly how you get taken advantage of by the kind of grifter they’re attached to. Like, they’ve been thinking about the singularity for long enough that the Sams (Bankman-Fried, Altman, etc.) have a well-developed script for what they expect the first stages to look like, and it is, as demonstrated, very easy to fake that.
The findings revealed a significant negative correlation between frequent AI tool usage and critical thinking abilities, mediated by increased cognitive offloading
i think it was posted somewhere in techtakes https://www.mdpi.com/2075-4698/15/1/6
yeah, I posted in previous stubsack I think
Polish commentary on Hitlergruß: https://bsky.app/profile/smutnehistorie.bsky.social/post/3lgaoyezhgc2c
Translation:
- it’s just a Hindu symbol of prosperity
- a normal Roman salute
- regular rail car
- wait a second
You may have heard that Catturd doesn’t have any fiber in his diet and was hospitalized for bowel blockage. (Best sneer I’ve seen so far: “can’t turd.”) Along similar lines, Srid isn’t taking his statins for high cholesterol caused by a carnivore diet.
Meta: I’m kind of pissed that Catturd is WP notable but laughing my ass off at the page for carnivore diets. Life takes and gives.
Here’s a bonus high fiber diet pro-tip: Metamucil tastes like old socks and individual capsules have hardly any fiber anyway, so I eat triscuits and Oroweat Double-Fiber bread instead because they’re both much much better tasting. Also chili is the food of the gods.
you could say that being full of shit finally caught up to him
My favorite part of the carnivore diet is that apparently scurvy can become enough of a problem that you’ll see references to “not wanting to start the vitamin C debate” in forums.
I’m pretty sure it’s not just a me thing, but I thought we all knew that sailors kept citrus on board specifically to prevent scurvy by providing vitamin C and that we all learned about this as kids when either a teacher tried to make the colonial era interesting or we got vaguely curious about pirates at some point.
scurvy? what year is it? maybe they need to include mice in their diet since rodents can make their own vitamin C (iirc)
if they start eating rat, does that technically define them as cannibals? given how much of their ilk become diet target…
I learnt about it because I was so damn interested in sauerkraut.
what, are statins woke now?
edit wtf 'sneak did this too? https://news.ycombinator.com/item?id=42800452
Refusal of statins was one of the most prominent anti-medical trends I remember observing among right-wing acquaintances, even well before such people got on the anti-vax bandwagon. To be sure, some people experience bad side-effects (including my mom, at least for a while), but it definitely seemed like a few bits of anecdata in the early 2010s built into a broad narrative of “doctor’s tryin’ ta kill ya”
I love how srid deflects by claiming no one has reported bad outcomes from the “meat and butter” diet… I found an endless stream of anecdotes from Google, like this.
can you imagine sneak, of all people, telling you you’re crazy and probably being right?
So that’s how to translate “Yo, this diet is for chumps” into Wikipedian.
til that there’s not just one millionaire with a family business in south african mining in the current american oligarchy, but at least two. (thiel’s father was an exec at a mine in what is today Namibia). (they mined uranium). (it went towards the RSA nuclear program). (that’s easily the most ghoulish thing i’ve learned today, but i’ve only been up for 2h)
there’s probably a fair couple more. tracing anything de beers or a good couple of other industries will probably indicate a couple more
(my hypothesis is: the kinds of people that flourished under apartheid, the effect that had on locally developed industry, and then the “wider world” of ~~opportunities~~ prey they got to sink their teeth into after apartheid went away; doubly so because staying ZA-only is extremely limiting for ghouls of their sort - it’s a fixed-size pool, and the still-standing apartheid-vintage capital controls are Limiting for the kinds of bullshit they want to pull)

there are more, it seems: https://www.ft.com/content/cfbfa1e8-d8f8-42b9-b74c-dae6cc6185a0
that list undercounts far more than I expected it to
there’s gotta be way more, but frankly idk even where to begin to look
https://xcancel.com/kailentit/status/1881476039454699630
“We did not have superintelligent relations with that…”
Rationalist death count keeps climbing https://xcancel.com/jessi_cata/status/1882182975804363141#m
What the fuck?
The agents were conducting a routine roving patrol when they stopped Bauckholt and a female in the town close to the border. During a records check, the unidentified female occupant was removed from the vehicle for further questioning, broke free, and began shooting at the agents, the incident report shows.
After the female suspect was hit by return fire, Bauckholt emerged from the vehicle and also began firing on the agents. He sustained gunshot wounds and was pronounced dead.
… What the fuck?
The zizian angle makes this so weird. Like, on top of probably being stopped for driving while trans, they might have instigated the shootout to prove to the basilisk that their parallel universe selves/simulated iterations/eternal souls can’t be acausally blackmailed.
Ziz is a boogeyman figure to them at this point. I think it’s deliberate, to deflect from the sex abuse stuff (ziz was a part of that whole controversy).
Yeah there is so much untold in the reporting and I’m not going to trust either tpots or border cops. I have no idea whatsoever what to make of this.
Maybe someone will finally write that article…
Jesus wept, it’s so frustratingly obvious that anytime some flavor of cop kills someone, the news media reporting (if any) will be this weird Yoda grammar pidgin.
The femoidically gendered female shot with its gun by very personally pulling the trigger, with this viscerally physical action performed by the said femalian in most pointedly concrete terms amounting to it (the femaloidistical entity, a specimen of the species known as females) firing lethal gunshots at the border patrol with the female’s own two hands.
Subsequently return fire manifested itself from somewhere and came into contact with the female suspect female. The Justice Enforcement Officers involved in the situation were made a part of a bilateral exchange of gunfire between the shooting female and the officers situated in the scenario in which shooting was, to some extent, quite possibly performed from their side as well.
Does anyone know who or what Ziz is in this context? Google says jewish mythological beast.
edit: found this:
The Zizians were a cult that focused on relatively extreme animal welfare, even by EA standards, and used a Timeless/Updateless decision theory, where being aggressive and escalatory was helpful as long as it helped other world branches/acausally traded with other worlds to solve the animal welfare crisis.
They apparently made a new personality called Maia in Pasek, and this resulted in Pasek’s suicide.
They also used violence or the threat of violence a lot to achieve their goal.
This caused many problems for Ziz, and she now is in police custody.
It’s another one of those things that the further you read the worse it gets, isn’t it?
I was reading something David wrote about it at one point, but it seemed like lore too cursed even for the rationalist milieu
Rationalism is a cult, but it’s also a cult franchise that generates smaller cults. Also a lot of the people were not entirely balanced to start with and rationalism made them worse. Anyway, Ziz.
yep.
it’s like looking from outside at minor splinter groups within scientology, and the purported voice of reason says that the right way to deal with these transgressors is to return to scientologist orthodoxy. it even includes seasteading
that blog in question comes with its own private glossary and is just as dense and long as you’d expect. i spent half an hour trying to figure it out and noped out when i noticed the scroll bar position
tfw when you recognize it’s Quality Rationalist Content
it’s a workday, i’m too sober for this
It’s term time again and I’m back in college. One professor has laid out his AI policy: you should not use an AI (presumably ChatGPT) to write your assignment, but you can use an AI to proofread your assignment. This must be mentioned in the acknowledgements. He said in class that in his experience AI does not produce good results, and that when asked to write about his particular field it produces work with a lot of mistakes.
Me, I’m just wondering how you can tell the difference between material generated by AI then edited by a human, and material written by a human then edited by an AI.
Here is what I wrote in the instructions for the term-paper project that I will be assigning my quantum-physics students this coming semester:
I can’t very well stop you from using a text-barfing tool. I can, however, point out that the “AI” industry is a disaster for the environment, which is the place that we all have to live in; and that it depends upon datasets made by exploiting and indeed psychologically torturing workers. The point of this project is for you to learn a physics topic and how to write physics, not for you to abase yourself before a blurry average of all the things the Internet says about quantum physics — which, spoiler alert, includes a lot of wrong things. If you are going to spend your time at university not learning physics, there are better ways to do that than making yourself dependent upon a product that is a tech bubble waiting to pop.
I was talking to someone recently and he mentioned that he has used AI for programming. It worked out fine, but the one thing he mentioned that really stuck with me was that when it was all done, he still didn’t know how to do the task.
You can get things done, but you don’t learn how to do them.
This must be mentioned in the acknowledgements
wat
I know!
this post from the guy who writes explainers about consumer financial structures hits different if you know he met and is exchanging tweets with noted race scientist Jordan Lasker (cremieux)
https://www.bitsaboutmoney.com/archive/chicago-casino-investment-offering/
maybe it’s me but i’m hearing ultrasonic frequencies
probably nothing, right?
Oof yeah, that’s rough. The AI generated header image isn’t helping his credibility, either. Didn’t he happily trot along to one of the rat conventions in Berkeley, and everyone was wondering why?
The Bally’s story is its own source of hilarity - not only are they scrambling to fund this Chicago thing, they’re also making promises about a Las Vegas resort that will host the ex-Oakland A’s in what would be the smallest major league baseball stadium; with equally ??? funding gaps that their client press is all too happy to ignore.
Every pasty-white blogger is inching towards becoming a March violet.
so the new feature in the next macos release 15.3 is “fuck you, apple intelligence is on by default now”
For users new or upgrading to macOS 18.3, Apple Intelligence will be enabled automatically during Mac onboarding. Users will have access to Apple Intelligence features after setting up their devices. To disable Apple Intelligence, users will need to navigate to the Apple Intelligence & Siri Settings pane and turn off the Apple Intelligence toggle. This will disable Apple Intelligence features on their device.
oh boy: https://social.wake.st/@liaizon/113868769104056845 iOS devices send the contents of Signal chats to Apple Intelligence by default
looks like that post has been updated and it might not be quite as dire
it’ll still be better when apple finally documents shit and people test to see what really happens
IDK how helpful this is, but Apple intelligence appears to not get downloaded if you set your ipad language and your siri language to be different. I have it set to english (australia) and english (united states). Guess I’ll have to live without “gaol” support, but that just shows how much I’m willing to sacrifice.
also, my inbox earlier:
24661 N + Jan 21 Apple Developer ( 42K) Explore the possibilities of Apple Intelligence.
Days since last open source issue tracker pollution by annoying nerds: zero
My investigation tracked to you [Outlier.ai] as the source of problems - where your instructional videos are tricking people into creating those issues to - apparently train your AI.
I couldn’t locate these particular instructional videos, but from what I can gather outlier.ai farms out various “tasks” to internet gig workers as part of some sort of AI training scheme.
Bonus terribleness: one of the tasks a few months back was apparently to wear a head mounted camera “device” to record ones every waking moment.
Reposting this for the new week thread since it truly is a record of how untrustworthy sammy and co are. Remember how OAI claimed that o3 had displayed superhuman levels on the mega hard FrontierMath benchmark, the one developed with input from Fields Medalists? Funny/totally not fishy story haha. Turns out OAI had exclusive access to that test for months and funded its creation, and refused to let the creators of the test publicly acknowledge this until after OAI did their big stupid magic trick.
From Subbarao Kambhampati via LinkedIn:
"𝐎𝐧 𝐭𝐡𝐞 𝐬𝐞𝐞𝐝𝐲 𝐨𝐩𝐭𝐢𝐜𝐬 𝐨𝐟 “𝑩𝒖𝒊𝒍𝒅𝒊𝒏𝒈 𝒂𝒏 𝑨𝑮𝑰 𝑴𝒐𝒂𝒕 𝒃𝒚 𝑪𝒐𝒓𝒓𝒂𝒍𝒍𝒊𝒏𝒈 𝑩𝒆𝒏𝒄𝒉𝒎𝒂𝒓𝒌 𝑪𝒓𝒆𝒂𝒕𝒐𝒓𝒔” hashtag#SundayHarangue. One of the big reasons for the increased volume of “𝐀𝐆𝐈 𝐓𝐨𝐦𝐨𝐫𝐫𝐨𝐰” hype has been o3’s performance on the “frontier math” benchmark–something that other models basically had no handle on.
We are now being told (https://lnkd.in/gUaGKuAE) that this benchmark data may have been exclusively available (https://lnkd.in/g5E3tcse) to OpenAI since before o1–and that the benchmark creators were not allowed to disclose this until after o3.
That o3 does well on frontier math held-out set is impressive, no doubt, but the mental picture of “𝒐1/𝒐3 𝒘𝒆𝒓𝒆 𝒋𝒖𝒔𝒕 𝒃𝒆𝒊𝒏𝒈 𝒕𝒓𝒂𝒊𝒏𝒆𝒅 𝒐𝒏 𝒔𝒊𝒎𝒑𝒍𝒆 𝒎𝒂𝒕𝒉, 𝒂𝒏𝒅 𝒕𝒉𝒆𝒚 𝒃𝒐𝒐𝒕𝒔𝒕𝒓𝒂𝒑𝒑𝒆𝒅 𝒕𝒉𝒆𝒎𝒔𝒆𝒍𝒗𝒆𝒔 𝒕𝒐 𝒇𝒓𝒐𝒏𝒕𝒊𝒆𝒓 𝒎𝒂𝒕𝒉”–that the AGI tomorrow crowd seem to have–that 𝘖𝘱𝘦𝘯𝘈𝘐 𝘸𝘩𝘪𝘭𝘦 𝘯𝘰𝘵 𝘦𝘹𝘱𝘭𝘪𝘤𝘪𝘵𝘭𝘺 𝘤𝘭𝘢𝘪𝘮𝘪𝘯𝘨, 𝘤𝘦𝘳𝘵𝘢𝘪𝘯𝘭𝘺 𝘥𝘪𝘥𝘯’𝘵 𝘥𝘪𝘳𝘦𝘤𝘵𝘭𝘺 𝘤𝘰𝘯𝘵𝘳𝘢𝘥𝘪𝘤𝘵–is shattered by this. (I have, in fact, been grumbling to my students since o3 announcement that I don’t completely believe that OpenAI didn’t have access to the Olympiad/Frontier Math data before hand… )
I do think o1/o3 are impressive technical achievements (see https://lnkd.in/gvVqmTG9 )
𝑫𝒐𝒊𝒏𝒈 𝒘𝒆𝒍𝒍 𝒐𝒏 𝒉𝒂𝒓𝒅 𝒃𝒆𝒏𝒄𝒉𝒎𝒂𝒓𝒌𝒔 𝒕𝒉𝒂𝒕 𝒚𝒐𝒖 𝒉𝒂𝒅 𝒑𝒓𝒊𝒐𝒓 𝒂𝒄𝒄𝒆𝒔𝒔 𝒕𝒐 𝒊𝒔 𝒔𝒕𝒊𝒍𝒍 𝒊𝒎𝒑𝒓𝒆𝒔𝒔𝒊𝒗𝒆–𝒃𝒖𝒕 𝒅𝒐𝒆𝒔𝒏’𝒕 𝒒𝒖𝒊𝒕𝒆 𝒔𝒄𝒓𝒆𝒂𝒎 “𝑨𝑮𝑰 𝑻𝒐𝒎𝒐𝒓𝒓𝒐𝒘.”
We all know that data contamination is an issue with LLMs and LRMs. We also know that reasoning claims need more careful vetting than “𝘸𝘦 𝘥𝘪𝘥𝘯’𝘵 𝘴𝘦𝘦 𝘵𝘩𝘢𝘵 𝘴𝘱𝘦𝘤𝘪𝘧𝘪𝘤 𝘱𝘳𝘰𝘣𝘭𝘦𝘮 𝘪𝘯𝘴𝘵𝘢𝘯𝘤𝘦 𝘥𝘶𝘳𝘪𝘯𝘨 𝘵𝘳𝘢𝘪𝘯𝘪𝘯𝘨” (see “In vs. Out of Distribution analyses are not that useful for understanding LLM reasoning capabilities” https://lnkd.in/gZ2wBM_F ).
At the very least, this episode further argues for increased vigilance/skepticism on the part of the AI research community in how they parse the benchmark claims put out by commercial entities."
Big stupid snake oil strikes again.
Every time they go ‘this wasn’t in the data’ it turns out it was. A while back they did the same with translating rareish languages. Turns out it was trained on it. Fucked up. But also, wtf how are they expecting this to stay secret and there being no backlash? This world needs a better class of criminals.
But also, wtf how are they expecting this to stay secret and there being no backlash?
No, they bet on it not mattering and they’ve been completely right thus far.
it’s enough if it ends up not mattering long enough for them to cash out, then they don’t care
Ah right yes.
The conspiracy theorist who lives in my brain wants to say its intentional to make us more open to blatant cheating as something that’s just a “cost of doing business.” (I swear I saw this phrase a half dozen times in the orange site thread about this)
The earnest part of me tells me no, these guys are just clowns, but I dunno, they can’t all be this dumb right?
holy shit, that’s the excuse they’re going for? they cheated on a benchmark so hard the results are totally meaningless, sold their most expensive new models yet on the back of that cheated benchmark, further eroded the scientific process both with their cheating and by selling those models as better for scientific research… and these weird fucks want that to be fine and normal? fuck them
They understand that all of the major model providers are doing it, but since the major model providers are richer than they are, they can’t possibly ask OpenAI and friends to stop, so in their heads, it is what it is and therefore must be allowed to continue.
Or at least, that’s my face value read of it, I certainly hope I’m simplifying things too much.
also they are rationalists and hence the most gullible mfs on any of this stuff
they can’t even sell o3 really - in o3 high mode, which is what’s needed for this level of query, it’s about $1000 per query lol
do you figure it’s $1000/query because the algorithms they wrote with their insider knowledge to cheat the benchmark are very expensive to run, or is it $1000/query because they’re grifters and all high mode does is use the model trained on frontiermath and allocate more resources to the query? and like any good grifter, they’re targeting whales and institutional marks who are so invested that throwing away $1000 on horseshit feels like a bargain
so, for an extremely unscientific demonstration, here (warning: AWS may try hard to get you to engage with Explainer[0]) is an instance of an aws pricing estimate for big handwave “some gpu compute”
and when I say “extremely unscientific”, I mean “I largely pulled the numbers out of my ass”. even so, they’re not entirely baseless, nor just picking absolute maxvals and laughing
~~parameters~~ assumptions made:
- “somewhat beefy” gpu instances (g4dn.4xlarge, selected through the tried and tested “squint until it looks right” method)
- 6-day traffic pattern, excluding sunday[1]
- daily “4h peak” total peak load profile[2]
- 50 instances minimum, 150 maximum (let’s pretend we’re not openai but are instead some random fuckwit flybynight modelfuckery startup)
- us west coast
- spot instances, convertible spot reserves, 3y full prepay commit (yeah I know full vs partial is a big diff; once again, snore)
(and before we get any fucking ruleslawyering dumb motherfuckers rolling in here about accuracy or whatever: get fucked kthx. this is just a very loosely demonstrative example)
so you’d have a variable buffer of 50…150 instances, featuring 3.2…9.6TiB of RAM for working set size, 800…2400 vCPU, 50…150 nvidia t4 cores, and 800…2400GiB gpu vram
let’s presume a perfectly spherical ops team of uniform capability[3] and imagine that we have some lovely and capable active instance prewarming and correct host caching and whatnot. y’know, things to reduce user latency. let’s pretend we’re fully dynamic[4]
so, by the numbers, then
1y times 4h daily gives us 1460h (in seconds, that’s 5256000). this extremely inaccurate full-of-presumptions number gives us “service-capable life time”. the times your concierge is at the desk, the times you can get pizza delivered.
x3 to get to lifetime matching our spot commit, x50…x150 to get to “total possible instance hours”. which is the top end of our sunshine and rainbows pretend compute budget. which, of course, we still have exactly no idea how to spend. because we don’t know the real cost of servicing a query!
but let’s work backwards from some made-up shit, using numbers The Poor Public gets (vs numbers Free Microsoft Credits will imbue unto you), and see where we end up!
so that means our baseline:
- upfront cost: $4,527,400.00
- monthly: $1460.00 (x3 x12 = $52560)
- whatever the hell else is incurred (s3, bandwidth, …)
=200k/mo per ops/whatever person we have
3y of 4h-daily at 50 instances = 788400000 seconds. at 150 instances, 2365200000 seconds.
so we can say that, for our deeply Whiffs Ever So Slightly values, a second’s compute on the low instance-count end is $0.01722755 and $0.00574252 at the higher instance-count end! which gives us a bit of a handle!
this, of course, entirely ignores parallelism, n-instance job/load/whatever distribution, database lookups, network traffic, allllllll kinds of shit. which we can’t really have good information on without some insider infrastructure leaks anyway. if we pretend to look at the compute alone.
so what does $1000/query mean, in the sense of our very ridiculous and fantastical numbers? since the units are now The Same, we can simply divide things!
at the 50 instance mark, we’d need to hypothetically spend 174139.68 instance-seconds. that’s 2.0154 days of linear compute!
at the 150 instance mark, 522419.05 instance-seconds! 6.070 days of linear compute!
so! what have we learned? well, we’ve learned that we couldn’t deliver responses to prompts in Reasonable Time at these hardware presumptions! which, again, are linear presumptions. and there’s gonna be a fair chunk of parallelism and other parts involved here. but even so, turns out it’d be a bit of a sizable chunk of compute allocated. to even a single prompt response.
[0] - a product/service whose very existence I find hilarious; the entire suite of aws products is designed to extract as much money from every possible function whatsoever, leading to complexity, which they then respond to by… producing a chatbot to “guide users”
[1] - yes yes I know, the world is not uniform and the fucking promptfans come from everywhere. I’m presuming amerocentric design thinking (which imo is probably not wrong)
[2] - let’s pretend that the calculators’ presumption of 4h persistent peak load and our presumption of short-duration load approaching 4h cumulative are the same
[3] - oh, who am I kidding, you know it’s gonna be some dumb motherfuckers with ansible and k8s and terraform and chucklefuckery
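if anyone wants to poke at the same napkin math themselves, here’s a tiny python sketch of the arithmetic above. same pulled-out-of-my-ass inputs (the 3y full-prepay figure, the 4h-daily window over 3 years, 50 vs 150 instances), and like the per-second figure above it ignores the monthly and misc line items, so treat the output as illustrative only

```python
# napkin math only: reproduces the back-of-envelope above, not real AWS billing
UPFRONT_USD = 4_527_400.00                 # 3y full-prepay estimate from the calculator
PEAK_SECONDS_PER_YEAR = 4 * 3600 * 365     # 4h daily "service-capable" window
YEARS = 3
QUERY_PRICE_USD = 1_000.0                  # the reported o3-high cost per query

for instances in (50, 150):
    total_instance_seconds = PEAK_SECONDS_PER_YEAR * YEARS * instances
    usd_per_instance_second = UPFRONT_USD / total_instance_seconds
    instance_seconds_per_query = QUERY_PRICE_USD / usd_per_instance_second
    days_of_linear_compute = instance_seconds_per_query / 86_400
    print(f"{instances} instances: ${usd_per_instance_second:.8f}/instance-second, "
          f"{instance_seconds_per_query:,.0f} instance-seconds per query "
          f"(~{days_of_linear_compute:.1f} days of linear compute)")
```

it lands on roughly the same ~2 day / ~6 day figures as above, which is the whole point: even with hand-wavy inputs, $1000/query buys a frankly silly amount of linear compute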
when digging around I happened to find this thread which has some benchmarks for a diff model
it’s apples to square fenceposts, of course, since one llm is not another. but it gives something to presume from. if g4dn.2xl gave them 214 tok/s, and if we make the extremely generous presumption that tok==word (which, well, no; cf. strawberry), then any Use Deserving Of o3 (let’s say 5~15k words) would mean you need a tok-rate of 1000~3000 tok/s for a “reasonable” response latency (“5-ish seconds”)

so you’d need something like 5x g4dn.2xl just to shit out 5000 words with dolphin-llama3 in “quick” time. which, again, isn’t even whatever the fuck people are doing with openai’s garbage.
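(same kind of napkin math for the throughput side, if anyone wants to fiddle with the assumptions; the 214 tok/s figure is from the linked thread, and the tok==word hand-wave is the generous presumption called out above)

```python
import math

# throughput napkin math: how many g4dn.2xl-equivalents you'd need for a "quick" response,
# under the very generous assumption that 1 token == 1 word
TOK_PER_S_PER_INSTANCE = 214   # dolphin-llama3 on g4dn.2xl, per the linked benchmark thread
TARGET_LATENCY_S = 5           # "5-ish seconds"

for words in (5_000, 15_000):
    required_tok_per_s = words / TARGET_LATENCY_S
    instances_needed = math.ceil(required_tok_per_s / TOK_PER_S_PER_INSTANCE)
    print(f"{words} words in ~{TARGET_LATENCY_S}s -> {required_tok_per_s:.0f} tok/s "
          f"-> ~{instances_needed} instances")
```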
utter, complete, comprehensive clownery. era-redefining clownery.
but some dumb motherfucker in a bar will keep telling me it’s the future. and I get to not boop 'em on the nose. le sigh.
Yeah we would like to stop lying and cheating, but the number, you see.
following on from this comment, it is possible to get it turned off for a Workspace Suite Account
- contact support (“?” button from admin view)
- ask the first person to connect you to Workspace Support (otherwise you’ll get some made-up bullshit from a person trying to buy time or Case Success or whatever, simply because they don’t have the privileges to do what you’re asking)
- tell the referred-to person that you want to enable controls for “Gemini for Google Workspace” (optionally adding that you have already disabled “Gemini App”)
hopefully you spend less time on this than the 40-something minutes I had to (a lot of which was spent watching some poor support bastard start-stop typing for minutes at a time because they didn’t know how to respond to my request)
Thanks. I simply switched to Fastmail over this bullshit. (“Simply” mileage may vary)
I hope everyone is ready for the constant overlap between politics and AI / Silicon Valley; because I’m not.
Trump Admin Accused of Using AI to Draft Executive Orders (Source Bluesky Thread).
I’m not 100% sure I buy that the EOs were written by AI rather than people who simply don’t care about or don’t know the details; but it certainly looks possible. Especially that example about the Gulf of Mexico. Either way I am heartened that this is the conclusion people jump to.
Aside: I also like how much media is starting to cite bluesky (and activitypub to a lesser extent). I assume a bunch of journalists moved off of twitter or went multi-platform.
yeah bsky is where journalists post now so it’s where they scrape stories from
Thanks, I hate it.
Especially because Trump’s legal teams have historically been more than incompetent enough to produce this kind of work on their own.
In a way, the fact that they have historically been awful at this and got thwarted by the courts is the thing that worries me. I’d expect that somebody over the past 8 years went ‘this time we will not be bogged down in that’. But considering they went 100% in on repression from day 1, I’m slightly less worried about that.
For context, going all in on day 1 is actually bad for them: when the nazis took over The Netherlands and Belgium, their methods differed. In .nl they worked slowly and through the government already there; in .be they went full pogroms a lot faster. This meant that in .be a lot of people saw the threat sooner (WW1 and Belgium prob also didn’t help) and acted, and took better care of the vulnerable. The number of Dutch Jewish people who survived ww2 vs Belgian Jewish people is very tragic (and a very dark part of our history which we don’t really talk about like this, as mentioning that parts of your own country are also to blame for the holocaust is not a thing a lot of people want to talk about). At least I hope that stuff like going all crazy on the bishop will turn out to be a big wakeup call for random Americans and a strategic mistake on their part; they certainly don’t seem to have learned from the nazis (at least not this lesson, which fits with how fascism is blind to its own mistakes).
I don’t know if this is good news for the underlying risk of how willing the nuts and bolts of society are to resist unlawful or monstrous policies. IDK, on the subject of complicity I think the fact that we eventually joined the war has caused a deep cultural amnesia about how much influence the Reich had on the states and vice versa. Charles Lindbergh, Madison Square Garden, etc. We didn’t really acknowledge how much our cultural and political structures are open to authoritarianism, much less addressing those issues.
Isn’t this one of the things that LW was spooked by? Giving the reins to an AI? Won’t someone think of the wrongers???