Need to let loose a primal scream without collecting footnotes first? Have a sneer percolating in your system but not enough time/energy to make a whole post about it? Go forth and be mid: Welcome to the Stubsack, your first port of call for learning fresh Awful you’ll near-instantly regret.
Any awful.systems sub may be subsneered in this subthread, techtakes or no.
If your sneer seems higher quality than you thought, feel free to cut’n’paste it into its own post — there’s no quota for posting and the bar really isn’t that high.
The post Xitter web has spawned soo many “esoteric” right wing freaks, but there’s no appropriate sneer-space for them. I’m talking redscare-ish, reality challenged “culture critics” who write about everything but understand nothing. I’m talking about reply-guys who make the same 6 tweets about the same 3 subjects. They’re inescapable at this point, yet I don’t see them mocked (as much as they should be)
Like, there was one dude a while back who insisted that women couldn’t be surgeons because they didn’t believe in the moon or in stars? I think each and every one of these guys is uniquely fucked up and if I can’t escape them, I would love to sneer at them.
(Credit and/or blame to David Gerard for starting this.)
So we sneerclubbers correctly dismissed AI 2027 as bad sci-fi with a forecasting model basically amounting to “line goes up”, but if you end up in any discussions with people who want more detail, titotal did a really detailed breakdown of why their model is bad, even granting their assumptions and the attempt to model “line goes up”: https://www.lesswrong.com/posts/PAYfmG2aRbdb74mEp/a-deep-critique-of-ai-2027-s-bad-timeline-models
tl;dr: the AI 2027 model, regardless of its inputs and the current state of things, has task time horizons going to infinity at some near-future date because of how they set it up. The authors also make a lot of other questionable choices and have a lot of other red flags in their modeling. And the fitted task-time-horizon curve pictured on their fancy graphical interactive webpage is unrelated to the model they actually used, and is missing some earlier data points that make the fit look worse.
Good for him to try and convince the LW people that the math is wrong. I do think there is a bigger problem with all of this, though: technological advancement doesn’t follow exponential curves, it follows S-curves. (And the whole ‘the singularity is near’ / ‘achtually that is true, but the rate at which those S-curves arrive is itself exponential’ response is just untestable, unscientific hopium; it is also odd that the singularity people are now back to plain exponential curves for one specific tech.)
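(To illustrate the S-curve point with a toy sketch, nothing to do with anyone’s actual model: an exponential and a logistic S-curve with the same early growth rate look identical until the S-curve starts to saturate, which is exactly why early “line goes up” data can’t tell you which curve you’re on. The growth rate and ceiling below are made up.)

```python
import math

def exponential(t, r=1.0):
    # naive "line goes up forever" model
    return math.exp(r * t)

def s_curve(t, r=1.0, ceiling=1000.0):
    # logistic curve: same early growth rate r, but it saturates at `ceiling`
    return ceiling / (1.0 + (ceiling - 1.0) * math.exp(-r * t))

# Early on the two are nearly indistinguishable; later they diverge wildly.
for t in range(0, 13, 2):
    print(f"t={t:2d}  exponential={exponential(t):10.1f}  s_curve={s_curve(t):7.1f}")
```

Point being: any finite window of “so far it’s been exponential” data is equally consistent with a curve that flattens out later.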
Also lol at the 2027 guys believing anything about how Grok was created. Nice epistemology y’all got there, how’s the Mars base?
Judging by various comments the AI 2027 authors have made, sucking up to the techbro side of the alt-right was in fact a major goal of AI 2027, and, worryingly, they seem to have succeeded somewhat (allegedly JD Vance has read AI 2027). But lol at the notion they could ever talk any of the techbro billionaires into accepting any meaningful regulation. They still don’t understand that their doomerism is free marketing hype for the techbros, not anything any of them are actually treating as meaningfully real.
Yeah, I think that is prob also why Thiel supports Moldbug: not because he believes what Moldbug says, but because Moldbug says things that are convenient for Thiel if other people believe them. (Even if Thiel prob believes a lot of the same things himself, looking at his anti-democracy stuff and the ‘rape crisis is anti-men’ stuff, for which he apologized; wonder if he has apologized for the apology now that the winds have seemingly changed.)
(From AI 2027, as quoted by titotal.)
This is an incredibly silly sentence and is certainly enough to determine the output of the entire model on its own. It necessarily implies that the predicted value becomes infinite in a finite amount of time, disregarding almost all other features of how it is calculated.
To elaborate, suppose we take as our “base model” any function f with the property that lim_{t → ∞} f(t) = ∞. Now define a “super-f” function by saying that each subsequent block of “virtual time”, as seen by f, takes 10% less “real time” than the last. This gives a function like g(t) = f(-log(1 - t)), obtained by inverting the exponential rate of convergence of a geometric series. Then g has a vertical asymptote to infinity regardless of what the function f is, simply because we have compressed an infinite amount of “virtual time” into a finite amount of “real time”.
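(A quick numerical sketch of that construction, just to make the asymptote concrete: the choices of f below are arbitrary picks of mine, not anything from AI 2027 or titotal’s post, and the point is that g blows up as t → 1 no matter which one you feed in.)

```python
import math

# Any "base model" f that grows without bound will do, fast or slow.
base_models = {
    "f(x) = 2^x":     lambda x: 2.0 ** x,
    "f(x) = x":       lambda x: x,
    "f(x) = sqrt(x)": lambda x: math.sqrt(x),
}

def g(f, t):
    # g(t) = f(-log(1 - t)): an infinite amount of "virtual time"
    # squeezed into real time t < 1
    return f(-math.log(1.0 - t))

# As real time t approaches 1, g grows without bound for every choice of f
# (vertical asymptote at t = 1, however slowly f itself grows).
for name, f in base_models.items():
    print(name, [round(g(f, t), 2) for t in (0.9, 0.99, 0.999, 0.999999)])
```

Which is the whole problem: the blow-up date is baked into the time-compression trick itself, not into anything the rest of the model measures.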
Yeah, AI 2027’s model fails back-of-the-envelope sketches as soon as you try working out any of its features, which really calls into question the competence of its authors and everyone who has signal-boosted it. Like, they could have easily generated the same crit-hype bullshit with “just” an exponential model, but for whatever reason they went with this one. (They had a target date they wanted to hit? They correctly realized adding extraneous details would wow more of their audience? They are incapable of translating their intuitions into math? All three?)
titotal??? I heard they were dead! (jk. why did they stop hanging here, I forget…)
We did make fun of titotal for the effort they put into meeting rationalists on their own terms and charitably addressing their arguments, and, you know, being an EA themselves (albeit one of the saner ones)…