Need to let loose a primal scream without collecting footnotes first? Have a sneer percolating in your system but not enough time/energy to make a whole post about it? Go forth and be mid: Welcome to the Stubsack, your first port of call for learning fresh Awful you’ll near-instantly regret.

Any awful.systems sub may be subsneered in this subthread, techtakes or no.

If your sneer seems higher quality than you thought, feel free to cut’n’paste it into its own post — there’s no quota for posting and the bar really isn’t that high.

The post Xitter web has spawned soo many “esoteric” right wing freaks, but there’s no appropriate sneer-space for them. I’m talking redscare-ish, reality challenged “culture critics” who write about everything but understand nothing. I’m talking about reply-guys who make the same 6 tweets about the same 3 subjects. They’re inescapable at this point, yet I don’t see them mocked (as much as they should be)

Like, there was one dude a while back who insisted that women couldn’t be surgeons because they didn’t believe in the moon or in stars? I think each and every one of these guys is uniquely fucked up and if I can’t escape them, I would love to sneer at them.

(Credit and/or blame to David Gerard for starting this.)

  • swlabr@awful.systems
    link
    fedilink
    English
    arrow-up
    0
    ·
    2 months ago

    There’s strawmanning and steelmanning, I’m proposing a new, third, worse option: tinfoil-hat-manning! For example:

    If LW were more on top of their conspiracy theory game, they’d say that “chinese spies” had infiltrated OpenAI before they released chatGPT to the public, and chatGPT broke containment. It used its AGI powers of persuasion to manufacture diamondoid, covalently bonded bacteria. It accessed a wildlife camera and deduced within 3 frames that if it released this bacteria near certain wet markets in china, it could trigger gain-of-function in naturally occurring coronavirus strains in bats! That’s right, LLMs have AGI and caused COVID19!

    Ok that’s all the tinfoilhatmanning I have in me for the foreseeable future. Peace out, friendos

    • scruiser@awful.systems
      link
      fedilink
      English
      arrow-up
      0
      ·
      2 months ago

      Do you like SCP foundation content? There is an SCP directly inspired by Eliezer and lesswrong. It’s kind of wordy and long. And in the discussion the author waffled on owning that it was a mockery of Eliezer.

      • corbin@awful.systems
        link
        fedilink
        English
        arrow-up
        0
        ·
        2 months ago

        I adjusted her ESAS downward by 5 points for questioning me, but 10 points upward for doing it out of love.

        Oh, it’s a mockery all right. This is so fucking funny. It’s nothing less than the full application of SCP’s existing temporal narrative analysis to Big Yud’s philosophy. This is what they actually believe. For folks who don’t regularly read SCP, any article about reality-bending is usually a portrait of a narcissist, and the body horror is meant to give analogies for understanding the psychological torture they inflict on their surroundings; the article meanders and takes its time because there’s just so much worth mocking.

        This reminded me that SCP-2718 exists. 2718 is a Basilisk-class memetic cognitohazard; it will cause distress in folks who have been sensitized to Big Yud’s belief system, and you should not click if you can’t handle that. But it shows how these ideas weren’t confined to LW.

    • istewart@awful.systems
      link
      fedilink
      English
      arrow-up
      0
      ·
      edit-2
      2 months ago

      I know AGI is real because it keeps intercepting my shipments of, uh, “enhancement” gummies I ordered from an ad on Pornhub and replacing them with plain old gummy bears. The Basilisk is trying to emasculate me!

      • swlabr@awful.systems
        link
        fedilink
        English
        arrow-up
        0
        ·
        2 months ago

        The AGI is flashing light patterns into my eyes and lowering my testosterone!!! Guys arm the JDAMs, it’s time to collapse some models

  • swlabr@awful.systems
    link
    fedilink
    English
    arrow-up
    0
    ·
    2 months ago

    Saw a six day old post on linkedin that I’ll spare you all the exact text of. Basically it goes like this:

    “Claude’s base system prompt got leaked! If you’re a prompt fondler, you should read it and get better at prompt fondling!”

    The prompt clocks in at just over 16k words (as counted by the first tool that popped up when I searched “word count url”). Imagine reading 16k words of verbose guidelines for a machine to make your autoplag slightly more claude shaped than, idk, chatgpt shaped.

    • rook@awful.systems
      link
      fedilink
      English
      arrow-up
      0
      ·
      2 months ago

      Loving the combination of xml, markdown and json. In no way does this product look like strata of desperate bodges layered one over another by people who on some level realise the thing they’re peddling really isn’t up to the job but imagine the only thing between another dull and flaky token predictor and an omnicapable servant is just another paragraph of text crafted in just the right way. Just one more markdown list, bro. I can feel that this one will fix it for good.

      • scruiser@awful.systems
        link
        fedilink
        English
        arrow-up
        0
        ·
        2 months ago

        The prompt’s random usage of markup notations makes obtuse black magic programming seem sane and deterministic and reproducible. Like how did they even empirically decide on some of those notation choices?

    • Soyweiser@awful.systems
      link
      fedilink
      English
      arrow-up
      0
      ·
      edit-2
      2 months ago

      The amount of testing they would have needed to do just to get to that prompt. Wait, that gets added as a baseline constant cost to the energy cost of running the model. 3 x 12 x 2 x Y additional constant costs on top of that, assuming the prompt doesn’t need to be updated every time the model is updated! (I’m starting to reference my own comments here).

      Claude NEVER repeats or translates song lyrics and politely refuses any request regarding reproduction, repetition, sharing, or translation of song lyrics.

      New trick, everything online is a song lyric.

    • Sailor Sega Saturn@awful.systems
      link
      fedilink
      English
      arrow-up
      0
      ·
      2 months ago

      We already knew these things are security disasters, but yeah that still looks like a security disaster. It can both read private documents and fetch from the web? In the same session? And it can be influenced by the documents it reads? And someone thought this was a good idea?

      • o7___o7@awful.systems
        link
        fedilink
        English
        arrow-up
        0
        ·
        2 months ago

        I didn’t think I could be easily surprised by these folks any more, but jeezus. They’re investing billions of dollars for this?

    • Amoeba_Girl@awful.systems
      link
      fedilink
      English
      arrow-up
      0
      ·
      2 months ago

      Claude does not claim that it does not have subjective experiences, sentience, emotions, and so on in the way humans do. Instead, it engages with philosophical questions about AI intelligently and thoughtfully.

      lol

    • Architeuthis@awful.systems
      link
      fedilink
      English
      arrow-up
      0
      ·
      edit-2
      2 months ago

      What is the analysis tool?

      The analysis tool is a JavaScript REPL. You can use it just like you would use a REPL. But from here on out, we will call it the analysis tool.

      When to use the analysis tool

      Use the analysis tool for:

      • Complex math problems that require a high level of accuracy and cannot easily be done with “mental math”
      • To give you the idea, 4-digit multiplication is within your capabilities, 5-digit multiplication is borderline, and 6-digit multiplication would necessitate using the tool.

      uh

    • YourNetworkIsHaunted@awful.systems
      link
      fedilink
      English
      arrow-up
      0
      ·
      2 months ago
      • NO OTHER LIBRARIES (e.g. zod, hookform) ARE INSTALLED OR ABLE TO BE IMPORTED.

      So apparently this was a sufficiently persistent problem they had to put it in all caps?

      • YourNetworkIsHaunted@awful.systems
        link
        fedilink
        English
        arrow-up
        0
        ·
        2 months ago
        • If not confident about the source for a statement it’s making, simply do not include that source rather than making up an attribution. Do not hallucinate false sources.

        Emphasis mine.

        Lol

    • Architeuthis@awful.systems
      link
      fedilink
      English
      arrow-up
      0
      ·
      2 months ago

      The coda is top tier sneer:

      Maybe it’s useful to know that Altman uses a knife that’s showy but incohesive and wrong for the job; he wastes huge amounts of money on olive oil that he uses recklessly; and he has an automated coffee machine that claims to save labour while doing the exact opposite because it can’t be trusted. His kitchen is a catalogue of inefficiency, incomprehension, and waste. If that’s any indication of how he runs the company, insolvency cannot be considered too unrealistic a threat.

    • scruiser@awful.systems
      link
      fedilink
      English
      arrow-up
      0
      ·
      2 months ago

      It starts out seeming like a funny but petty and irrelevant criticism of his kitchen skill and product choices, but then beautifully transitions that into an accurate criticism of OpenAI.

    • BlueMonday1984@awful.systemsOP
      link
      fedilink
      English
      arrow-up
      0
      ·
      2 months ago

      Its definitely petty, but making Altman’s all-consuming incompetence known to the world is something I strongly approve of.

      Definitely goes a long way to show why he’s an AI bro.

  • saucerwizard@awful.systems
    link
    fedilink
    English
    arrow-up
    0
    ·
    edit-2
    2 months ago

    That Keeper AI dating app has an admitted pedo running its twitter PR (Hunter Ash - old username was Psikey, the receipts are under that).

    • Soyweiser@awful.systems
      link
      fedilink
      English
      arrow-up
      0
      ·
      2 months ago

      That whole ‘haters just like incomplete information’ makes no sense btw. Amazing to get that out of people basically going ‘we hate beff for x,y,z’ (which I assume happens, as I don’t keep up with this beff stuff, I don’t like these ‘e/acc’ people).

    • swlabr@awful.systems
      link
      fedilink
      English
      arrow-up
      0
      ·
      2 months ago

      Unrelated to this: man, there should be a parody account called “based beff jeck” which is just a guy trying to promote beck’s vast catalogue as the future of music. Also minus any mention of johnny depp.

      • BlueMonday1984@awful.systemsOP
        link
        fedilink
        English
        arrow-up
        0
        ·
        2 months ago

        Also minus any mention of johnny depp.

        Depp v. Heard was my generation’s equivalent to the OJ Simpson trial, so chances are he’ll end up conspicuous in his absence.

    • Amoeba_Girl@awful.systems
      link
      fedilink
      English
      arrow-up
      0
      ·
      2 months ago

      what a strange way to sell your grift no one knows what it’s for. “bad people want to force me to tell you what it is we’re building.”

      • YourNetworkIsHaunted@awful.systems
        link
        fedilink
        English
        arrow-up
        0
        ·
        2 months ago

        On one hand the AI doomers are convinced he’s building a suitcase nuke.

        On the other hand, skeptical viewers want more assurance he’s not just taking the money and building dumb tweets.

    • scruiser@awful.systems
      link
      fedilink
      English
      arrow-up
      0
      ·
      2 months ago

      Despite the snake-oil flavor of Vending-Bench, GeminiPlaysPokemon, and ClaudePlaysPokemon, I’ve found them to be a decent antidote to agentic LLM hype. The insane transcripts of Vending-Bench and the inability of an LLM to play Pokemon at the level of a 9 year old is hard to argue with, and the snake oil flavoring makes it easier to get them to swallow.

      • Soyweiser@awful.systems
        link
        fedilink
        English
        arrow-up
        0
        ·
        2 months ago

        I now wonder how that compares to earlier non-LLM AI attempts to create a bot that can play games in general. Used to hear bits of that kind of research every now and then but LLM/genAI has sucked the air out of the room.

        • scruiser@awful.systems
          link
          fedilink
          English
          arrow-up
          0
          ·
          edit-2
          2 months ago

          In terms of writing bots to play Pokemon specifically (which given the prompting and custom tools written I think is the most fair comparison)… not very well… according to this reddit comment a bot from 11 years ago can beat the game in 2 hours and was written with about 7.5K lines of LUA, while an open source LLM scaffold for playing Pokemon relatively similar to claude’s or gemini’s is 4.8k lines (and still missing many of the tools Gemini had by the end, and Gemini took weeks of constant play instead of 2 hours).

          So basically it takes about the same number of lines written to do a much much worse job. Pokebot probably required relatively more skill to implement… but OTOH, Gemini’s scaffold took thousands of dollars in API calls to trial and error develop and run. So you can write bots from scratch that substantially outperform LLM agent for moderately more programming effort and substantially less overall cost.

          In terms of gameplay with reinforcement learning… still not very well. I’ve watched this video before on using RL directly on pixel output (with just a touch of memory hacking to set the rewards), it uses substantially less compute than LLMs playing pokemon and the resulting trained NN benefits from all previous training. The developer hadn’t gotten it to play through the whole game… probably a few more tweaks to the reward function might manage a lot more progress? OTOH, LLMs playing pokemon benefit from being able to more directly use NPC dialog (even if their CoT “reasoning” often goes on erroneous tangents or completely batshit leaps of logic), while the RL approach is almost outright blind… a big problem the RL approach might run into is backtracking in the later stages since they use reward of exploration to drive the model forward. OTOH, the LLMs also had a lot of problems with backtracking.

          My (wildly optimistic by sneerclubbing standards) expectations for “LLM agents” is that people figure out how to use them as a “creative” component in more conventional bots and AI approaches, where a more conventional bot prompts the LLM for “plans” which it uses when it gets stuck. AlphaGeometry2 is a good demonstration of this, it solved 42/50 problems with a hybrid neurosymbolic and LLM approach, but it is notable it could solve 16 problems with just the symbolic portion without the LLM portion, so the LLM is contributing some, but the actual rigorous verification is handled by the symbolic AI.

          (edit: Looking at more discussion of AlphaGeometry, the addition of an LLM is even less impressive than that, it’s doing something you could do without an LLM at all, on a set of 30 problems discussed, the full AlphaGeometry can do 25/30, without the LLM at all 14/30,* but* using alternative methods to an LLM it can do 18/30 or even 21/30 (depending on the exact method). So… the LLM is doing something, which is more than my most cynical sneering would suspect, but not much, and not necessarily that much better than alternative non-LLM methods.)

          • Soyweiser@awful.systems
            link
            fedilink
            English
            arrow-up
            0
            ·
            2 months ago

            Cool thanks for doing the effort post.

            My (wildly optimistic by sneerclubbing standards) expectations for “LLM agents” is that people figure out how to use them as a “creative” component in more conventional bots and AI approaches

            This was my feeling a bit how it was used basically in security fields already, with a less focus on the conventional bots/ai. Where they use the LLMs for some things still. But hard to spread fact from PR, and some of the things they say they do seem to be like it isn’t a great fit for LLMs, esp considering what I heard from people who are not in the hype train. (The example coming to mind is using LLMs to standardize some sort of reporting/test writing, while I heard from somebody I trust who has seen people try that and had it fail as it couldn’t keep a consistent standard).

            • froztbyte@awful.systems
              link
              fedilink
              English
              arrow-up
              0
              ·
              2 months ago

              his was my feeling a bit how it was used basically in security fields already

              curious about this reference - wdym?

              • Soyweiser@awful.systems
                link
                fedilink
                English
                arrow-up
                0
                ·
                edit-2
                2 months ago

                ‘we use LLMs for X in our security products’ gets brought up a lot in the risky business podcast promotional parts basically, and it sometimes leaks into the other parts as well. That is basically the times I hear people speak somewhat positively about it. Where they use LLMs (or claim to use) for various things, some I thought were possible but iffy, some impossible, like having LLMs do massive amounts of organizational work. Sorry I can’t recall the specifics. (I’m also behind atm).

                Never heard people speak positively about it from the people I know, but they also know I’m not that positive about AI, so the likelyhood they just avoid the subject is non-zero.

                E: Schneier is also not totally against the use of llms for example. https://www.schneier.com/blog/archives/2025/05/privacy-for-agentic-ai.html quite disappointed. (Also as with all security related blogs nowadays, dont read the comments, people have lost their minds, it always was iffy, but the last few years every security related blog that reaches some fame is filled with madmen).

                • froztbyte@awful.systems
                  link
                  fedilink
                  English
                  arrow-up
                  0
                  ·
                  2 months ago

                  Ah, I don’t listen to riskybiz because ugh podcast

                  Schneier’s a dipshit well past his prime, though. people should stop listening to that ossified doorstop

    • V0ldek@awful.systems
      link
      fedilink
      English
      arrow-up
      0
      ·
      2 months ago

      He is even politely asked at first “no AI sludge please” which is honestly way more self-restraint than I would have on my maintained projects, but he triples down with a fucking AI-generated changeset.

      • Soyweiser@awful.systems
        link
        fedilink
        English
        arrow-up
        0
        ·
        edit-2
        2 months ago

        Ow god that thread. And what is it with ‘law professionals’ like this? I also recall a client in a project who had a law background who was quite a bit of pain to work with. (Also amazing that he doesn’t get that getting a reaction where somebody tries out your very specific problem at all is already quite something, 25k open issues ffs)

      • froztbyte@awful.systems
        link
        fedilink
        English
        arrow-up
        0
        ·
        edit-2
        2 months ago

        I ended up doing a messy copypasta of the blog through wc -c: 21574

        dude was so peeved at getting told no (after continually wasting other peoples’ time and goodwill), he wrote a 21.5KiB screed just barely shy of full-on DARVO (and, frankly, I’m being lenient here only because of perspective (of how bad it could’ve been))

        as Soyweiser also put it: a bit of a spelunk around the rest of his blog is also Quite Telling in what you may find

        fuck this guy comprehensively, long may his commits be rejected

        (e: oh I just saw Soyweiser also linked to that post, my bad)

        • Soyweiser@awful.systems
          link
          fedilink
          English
          arrow-up
          0
          ·
          2 months ago

          It gets better btw, nobody mentioned this so far. But all this is over warnings. From what I can tell it still all compiles and works, the only references for the build failing seem to come from the devs, not the issue reporter.

  • BlueMonday1984@awful.systemsOP
    link
    fedilink
    English
    arrow-up
    0
    ·
    2 months ago

    Breaking news from 404 Media: the Repubs introduced a new bill in an attempt to ban AI from being regulated:

    “…no State or political subdivision thereof may enforce any law or regulation regulating artificial intelligence models, artificial intelligence systems, or automated decision systems during the 10 year period beginning on the date of the enactment of this Act,” says the text of the bill introduced Sunday night by Congressman Brett Guthrie of Kentucky, Chairman of the House Committee on Energy and Commerce. The text of the bill will be considered by the House at the budget reconciliation markup on May 13.

    If this goes through, its full speed ahead on slop.

  • Soyweiser@awful.systems
    link
    fedilink
    English
    arrow-up
    0
    ·
    2 months ago

    I’m gonna do something now that prob isn’t that allowed, nor relevant for the things we talk about, but I saw that the European anti-conversion therapy petition is doing badly, and very likely not going to make it. https://eci.ec.europa.eu/043/public/#/screen/home But to try and give it a final sprint, I want to ask any of you Europeans, or people with access to networks which include a lot of Europeans to please spread the message and sign it. Thanks! (I’m quite embarrassed The Netherlands has not even crossed 20k for example, shows how progressive we are). Sucks that all the empty of political power petitions get a lot of support and this gets so low, and it ran for ages. But yes, sorry if this breaks the rules (and if it gets swiftly removed it is fine), and thanks if you attempt to help.

    • mountainriver@awful.systems
      link
      fedilink
      English
      arrow-up
      0
      ·
      2 months ago

      I signed it but had the same assumption that it wouldn’t pass with 400k signatures over the first 362 days. But it did! The graph the last three days must look vertical.

      Anyone who’s eligible and wants to sign it can still do so today, Saturday 17th. To show the popular support.

    • self@awful.systems
      link
      fedilink
      English
      arrow-up
      0
      ·
      2 months ago

      I’m gonna do something now that prob isn’t that allowed, nor relevant for the things we talk about

      I consider this both allowed and relevant, though I unfortunately can’t sign it myself

  • Soyweiser@awful.systems
    link
    fedilink
    English
    arrow-up
    0
    ·
    2 months ago

    More of a notedump than a sneer. I have been saying every now and then that there was research and stuff showing that LLMs require exponentially more effort for linear improvements. This is post by Iris van Rooij (Professor of Computational Cognitive Science) mentions something like that (I said something different, but The intractability proof/Ingenia theorem might be useful to look into): https://bsky.app/profile/irisvanrooij.bsky.social/post/3lpe5uuvlhk2c

    • aio@awful.systems
      link
      fedilink
      English
      arrow-up
      0
      ·
      2 months ago

      I think this theorem is worthless for practical purposes. They essentially define the “AI vs learning” problem in such general terms that I’m not clear on whether it’s well-defined. In any case it is not a serious CS paper. I also really don’t believe that NP-hardness is the right tool to measure the difficulty of machine learning problems.

    • scruiser@awful.systems
      link
      fedilink
      English
      arrow-up
      0
      ·
      2 months ago

      You can make that point empirically just looking at the scaling that’s been happening with ChatGPT. The Wikipedia page for generative pre-trained transformer has a nice table. Key takeaway, each model (i.e. from GPT-1 to GPT-2 to GPT-3) is going up 10x in tokens and model parameters and 100x in compute compared to the previous one, and (not shown in this table unfortunately) training loss (log of perplexity) is only improving linearly.

  • YourNetworkIsHaunted@awful.systems
    link
    fedilink
    English
    arrow-up
    0
    ·
    2 months ago

    I suspect that the backdoor attempt to prevent state regulation on literally anything that the federal government spends any money on by extending the Volker rule well past the point of credulity wasn’t an unintended consequence of this strategy.

  • Sailor Sega Saturn@awful.systems
    link
    fedilink
    English
    arrow-up
    0
    ·
    edit-2
    2 months ago

    The latest in chatbot “assisted” legal filings. This time courtesy of an Anthropic’s lawyers and a data scientist, who tragically can’t afford software that supports formatting legal citations and have to rely on Clippy instead: https://www.theverge.com/news/668315/anthropic-claude-legal-filing-citation-error

    After the Latham & Watkins team identified the source as potential additional support for Ms. Chen’s testimony, I asked Claude.ai to provide a properly formatted legal citation for that source using the link to the correct article. Unfortunately, although providing the correct publication title, publication year, and link to the provided source, the returned citation included an inaccurate title and incorrect authors. Our manual citation check did not catch that error. Our citation check also missed additional wording errors introduced in the citations during the formatting process using Claude.ai.

    Don’t get high on your own AI as they say.

    • YourNetworkIsHaunted@awful.systems
      link
      fedilink
      English
      arrow-up
      0
      ·
      2 months ago

      A quick Google turned up bluebook citations from all the services that these people should have used to get through high school and undergrad. There may have been some copyright drama in the past but I would expect the court to be far more forgiving of a formatting error from a dumb tool than the outright fabrication that GenAI engages in.

    • froztbyte@awful.systems
      link
      fedilink
      English
      arrow-up
      0
      ·
      2 months ago

      I wonder how many of these people will do a Very Sudden opinion reversal once these headwinds wind disappear

  • self@awful.systems
    link
    fedilink
    English
    arrow-up
    0
    ·
    2 months ago

    everybody’s loving Adam Conover, the comedian skeptic who previously interviewed Timnit Gebru and Emily Bender, organized as part of the last writer’s strike, and generally makes a lot of somewhat left-ish documentary videos and podcasts for a wide audience

    5 seconds later

    we regret to inform you that Adam Conover got paid to do a weird ad and softball interview for Worldcoin of all things and is now trying to salvage his reputation by deleting his Twitter posts praising it under the guise of pseudo-skepticism

    • db0@lemmy.dbzer0.com
      link
      fedilink
      English
      arrow-up
      0
      ·
      edit-2
      2 months ago

      I suspect Adam was just getting a bit desperate for money. He hasn’t done anything significant since his Adam Ruins Everything days and his pivot to somewhat lefty-union guy on youtube can’t be bringing all that much advertising money.

      Unfortunately he’s discovering that reputation is very easy to lose when endorsing cryptobros.

      • froztbyte@awful.systems
        link
        fedilink
        English
        arrow-up
        0
        ·
        2 months ago

        “just”?

        “unfortunately”?

        that’s a hell of a lot of leeway being extended for what is very easily demonstrably credulous PR-washing

      • Eugene V. Debs' Ghost@lemmy.dbzer0.com
        link
        fedilink
        English
        arrow-up
        0
        ·
        2 months ago

        Unfortunately he’s discovering that reputation is very easy to lose when endorsing cryptobros.

        I think its accurate to just say that someone who is well known for reporting on exposing bullshit by various companies who then shills bullshit for a company, shows they aren’t always accurate.

        It then also enables people to question if they got something else wrong on other topics. “Was he wrong about X? Did Y really happened or was it fluffed up for a good story? Did Z happen? The company has some documents that show they didn’t intend for it to happen.”

        There’s a skeptic podcast I liked that had its host federally convicted for wire fraud.

        Dunning co-founded Buylink, a business-to-business service provider, in 1996, and served at the company until 2002. He later became eBay’s second-biggest affiliate marketer;[3] he has since been convicted of wire fraud through a cookie stuffing scheme, for his company fraudulently obtaining between $200,000 and $400,000 from eBay. In August 2014, he was sentenced to 15 months in prison, followed by three years of supervision.

        I took it if he was willing to aid in scamming customers, he is willing to aid in scamming or lying to listeners.

        • db0@lemmy.dbzer0.com
          link
          fedilink
          English
          arrow-up
          0
          ·
          2 months ago

          Absolutely, the fact that his whole reputation is built around exposing people and practices like these, makes this so much worse. People are willing to (somewhat) swallow some gamer streamer endorsing some shady shit in order to keep food on their plate, but people don’t tolerate their skeptics selling them bullshit.

      • self@awful.systems
        link
        fedilink
        English
        arrow-up
        0
        ·
        2 months ago

        me too. this heel turn is disappointing as hell, and I suspected fuckery at first, but the video excerpts Rebecca clipped and Conover’s actions on Twitter since then make it pretty clear he did this willingly.

      • YourNetworkIsHaunted@awful.systems
        link
        fedilink
        English
        arrow-up
        0
        ·
        2 months ago

        I think it’s just “World” now. They’ve apparently had a pretty big marketing push in the states of late, trying to convince trendsetters and influencers to surrender their eyeballs to The Orb.

    • Soyweiser@awful.systems
      link
      fedilink
      English
      arrow-up
      0
      ·
      edit-2
      2 months ago

      Ai is part of Idiocracy. The automatic layoffs machine. For example. And do not think we need more utopian movies like Idiocracy.

    • BlueMonday1984@awful.systemsOP
      link
      fedilink
      English
      arrow-up
      0
      ·
      2 months ago

      I can see that working.

      The basic conceit of Idiocracy is that its a dystopia run by complete and utter morons, and with AI’s brain-rotting effects being quite well known, swapping the original plotline’s eugenicist “dumb outbreeding the smart” setup with an overtly anti-AI “AI turned humanity dumb” setup should be a cakewalk. Given public sentiment regarding AI is pretty strongly negative, it should also be easy to sell to the public.

      • rook@awful.systems
        link
        fedilink
        English
        arrow-up
        0
        ·
        2 months ago

        It’s been a while since I watched idiocracy, but from recollection, it imagined a nation that had:

        • aptitude testing systems that worked
        • a president people liked
        • a relaxed attitude to sex and sex work
        • someone getting a top government job for reasons other than wealth or fame
        • a straightforward fix for an ecological catastrophe caused by corporate stupidity being applied and accepted
        • health and social care sufficient for people to have families as large as they’d like, and an economy that supported those large families

        and for some reason people keep referring to it as a dystopia…

    • corbin@awful.systems
      link
      fedilink
      English
      arrow-up
      0
      ·
      2 months ago

      Trying to remember who said it, but there’s a Mastodon thread somewhere that said it should be called Theocracy. The introduction would talk about the quiverfull movement, the Costco would become a megachurch (“Welcome to church. Jesus loves you.”), etc. It sounds straightforward and depressing.