While this linear model’s overall predictive accuracy barely outperformed random guessing,

I was tempted to write this up for Pivot but fuck giving that blog any sort of publicity.

The rest of the site is a stupendous assortment of content within a very narrow field of focus, which made this ideal for sneerclub and not just techtakes.

  • BlueMonday1984@awful.systems · 18 hours ago

    I was tempted to write this up for Pivot but fuck giving that blog any sort of publicity.

    On the one hand, I can see you not wanting to give the fucker attention; on the other hand, AI’s indelible link to fascism is something that needs to be hammered home, and shit like this gives you a golden opportunity to do it.

    • Architeuthis@awful.systems · 13 hours ago

      The post is using traditional, orthodox, frankincense-scented machine learning techniques, though; they aren’t just asking an LLM.

      This is AI from when we were using it to decide if an image is of a dog or a cat, not how to best disenfranchise all creatives.
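      For context, “traditional” here means something like fitting a simple linear classifier and checking whether it beats a chance baseline, as in the linear-model result quoted at the top of the thread. A generic pure-Python sketch of that workflow — not the post’s actual code, and every name in it is made up:

```python
import math
import random

random.seed(0)

# Synthetic two-class data: one heavily noisy feature, so the
# signal is weak and the model should only modestly beat chance.
def make_data(n=400):
    data = []
    for _ in range(n):
        label = random.randint(0, 1)
        x = label + random.gauss(0, 2.0)  # noise swamps the class gap
        data.append((x, label))
    return data

def sigmoid(z):
    z = max(-30.0, min(30.0, z))  # clamp to avoid math.exp overflow
    return 1.0 / (1.0 + math.exp(-z))

# Plain stochastic gradient descent on the log loss.
def train_logistic(data, lr=0.1, epochs=200):
    w, b = 0.0, 0.0
    for _ in range(epochs):
        for x, y in data:
            p = sigmoid(w * x + b)
            w -= lr * (p - y) * x   # dL/dw = (p - y) * x
            b -= lr * (p - y)      # dL/db = (p - y)
    return w, b

def accuracy(data, predict):
    return sum(predict(x) == y for x, y in data) / len(data)

train, test = make_data(), make_data()
w, b = train_logistic(train)

model_acc = accuracy(test, lambda x: int(sigmoid(w * x + b) >= 0.5))
# Chance baseline: always guess the majority class from training data.
majority = max((0, 1), key=lambda c: sum(y == c for _, y in train))
baseline_acc = accuracy(test, lambda x: majority)

print(f"model: {model_acc:.2f}  chance baseline: {baseline_acc:.2f}")
```

      With noise this heavy, the fitted model only edges out the always-guess-the-majority baseline — the “barely outperformed random guessing” situation the quoted post describes.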

    • David Gerard@awful.systemsOPM · 14 hours ago

      I have a half-written text about working definitions of intelligence in the AI field and whoops, it’s all racism!

      • David Gerard@awful.systemsOPM · 14 hours ago

        wot i got so far:

        Current “artificial general intelligence” researchers have a repeated habit of using a definition of “intelligence” from psychologist and ardent race scientist Linda Gottfredson. The definition looks innocuous, but was from Gottfredson’s 1994 Wall Street Journal op-ed, “Mainstream Science on Intelligence,” a farrago of race science put forward as a defense of Charles Murray’s book The Bell Curve — signed off by 52 other race scientists, 20 of whom were from the Pioneer Fund.

        Gottfredson’s piece was cited in Shane Legg’s Ph.D. dissertation, “Machine Super Intelligence,” in which he called it “an especially interesting definition as it was given as part of a group statement signed by 52 experts in the field” and said it therefore represented “a mainstream perspective” — an odd way to refer to Pioneer Fund race scientists. Somehow, this passed Legg’s dissertation committee.

        The definition made it from Legg’s Ph.D. into Microsoft and OpenAI’s “Sparks of AGI” paper, and from there to everyone else who copies citations to fill out their bibliography. When called out on this, Microsoft did finally remove the citation.

        • YourNetworkIsHaunted@awful.systems · 2 hours ago

          Surely there have to be some cognitive scientists who are at least a little bit less racist who could furnish alternative definitions? The actual definition at issue does seem fairly innocuous from a layman’s perspective: “a very general mental capability that, among other things, involves the ability to reason, plan, solve problems, think abstractly, comprehend complex ideas, learn quickly and learn from experience.” (Aside: it doesn’t do our credibility any favors that, for all the concern about the source, I had to track all the way to Microsoft’s paper to find the quote at issue.)

          The core issue is obviously that they either took it completely out of context or else decided it wasn’t important that their source was explicitly arguing in favor of specious racist interpretations of shitty data. But it also feels like breaking down the idea itself may be valuable.

          Like, is there even a real consensus that those individual abilities or skills are actually correlated? Is it possible to be less vague than “among other things”? What does it mean to be “more able to learn from experience” or “more able to plan” in a way rooted in an innate capacity rather than in the context and availability of good information? And on some level, if that kind of intelligence is a unique and meaningful thing not emergent from context and circumstance, how are we supposed to see it emerge from statistical analysis of massive volumes of training data? (Machine learning models are nothing but context and circumstance.)

          I don’t know enough about the state of non-racist neuroscience, or whatever the relevant field is, to know if these are even the right questions to ask, but it feels like there’s more room to question the definition itself than we’ve been taking advantage of. If nothing else, the vagueness means that we haven’t really gotten any more specific than “the brain’s ability to brain good.”