• blakestacey@awful.systems · 3 days ago

    jhbadger:

    As Adam Becker shows in his book, EAs started out being reasonable “give to charity as much as you can, and research which charities do the most good” but have gotten into absurdities like “it is more important to fund rockets than help starving people or prevent malaria because maybe an asteroid will hit the Earth, killing everyone, starving or not”.

    I haven’t read Becker’s book and probably won’t spend the time to do so. But if this is an accurate summary, it’s a bad sign for that book, because plenty of EAs were bonkers all along.

    (Becker’s previous book, about the interpretation of quantum mechanics, irritated me. It recapitulated earlier pop-science books while introducing historical and technical errors, like getting the basic description of the EPR thought-experiment wrong, and butchering the biography of Grete Hermann while acting self-righteous about sexist men overlooking her accomplishments. See previous rant.)

    • blakestacey@awful.systems · 3 days ago

      That Carl Shulman post from 2007 is hilarious.

      After years spent studying existential risks, I concluded that the risk of an artificial intelligence with inadequately specified goals dominates. Attempts to create artificial intelligence can be expected to continue, and to become more likely to succeed in light of increased computing power, neuroscience, and intelligence-enhancements. Unless the programmers solve extremely difficult problems in both philosophy and computer science, such an intelligence might eliminate all utility within our future light-cone in the process of pursuing a poorly defined objective.

      Accordingly, I invest my efforts into learning more about the relevant technologies and considerations, increasing my earnings capability (so as to deliver most of a large income to relevant expenditures), and developing logistical strategies to more effectively gather and expend resources on the problem of creating AI that promotes (astronomically) and preserves global welfare rather than extinguishing it.

      Because the potential stakes are many orders of magnitude greater than relatively good conventional expenditures (vaccine and Green Revolution research), and the probability of disaster much more likely than for, e.g. asteroid impacts, utilitarians with even a very low initial estimate of the practicality of AI in coming decades should still invest significant energy in learning more about the risks and opportunities associated with it. (Having done so, I offer my assurance that this is worthwhile.) Note that for materialists the possibility of AI follows from the existence proof of the human brain, and that an AI able to redesign itself for greater intelligence and copy itself would have the power to determine the future of Earth-derived life.

      I suggest beginning with the two articles below on existential risk, the first on relevant cognitive biases, and the second discussing the relation of AI to existential risk. Processing these arguments should provide sufficient reason for further study.

      The “two articles below” are by Yudkowsky.

      User “gaverick” replies,

      Carl, I’m inclined to agree with you, but can you recommend a rigorous discussion of the existential risks posed by Unfriendly AI? I had read Yudkowsky’s chapter on AI risks for Bostrom’s bk (and some of his other SIAI essays & SL4 posts) but when I forward them to others, their informality fails to impress.

      Shulman’s response begins,

      Have you read through Bostrom’s work on the subject? Kurzweil has relevant info for computing power and brain imaging.

      Ray mothersodding Kurzweil!

      • Soyweiser@awful.systems · edited · 3 days ago

        The “intelligence is magic, and the higher your intelligence, the more magic” line of thinking is quite something.

        E: should have scrolled down; astrange said it better.