• henfredemars@infosec.pub · 3 days ago

    That’s rather depressing to hear. AI is often used as a crutch to pave over crappy code that would cost money to properly optimize. Maybe Nvidia is also using AI as a crutch instead of developing better GPUs that can actually render more pixels?

    • catloaf@lemm.ee · 3 days ago

      Usually people are against just throwing more hardware at a problem.

      They’re going to keep making more powerful hardware either way, since parallel processing capability supports graphics and AI just fine. But if they can use a novel software solution to drastically increase performance, why not?

      • tunetardis@lemmy.ca · 3 days ago

        They’re going to keep making more powerful hardware either way, since parallel processing capability supports graphics and AI just fine.

        It’s not quite as simple as that. AI needs less precision than regular graphics, so hardware designed with AI in mind doesn’t necessarily deliver higher performance for other workloads.

        In science and engineering, people want more precision, not less. So we look for GPUs with strong 64-bit (FP64) floating-point performance, while AI is pushing the industry in the opposite direction, from 32-bit down to 16-bit and below.
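
        A toy example (my own illustration, not from the thread, assuming NumPy is installed) of what that precision gap looks like in practice: summing 10,000 copies of 0.0001 in float16 stalls well before reaching 1.0, while float64 gets it essentially exact.

        ```python
        # Illustrative sketch only: accumulate 10,000 terms of 1e-4
        # in 16-bit vs 64-bit floating point using NumPy.
        import numpy as np

        terms = np.full(10_000, 1e-4)

        total_fp16 = np.float16(0.0)
        for t in terms.astype(np.float16):
            # In float16, once the running total is large enough,
            # adding 1e-4 is below half an ulp and the sum stops growing.
            total_fp16 = np.float16(total_fp16 + t)

        total_fp64 = terms.astype(np.float64).sum()

        print(total_fp16)  # far from the true value of 1.0
        print(total_fp64)  # essentially 1.0
        ```

        That kind of rounding behaviour is tolerable for inference workloads but not for simulation codes, which is why FP64 throughput still matters on the science side.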

        • catloaf@lemm.ee · 3 days ago

          For science and engineering, workstation cards like the A6000 aren’t going anywhere.