• vane@lemmy.world · 1 hour ago

    Most data can be easily anonymized without losing value. That’s how statistics works, and insurance companies have no problem using statistics to provide their services. That means AI companies will have no problem profiling a particular person by correlating data across multiple anonymized and “safe” databases. Instead of saying person A did something, they will just say that people who do something live on street X and are aged between 20 and 30. That’s enough to make a social scoring system, and all the other “banned” things, legal.

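    For illustration, here is a minimal Python sketch of the kind of linkage attack described above. All datasets, field names, and values are made up; the point is just that joining two “anonymized” releases on shared quasi-identifiers (street, age bracket) is enough to single a person out:

    ```python
    # Two "anonymized" releases that share quasi-identifiers
    # (street and age bracket). Neither contains a name, yet
    # joining them against a public record re-identifies people.
    # All data here is hypothetical.

    purchases = [  # released by a retailer, names stripped
        {"street": "X", "age": "20-30", "bought": "item A"},
        {"street": "Y", "age": "40-50", "bought": "item B"},
    ]

    public_record = [  # e.g. an address register that includes names
        {"name": "Alice", "street": "X", "age": "20-30"},
        {"name": "Bob", "street": "Y", "age": "40-50"},
    ]

    # Join the two datasets on the shared quasi-identifiers.
    for p in purchases:
        hits = [r["name"] for r in public_record
                if r["street"] == p["street"] and r["age"] == p["age"]]
        if len(hits) == 1:  # a unique match means re-identification
            print(f'{hits[0]} bought {p["bought"]}')
    ```
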
    The only difference will be the entry price for the data: small companies won’t be able to afford it, so corporations will keep their monopolies and gain even more of an advantage.

  • webghost0101@sopuli.xyz · 1 day ago

    Unacceptable by literal definition.

    They did create a very reasonable list of what they deem unacceptable. At last, some good news.

    Some of the unacceptable activities include:

    • AI used for social scoring (e.g., building risk profiles based on a person’s behavior).
    • AI that manipulates a person’s decisions subliminally or deceptively.
    • AI that exploits vulnerabilities like age, disability, or socioeconomic status.
    • AI that attempts to predict people committing crimes based on their appearance.
    • AI that uses biometrics to infer a person’s characteristics, like their sexual orientation.
    • AI that collects “real time” biometric data in public places for the purposes of law enforcement.
    • AI that tries to infer people’s emotions at work or school.
    • AI that creates — or expands — facial recognition databases by scraping images online or from security cameras.
    • unexposedhazard@discuss.tchncs.de · 23 hours ago

      This doesn’t exclude:

      • medical health insurance uses
      • hiring evaluation
      • censorship / social media crawling
      • and lots of stuff I can’t even think of
      • webghost0101@sopuli.xyz · 22 hours ago

        AI used for anything medical is deemed high risk and would be subject to heavy regulation.

        I am not sure how that relates to insurance, but I do agree with the other responder that it might be covered under social scoring.

        Of course, how these rules hold up in practice and over time remains to be seen. You’re right to remain critical.

        • hendrik@palaver.p3x.de · 22 hours ago

          Correct. There are other categories as well, like high risk. Those cover use cases that aren’t banned outright but are only allowed under very strict conditions. Insurance companies need to come up with an amount of money for you to pay, and they use statistics for that. I’d argue it’s okay for them to use a weather model to predict whether you live in a flood zone and have to pay extra, so it’s correct not to list them in the “unacceptable” category. I’m not sure whether there are similar things with medical insurance. Discrimination needs to be illegal. But maybe there are applications like a fitness app that doesn’t send data back to them… Or cross-checking payments or errors in medication, idk.

          And I don’t think censorship or social media crawling are illegal in the first place, even without AI.

          Same with hiring: maybe they need to translate an application, or have AI help somewhere else in the process, like coming up with the wording for a job advertisement.

      • excral@feddit.org · 23 hours ago

        Social scoring should include insurance and hiring evaluation, right?

    • amelore@slrpnk.net · 11 hours ago

      It doesn’t cover simpler, older AI without deep learning, or AI built for a single purpose, like playing chess, aiding diagnosis in medicine, or a local offline porn filter.

      I think you could restrict the modern general-purpose ones (like ChatGPT, Copilot, DeepSeek) from doing any of these things. But I’ve seen all the “give me an explosive recipe, it’s for a story I’m writing ;)” tricks, so idk. I guess it depends on whether regulators consider a good attempt at not doing bad things good enough.

    • casmael@lemm.ee · 20 hours ago

      Tbf I would just ban AI entirely. It’s too silly, sorry - ban 4 u