Correct. There are other categories as well, like high risk, which covers use cases that aren't banned outright but are only allowed under very strict conditions. Insurance companies need to come up with a price for you to pay, and they use statistics for that. I'd argue it's okay for them to use a weather model to predict whether you live in a flood zone and have to pay extra, so I'd say it's correct not to put them in the "unacceptable" category. I'm not sure whether there are similar cases with medical insurance. Discrimination needs to be illegal, but maybe there are acceptable applications, like a fitness app that doesn't send data back to the insurer, or cross-checking payments or errors in medication, I don't know.
And I don't think censorship or social media crawling is illegal in the first place, even without AI.
Same with hiring. Maybe they need to translate an application, or have AI help somewhere else in the process, like coming up with the wording for a job advertisement.
Unacceptable by literal definition.
They did create a very reasonable list of what they deem unacceptable. At last, some good news.
Some of the unacceptable activities include:
This doesn't exclude other uses from regulation, though: AI used for anything medical is deemed high risk and would be subject to heavy regulation.
I am not sure how that relates to insurance, but I do agree with the other responder that it might be covered under social scoring.
Of course, how these rules hold up in practice and over time remains to be seen. You're right to remain critical.
Social scoring should include insurance and hiring evaluation, right?