• Showroom7561@lemmy.caOP · 2 days ago

    It would get worse, no? I mean, there’s no way for it to KNOW what’s right or wrong if it’s being trained on data that could be wrong (but is sometimes right).

    And once it trains on data that was already wrong, the entire model is poisoned.

    It doesn’t know when sarcasm is being used, or parody, or satire. So you could say something completely outrageous, and it would train on it as fact.
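
    If it helps to picture that feedback loop, here’s a toy sketch (my own illustration, not anything from the thread): a stand-in “model” that only learns the mean and spread of its training data, where each generation is retrained purely on the previous generation’s outputs. With no ground truth to correct against, the estimation errors compound and the distribution drifts.

    ```python
    import random
    import statistics

    # Hypothetical toy: generation 0 trains on real data,
    # every later generation trains only on model-made data.
    random.seed(0)
    data = [random.gauss(0.0, 1.0) for _ in range(200)]  # the real data

    for generation in range(15):
        mu = statistics.mean(data)
        sigma = statistics.stdev(data)
        print(f"gen {generation:2d}: mean={mu:+.3f}  stdev={sigma:.3f}")
        # No ground truth to correct against: the next generation
        # trains on samples the previous model produced, errors included.
        data = [random.gauss(mu, sigma) for _ in range(200)]
    ```

    Run it and the estimates wander away from the real mean of 0 and spread of 1; with enough rounds the spread tends to collapse and rare values vanish, which is the same compounding effect, just in miniature.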

    Good. Looks like we can bring this sonofabitch down! 🤗