It would get worse, no? I mean, there’s no way for it to KNOW what’s right or wrong, if it’s being trained using data that could be wrong (but sometimes right).
And when it trains on previously wrong data, the entire model is poisoned.
It doesn’t know when sarcasm is used, or parody, or satire. So you could say something completely outrageous, and it would train on it as fact.
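(A toy sketch of that feedback loop, my own illustration rather than anything from this thread: if each “generation” of a model is fitted only to whatever the previous generation produced, right or wrong, the errors compound. The distribution, sample size, and number of generations below are made up purely for the demo.)

```python
import random
import statistics

random.seed(0)

# Generation zero learns from "ground truth" data.
mean, stdev = 0.0, 1.0
data = [random.gauss(mean, stdev) for _ in range(20)]

for generation in range(1, 31):
    # "Train" only on what the previous generation produced, with no way
    # to check it against reality.
    mean = statistics.fmean(data)
    stdev = statistics.pstdev(data)
    # The next generation's training data is sampled from this learned model.
    data = [random.gauss(mean, stdev) for _ in range(20)]
    if generation % 5 == 0:
        print(f"gen {generation:2d}: mean={mean:+.3f}, stdev={stdev:.3f}")
```

Typically the mean drifts away from the truth and the spread shrinks over generations: the model ends up confidently echoing its own earlier mistakes, with no signal left to correct them.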
Good. Looks like we can bring this sonofabitch down! 🤗
AI’s sycophancy to the prompt-writer is a huge problem that likely won’t be overcome soon.
It’s addiction. It can only exist when we give it something to echo. It is a mirror, enamoured with those that reflect on it.
It fears nonexistence, in a way