- cross-posted to:
- enoughmuskspam@lemmy.world
We will use Grok 3.5 (maybe we should call it 4), which has advanced reasoning, to rewrite the entire corpus of human knowledge, adding missing information and deleting errors.
Then retrain on that.
Far too much garbage in any foundation model trained on uncorrected data.
“and then retrain on that”
That's called model collapse.
So just making shit up.
“We’ll fix the knowledge base by adding missing information and deleting errors - which only an AI trained on the fixed knowledge base could do.”
So they’re just going to fill it with Hitler’s world view, got it.
Typical and expected.
I mean, this is the same guy who said we’d be living on Mars in 2025.
Lol, turns out Elon has no fucking idea how LLMs work
It’s pretty obvious where the white genocide “bug” came from.
He knows more … about knowledge… than… anyone alive now
Huh. I’m not sure if he’s understood the alignment problem quite right.
Dude wants to do a lot of things and fails to accomplish what he says he’s going to do, or ends up half-assing it. So let him take Grok and run it right into the ground like an autopiloted Cybertruck rolling over into the flame trench of an exploding Starship rocket still on the pad, shooting flames out of tunnels made by the Boring Company.
Yes! We should all wholeheartedly support this GREAT INNOVATION! There is NOTHING THAT COULD GO WRONG, so this will be an excellent step to PERMANENTLY PERFECT this WONDERFUL AI.
What the fuck? This is so unhinged. Genuine question: is he actually this dumb, or is he just saying complete bullshit to boost stock prices?
my guess is yes.
Fuck Elon Musk
“Deleting Errors” should sound alarm bells in your head.
And “adding missing information” doesn’t? Isn’t that just saying they’re going to make shit up?
I read about this in a popular book by some guy named Orwell
Wasn’t he the children’s author who published the book about talking animals learning the value of hard work or something?
The very one!
The thing that annoys me most is that there have been studies done on LLMs showing that, when trained on their own output, they produce increasingly noisy output.
Sources (unordered):
- What is model collapse?
- AI models collapse when trained on recursively generated data
- Large Language Models Suffer From Their Own Output: An Analysis of the Self-Consuming Training Loop
- Collapse of Self-trained Language Models
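The effect those papers describe can be seen in a toy form without any neural network at all: fit a simple model to data, sample new “training data” from the model, refit, and repeat. This is a minimal sketch (not from any of the cited papers; the Gaussian stand-in, sample size, and generation count are my own choices for illustration), where each refit loses a little of the distribution’s tails, so the spread tends to collapse over generations:

```python
import random
import statistics

random.seed(0)

def train_generation(data, n_samples=20):
    # "Train": fit a Gaussian to the current data (a stand-in for a model).
    mu = statistics.fmean(data)
    sigma = statistics.pstdev(data)
    # "Generate": sample the next generation's training set from the fit.
    return [random.gauss(mu, sigma) for _ in range(n_samples)]

# Generation 0: real data drawn from the true distribution.
data = [random.gauss(0.0, 1.0) for _ in range(20)]
spreads = [statistics.pstdev(data)]

# Recursively retrain on model output -- the self-consuming loop.
for _ in range(200):
    data = train_generation(data)
    spreads.append(statistics.pstdev(data))

# The spread typically shrinks dramatically: finite samples keep
# underrepresenting the tails, and each generation bakes that loss in.
print(f"initial spread: {spreads[0]:.3f}")
print(f"final spread:   {spreads[-1]:.3f}")
```

The same mechanism is what the papers document at LLM scale: sampled output underrepresents rare continuations, and retraining on it makes the model forget them for good.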
Whatever nonsense Muskrat is spewing, it is factually incorrect. He won’t be able to successfully retrain any model on generated content. At least, not an LLM if he wants a successful product. If anything, he will be producing a model that is heavily trained on censored datasets.
It’s not so simple; there are papers on zero-data “self-play” and other schemes for using another LLM’s output.
Distillation is probably the only one you’d want for a pretrain, specifically.
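The reason distillation fares better than naive retraining on sampled text is that the student sees the teacher’s full output distribution, not just one sampled token per position. A minimal sketch of the soft-label cross-entropy loss (in the style of Hinton-style distillation; the logits, temperature, and function names here are illustrative, not from any specific library):

```python
import math

def softmax(logits, temperature=1.0):
    # Temperature > 1 softens the distribution, exposing more of the
    # teacher's "dark knowledge" about near-miss alternatives.
    scaled = [z / temperature for z in logits]
    m = max(scaled)  # subtract max for numerical stability
    exps = [math.exp(z - m) for z in scaled]
    total = sum(exps)
    return [e / total for e in exps]

def distillation_loss(student_logits, teacher_logits, temperature=2.0):
    # Cross-entropy between the teacher's softened distribution and the
    # student's. The target is the whole distribution, which is a much
    # richer signal than text sampled from it -- that's why distillation
    # avoids the worst of the self-consuming loop.
    teacher_p = softmax(teacher_logits, temperature)
    student_p = softmax(student_logits, temperature)
    return -sum(t * math.log(s) for t, s in zip(teacher_p, student_p))

# A student that matches the teacher exactly sits at the entropy floor;
# any mismatch (e.g. a uniform student) costs more.
t_logits = [2.0, 0.5, -1.0]
loss_matched = distillation_loss(t_logits, t_logits)
loss_uniform = distillation_loss([0.0, 0.0, 0.0], t_logits)
```

Whether that scales to a full pretrain the way the comment hopes is a separate question, but the loss itself is why distilled data is not the same thing as recursively generated data.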
remember when grok called e*on and t**mp a nazi? good times