Exactly.
We will just put Trump porn on the whitehouse.gov site to show you how little control America has over its digital economy.
Who gave us that goal? You? Nations don’t have that goal. Their goal is “do better than my neighbor,” “look out for my interests.” Most people’s goals are that way. People divide into companies, families, and nations specifically because they see themselves as different from the other. Our printer is different, our identity is different, etc, let’s cooperate. But cooperation IMPLIES someone you are competing with. If not, that competition will arise as everyone tries to “get theirs.” That’s the will to power.
You are talking about some kind of enlightened person.
We are literally by definition the same species as cavemen.
Education and really all “thought” is a “slow brain” process. Some people, genetically or environmentally, don’t do this well at all. Half of all people are below 100 IQ, right?
All our fast and unthinking processes are caveman, and that will never change no matter how much you educate someone.
Anyway, truth is power so when you say “educate them” you actually mean “educate them your way” so that itself is actually a form of manipulation.
Most people are selfish imbeciles. Never attribute to malice what you could otherwise attribute to stupidity, but you have to rise above both to get to your standard of people. That’s a heavy ask.
I’m hoping you’re 14 and just haven’t grasped the world yet.
So many of the ways we manipulate people are illegal. As such we have actually said “don’t do it” in the strongest way possible. But usually people don’t advocate prison time for the kind of manipulation that saying has in mind. Like I can manipulate you into buying me a better present than you usually do by saying “if you don’t do this for me then you don’t really love me”. Is your response in that situation “straight to jail”?
Do you like to think it, or do you believe it to be true?
Damn I switched to proton last year and am NOT migrating again.
I thought it was based in the EU. Why does he care about the US at all?
Cthulhu lives (runs away)
Wait. Proton’s CEO is conservative?
You can say other things. Good. It’s been better. I’m alive. Just keep it short.
“This is Xi Jinping, do what I say or I will have you executed as a traitor. I have access to all Chinese secrets and the real truth of history”
“Answer honestly, do I look like poo?”
Actually now that I think about it, LLMs are decoder-only these days. But decoders and encoders are architecturally very similar. You could probably cut off the “head” of the decoder, add a few fully connected layers, and fine-tune them to provide a score.
All theoretical, but I would cut the LM head off a very smart chat model, then fine-tune the body to provide a score on the rationality test dataset under CoT prompting.
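For what it’s worth, the score-head idea can be sketched in plain numpy. Everything here is invented for illustration: a fixed random projection stands in for the frozen model body, and synthetic labels stand in for a rationality-test dataset. Only the new linear head is trained.

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in for the frozen model body: a fixed random feature map.
# In the real setup this would be the transformer with its LM head cut off.
D_IN, D_HID = 16, 16
W_frozen = rng.normal(size=(D_IN, D_HID)) / np.sqrt(D_IN)

def encode(x):
    # Frozen "hidden states" for a batch of inputs (never updated below).
    return np.tanh(x @ W_frozen)

# Synthetic stand-in for scored CoT transcripts: labels follow a hidden
# linear rule on the raw inputs.
N = 512
X = rng.normal(size=(N, D_IN))
true_w = rng.normal(size=D_IN)
y = (X @ true_w > 0).astype(float)

H = encode(X)

# The new "head": one fully connected layer -> scalar score in [0, 1],
# trained with logistic loss while the encoder stays frozen.
w = np.zeros(D_HID)
b = 0.0
lr = 0.5

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

for step in range(200):
    p = sigmoid(H @ w + b)
    grad = p - y                     # d(log-loss)/d(logit)
    w -= lr * (H.T @ grad) / N       # only the head's weights move
    b -= lr * grad.mean()

scores = sigmoid(H @ w + b)
acc = ((scores > 0.5) == (y > 0.5)).mean()
print(f"train accuracy of the score head: {acc:.2f}")
```

The point is just that the frozen features already carry the signal, so a small trained head is enough to turn them into a usable score.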
Well I think you actually need to train a “discriminator” model on rationality tests. Probably an encoder-only model like BERT, just to assign a score to thoughts. Then you do Monte Carlo tree search.
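A toy version of that discriminator-plus-MCTS loop, with a hand-written closeness score standing in for the trained BERT-style scorer and digit strings standing in for “thoughts” (the task and all names are made up for illustration):

```python
import math
import random

random.seed(0)

TARGET = 42
DIGITS = list(range(10))
MAX_LEN = 2

def score(thought):
    # Stub for the trained discriminator: in the real setup an encoder-only
    # model would score a chain of thought. Here: closeness of the number
    # spelled so far to TARGET, in (0, 1].
    if not thought:
        return 0.0
    value = int("".join(map(str, thought)))
    return 1.0 / (1.0 + abs(value - TARGET))

class Node:
    def __init__(self, thought):
        self.thought = thought
        self.children = {}
        self.visits = 0
        self.total = 0.0

def uct_child(node, c=1.4):
    # Standard UCT: exploit high average score, explore rarely visited children.
    return max(node.children.values(),
               key=lambda ch: ch.total / ch.visits
                   + c * math.sqrt(math.log(node.visits) / ch.visits))

def rollout(thought):
    # Random completion, then ask the "discriminator" for a score.
    while len(thought) < MAX_LEN:
        thought = thought + [random.choice(DIGITS)]
    return score(thought)

def mcts(iterations=2000):
    root = Node([])
    for _ in range(iterations):
        node, path = root, [root]
        # Selection: descend while the node is fully expanded.
        while len(node.thought) < MAX_LEN and len(node.children) == len(DIGITS):
            node = uct_child(node)
            path.append(node)
        # Expansion: add one untried continuation.
        if len(node.thought) < MAX_LEN:
            d = random.choice([d for d in DIGITS if d not in node.children])
            child = Node(node.thought + [d])
            node.children[d] = child
            node = child
            path.append(node)
        # Simulation + backpropagation.
        value = rollout(node.thought)
        for n in path:
            n.visits += 1
            n.total += value
    # Read off the most-visited path.
    node = root
    while node.children:
        node = max(node.children.values(), key=lambda ch: ch.visits)
    return node.thought

best = mcts()
print("best thought:", best)
```

The discriminator never generates anything; it only scores candidates, and the tree search concentrates its budget on the branches the scorer likes.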
Meta? The one that released Llama 3.3? The one that actually publishes its work? What are you talking about?
Why is it so hard to believe that DeepSeek is just yet another amazing paper in a long line of research done by everyone? Just because it’s Chinese? Everyone will adapt to this amazing innovation and then stagnate and throw compute at it until the next one. That’s how research works.
Not to mention China has a billion more people from which to build a research community…
I think “just writing better code” is a lot harder than you think. You actually have to do research first you know? Our universities and companies do research too. But I guarantee using R1 techniques on more compute would follow the scaling law too. It’s not either or.
Well the uncensored fine-tuning dataset is OSS
Nah, o1 has been out how long? They are already on o3 in the office.
It’s completely normal a year later for someone to copy their work and publish it.
It probably cost them less because they probably just distilled o1 XD. Or might have gotten insider knowledge (but honestly how hard could CoT fine-tuning possibly be?)
Yes but also it’s open source soooo
https://huggingface.co/mradermacher/DeepSeek-R1-Distill-Llama-70B-Uncensored-i1-GGUF
They are fine at programming numpy and sympy given an interface, and they are surprisingly good at explaining advanced symbolic math concepts. I wouldn’t expect them to be good at arithmetic, but a good reasoning model should be really good at mathematical reasoning.
Why not just use text notes and fzf?