• 0 Posts
  • 22 Comments
Joined 1 year ago
Cake day: July 1st, 2023


  • Fluent in Maltese (native) and English, conversational in Italian. I was one of the last generations to grow up without the internet, so we had to watch TV. And we’re in close proximity to Italy, so we could get their channels. It is much less common nowadays for kids here to also know Italian. But people my age have no idea what Dragon Ball Z sounds like in English. We all watched it in Italian.






  • 1 - A Markov chain takes only the previous tokens as input.

    2 - It uses a function (in the mathematical sense, so the same input always produces the same output, completely stateless) to generate a set of probabilities for what the next token might be.

    3 - The most probable token is picked, or randomness (temperature) is injected here to occasionally choose a different token.

    An LLM’s internals, the part that’s trained, are literally the function used in step 2. You could implement this function in a number of ways: for example, you could build a huge table and consult it, or you could generate it somehow. You could train a big neural network that takes previous tokens as input and outputs token probabilities. You could then enumerate its outputs for every possible permutation of inputs, and there’s your table. That would take far too much time and space, so we just run the function on demand instead. Exact same result. The network can be very smart and notice correlations, but ultimately it generates a (virtual) huge static table. This is a completely deterministic process; a trained neural network is still a (huge) mathematical function. So the big network they spend resources training is basically the function used in step 2.

    Step 3 is the cause of hallucinations. It’s the only nondeterministic part, and it’s not part of the LLM itself in any way. No matter how much smarter the neural network gets, hallucinations are introduced mainly in step 3. So no, they won’t be solving the LLM hallucination problem anytime soon.
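The three steps can be sketched in a few lines of Python. The probability table below is made up for illustration; it stands in for whatever the trained function from step 2 would output:

```python
import math
import random

def sample_next_token(probs, temperature=0.0, rng=random):
    """Step 3: pick the next token from the probabilities produced in step 2.

    temperature == 0 always picks the most probable token (deterministic);
    temperature > 0 reshapes the distribution and samples from it, so less
    probable tokens are occasionally chosen.
    """
    if temperature == 0:
        return max(probs, key=probs.get)
    # Rescale each probability by the temperature, then do a weighted draw.
    weights = {tok: math.exp(math.log(p) / temperature) for tok, p in probs.items()}
    r = rng.random() * sum(weights.values())
    for tok, w in weights.items():
        r -= w
        if r <= 0:
            return tok
    return tok  # guard against floating-point rounding

# A made-up distribution for the next token after some prefix:
probs = {"cat": 0.6, "dog": 0.3, "fish": 0.1}
print(sample_next_token(probs))                   # always "cat"
print(sample_next_token(probs, temperature=1.5))  # occasionally "dog" or "fish"
```

With temperature 0 the whole pipeline is deterministic end to end; any variety comes from the sampling in this step.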


  • And that is exactly how a predictive text algorithm works:

    • Some tokens go in.

    • They are processed by a deterministic, static statistical model, and a set of probabilities (always the same, deterministic, remember?) comes out.

    • Pick the word with the highest probability, append it to your initial string, and start over.

    • If you want variety, add some randomness and don’t always pick the most probable next token.

    Coincidentally, this is exactly how LLMs work: a big Markov chain, but with a novel lossy compression algorithm applied to its state transition table. The last point is also the reason why, if anyone says they can fix LLM hallucinations, they’re lying.
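As a concrete sketch of that loop, here is a word-level Markov chain in Python. The tiny corpus and the function names are made up for illustration:

```python
import random
from collections import Counter, defaultdict

def train(text):
    """Build the state transition table: word -> counts of following words."""
    table = defaultdict(Counter)
    words = text.split()
    for prev, nxt in zip(words, words[1:]):
        table[prev][nxt] += 1
    return table

def next_word(table, word, greedy=True, rng=random):
    """Greedy: always the most frequent follower. Otherwise a weighted sample."""
    counts = table[word]
    if greedy:
        return counts.most_common(1)[0][0]
    choices, weights = zip(*counts.items())
    return rng.choices(choices, weights=weights)[0]

corpus = "the cat sat on the mat the cat ate the fish"
table = train(corpus)
print(next_word(table, "the"))  # "cat", the most frequent follower of "the"
```

An LLM replaces the explicit table with a trained function that produces the same kind of distribution, just conditioned on a much longer prefix.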











  • Start with Python to do what? Learning a language is not the same as learning programming. Heck, most languages can be learned in an hour or two. Programming is another beast altogether.

    A person can learn to use a hammer in minutes, but that doesn’t make them a carpenter.

    Find a project you want to build, and start building it. Solve problems and learn along the way. Learning “Python” on its own will not help you learn programming in any way. Programming stuff will.