llama.cpp works on Windows too (or any OS for that matter), though Linux will give you better performance
Mistral models don’t have much of a filter, don’t worry lmao
There is no chance they are the ones training it. It costs hundreds of millions to get a decent model. Seems like they will be using Mistral, who have scraped pretty much 100% of the web to use as training data.
I use a similar feature on Discord quite extensively (custom emotes/stickers) and I don’t feel they are just a novelty. It allows us to have inside jokes / custom reactions to specific events, and I really miss it when trying out open-source alternatives.
To be fair to Gemini, even though it is worse than Claude and GPT, the weird answers were caused by bad engineering and not by bad model training. They were forcing the incorporation of the Google search results even though the base model would most likely have gotten it right.
They released a search engine where the model reads the first link before trying to answer your request.