I’ve just re-discovered ollama, and it’s come a long way: it has reduced the very difficult task of locally hosting your own LLM (and getting it running on a GPU) to simply installing a deb! It also works on Windows and Mac, so it can help everyone.
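To give a sense of how simple it is once installed: ollama listens on localhost:11434 by default, so (assuming you’ve already pulled a model, e.g. with `ollama pull llama3`, where llama3 is just an example) you can talk to it from a few lines of Python:

```python
# Minimal sketch: query a locally running ollama instance over its HTTP API.
# Assumes ollama is installed and a model ("llama3" here is just an example)
# has already been pulled with `ollama pull llama3`.
import requests

resp = requests.post(
    "http://localhost:11434/api/generate",  # ollama's default local endpoint
    json={
        "model": "llama3",            # any model you've pulled locally
        "prompt": "Why is the sky blue?",
        "stream": False,              # return one JSON object instead of streaming
    },
    timeout=300,
)
resp.raise_for_status()
print(resp.json()["response"])
```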
I’d like to see Lemmy become useful for specific technical sub-branches, instead of people having to hunt for the best existing community (which is subjective and makes information difficult to find), so I created !Ollama@lemmy.world for everyone to discuss, ask questions, and help each other out with ollama!
So please join, subscribe, and feel free to ask questions, post tips / projects, and help out where you can!
Thanks!
While I don’t think that llama.cpp is specifically a special risk, I think that running generative AI software in a container is probably a good idea. It’s a rapidly-moving field with a lot of people contributing a lot of code that very quickly gets run on a lot of systems by a lot of people. There’s been malware that’s shown up in extensions for (for example) ComfyUI. And the software really doesn’t need to poke around at outside data.
Also, because the software has to touch the GPU, it needs a certain amount of outside access. Containerizing that takes some extra effort.
https://old.reddit.com/r/comfyui/comments/1hjnf8s/psa_please_secure_your_comfyui_instance/
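If you wanted to do that kind of thing by hand, e.g. run the official ollama/ollama image yourself, here’s roughly what the GPU plumbing looks like. This is just a sketch using the Docker SDK for Python, and it assumes Docker plus the NVIDIA container toolkit are installed on the host:

```python
# Rough sketch of containerized GPU passthrough using the Docker SDK for Python
# (pip install docker). Assumes Docker and the NVIDIA container toolkit on the
# host; the ollama/ollama image is just an example workload.
import docker

client = docker.from_env()

container = client.containers.run(
    "ollama/ollama",
    detach=True,
    # This is the "extra effort": explicitly granting the container access to
    # the host's GPUs (the equivalent of `docker run --gpus all`).
    device_requests=[
        docker.types.DeviceRequest(count=-1, capabilities=[["gpu"]])
    ],
    ports={"11434/tcp": 11434},  # expose the API on the host
    volumes={"ollama": {"bind": "/root/.ollama", "mode": "rw"}},  # persist models
)
print(container.id)
```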
Ollama means sticking llama.cpp in a Docker container, and that is, I think, a positive thing.
If there were a close analog to ollama, like some software package that could take a given LLM model and run it in podman or Docker or something, I think that’d be great. But I think that putting the software in a container is probably a good move relative to running it uncontainerized.
I don’t understand.
Ollama is not actually Docker, right? It runs the same llama.cpp engine; it’s just embedded inside the wrapper app, not containerized. It has a Docker preset you can use, yeah.
And basically every LLM project ships a Docker container; I know for a fact that llama.cpp, TabbyAPI, Aphrodite, Lemonade, vLLM, and SGLang do. It’s pretty much standard. There are all sorts of wrappers around them too.
You are 100% right about security, though; in fact, there’s a huge concern with compromised Python packages. This one almost got me: https://pytorch.org/blog/compromised-nightly-dependency/
This is actually a huge advantage for llama.cpp, as it’s free of Python and external dependencies by design, very unlike ComfyUI, which pulls in a gazillion external repos. Theoretically the main llama.cpp repo could be compromised, but it’s a single, very well monitored point of failure, and literally every “outside” architecture and feature is implemented from scratch, making it harder to sneak stuff in.
I’m sorry, you are correct. The syntax and interface mirror Docker’s, and one can run ollama in Docker, so I’d thought that it was a thin wrapper around Docker, but I just went to check, and you are right: it’s not running in Docker by default. Sorry, folks! Guess now I’ve got one more piece of software to look into getting inside a container myself.
Try ramalama; it’s designed to run models inside OCI containers.