

In your defense, I’ve thought the same joke every time I’ve seen it lol


Funny to see someone else with an active distaste for his videos. He sets off predatory alarm bells in my head and feels smarmy to me.


Especially from a 7b model


“Run a local LLM like Claude!”
Look inside
“Run ollama”
Ollama will almost always be slower than running vLLM or llama.cpp directly; nobody should be suggesting it for anything agentic. On most consumer hardware, llama.cpp’s --cpu-moe flag alone is absurdly good, and worth the effort of learning llama.cpp instead of Ollama.
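For anyone curious, a minimal sketch of what that looks like with llama.cpp’s server (the model path is a placeholder; --cpu-moe keeps the MoE expert weights on the CPU while -ngl offloads the remaining layers to the GPU):

llama-server -m ./your-moe-model.gguf -ngl 99 --cpu-moe -c 8192 --port 8080

If that still doesn’t fit in VRAM, --n-cpu-moe N keeps only the expert layers of the first N blocks on the CPU, so you can tune the split to your hardware.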
Arch since 2018, Mint for a few years before that, Ubuntu from 2008-2014