Local LLMs

One of the things that brought me back to exploring Linux was the possibility of running a local large language model like ChatGPT.

I say “like” ChatGPT because I really had no idea how much computing power these things needed (or didn’t!). The machine I’m running here is nothing groundbreaking – although it is maxed out with 64 GB of RAM. Otherwise, it’s a Beelink mini PC with a six-core AMD Ryzen 5 CPU.

It’s an absolutely awesome little machine for the size and price. But it’s no server farm…

So when I fired up Ollama using the instructions I found here on the It’s FOSS blog, I wasn’t sure what to expect. But man – my mind has been blown. There are multiple models available, with different tunings and options.
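
And once it’s running, you aren’t limited to the terminal chat. Ollama serves a local REST API on port 11434 by default, so you can script against it. Here’s a minimal Python sketch using only the standard library; it assumes the Ollama server is already running and that a model has been pulled – the model name “llama3” below is just an illustration, swap in whichever one you downloaded.

```python
import json
import urllib.request

# Ollama's local REST API listens on http://localhost:11434 by default.
# Assumes the server is running and the model has been pulled already;
# "llama3" is only an example name here.
payload = {
    "model": "llama3",
    "prompt": "Explain what a quantized model is, in one paragraph.",
    "stream": False,  # ask for one complete JSON reply instead of a stream
}

req = urllib.request.Request(
    "http://localhost:11434/api/generate",
    data=json.dumps(payload).encode("utf-8"),
    headers={"Content-Type": "application/json"},
)

# The non-streaming response is a single JSON object whose
# "response" field holds the generated text.
with urllib.request.urlopen(req) as resp:
    print(json.loads(resp.read())["response"])
```

Nothing fancy, but it’s a nice reminder that the whole thing is just a local service on your own machine – no API keys, no cloud.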
