According to two web analytics services, chatgpt.com is currently the 6th most visited website in the world. That means a hell of a lot of people are asking it questions, and it’s getting a hell of a lot of data.
Even though OpenAI’s Data Processing Addendum, released in February 2025, states that OpenAI will “not “sell” […] or “share” […] Personal Data”, you might still want to keep absolute control over your data for peace of mind. Taking your interactions with large language models (LLMs) offline keeps the ball in your court.
But that’s not the only reason to take things offline. There are times when the internet fails you: web servers go down, and sometimes you just don’t have access to Wi-Fi. Having an LLM ready to use offline gives you a great backup for when things go wrong.
And then there’s curiosity. There’s a wealth of open-source LLMs to play and experiment with, not just ChatGPT! Not only are there dozens of general-purpose LLMs, but you can toy with specialised large language models like IBM’s Granite, built specifically for problem solving, or Meta’s Code Llama, designed to write and discuss code.
But how do you go about downloading an LLM to your laptop?
LLMs à la carte with Ollama
Get your offline chatbot dreams started with Ollama, a command line tool for building and interacting with open-source LLMs locally on your machine.
To make sure you can use it, you’ll need to check that your machine has the right specs. Even a small LLM will take up at least 1.5 GB of spare disk space and need at least 8 GB of RAM to run. If you want to experiment with some beefier models, you might need up to 20 GB of spare storage and 16 GB of RAM.
Once you’ve checked you’ve got the right specs, simply visit the Ollama website and follow the installation instructions for your operating system.
Actually using Ollama
Once you’ve downloaded Ollama, you can start using open-source LLMs almost right away! Browse through Ollama’s catalogue of models and find one that’s right for you (and your machine), then open up a terminal and type ollama run MODEL_NAME to download and start chatting with your first offline large language model!
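As a quick sketch of what that first session might look like (the model name here, llama3.2, is just one example from Ollama’s catalogue; swap in whichever model suits your machine):

```shell
# Download and start chatting with a model from Ollama's catalogue.
# The first run downloads the model weights; after that, the model
# loads from disk and works fully offline.
ollama run llama3.2
```

When the model finishes loading, Ollama drops you into an interactive prompt where you can chat directly in the terminal; type /bye to exit.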
There’s plenty more that Ollama can do, but for now, enjoy experimenting!
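For instance, a couple of subcommands are handy for keeping track of the models taking up your disk space (MODEL_NAME below is a placeholder for a model you’ve already downloaded):

```shell
# List the models you've downloaded so far, with their sizes.
ollama list

# Remove a model you no longer need, freeing up disk space.
ollama rm MODEL_NAME
```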
*Footnotes at https://github.com/benbutterworth/footnotes