Running AI Models on a Local Server
Configure AI Models (deepseek, llama, mistral, llava-llama, codestral, gemma) on Mobile, in the Browser, and in VSCode.
Configuring locally hosted AI models might seem challenging, but recent advancements have made it accessible to anyone with basic computer skills and a reasonably powerful GPU (at least 16 GB of memory). These models, capable of handling both text and image tasks, can run on a local server and be accessed from a variety of client devices: mobile phones, PCs, or programming IDEs.
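If you are not sure whether your GPU meets that bar, you can check its total memory from a terminal. The sketch below assumes an NVIDIA card with the nvidia-smi utility installed; other vendors ship their own equivalent tools.

```
# Print the GPU model and its total memory (NVIDIA cards only).
nvidia-smi --query-gpu=name,memory.total --format=csv
```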
Benefits:
- Offline access: the models remain available without an external internet connection.
- Privacy and security: your queries and data stay on your own machine.
- No extra cost beyond running the computer; no subscription is required.
- Flexibility to select and try different models, such as deepseek, llama, and llava.
There are two main steps: first, installing Ollama, and second, configuring the application layer, which can be set up on mobile phones, in web browsers, or in programming tools like VSCode.
Note: you will need a VPN connection to reach your local server when you are away from your home network; a quick connectivity check is sketched below.
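Once Ollama is installed and running (see the next section), a simple way to confirm the VPN route works is to query the server's Ollama API from the remote device. Ollama listens on port 11434 by default; the IP address below is a placeholder for your server's LAN address.

```
# 192.168.1.50 is a placeholder: use your server's LAN IP.
# A JSON list of installed models means the server is reachable.
curl http://192.168.1.50:11434/api/tags
```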
Install Ollama
Install Ollama by running this one-liner. If you want to update Ollama later, the same command applies:
curl -fsSL https://ollama.com/install.sh | sh
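Once the installer finishes, you can sanity-check the service and pull a first model. Since phones, browsers, and IDEs will connect over the network, Ollama also has to listen on all interfaces instead of only localhost; on Linux this is typically done by setting the OLLAMA_HOST environment variable in the systemd service. The model tag llama3.2 below is only an example; any model from the Ollama library works.

```
# Verify the install and download a first model.
ollama --version
ollama pull llama3.2     # example tag; pick any model you like
ollama run llama3.2      # interactive chat in the terminal

# Allow other devices on the LAN to reach the server:
# add the override below, then restart the service.
sudo systemctl edit ollama.service
#   [Service]
#   Environment="OLLAMA_HOST=0.0.0.0"
sudo systemctl restart ollama.service
```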