How to Install an LLM Locally: A Comprehensive Guide
Introduction
In this blog we will see how to install an LLM locally and use it for learning purposes. In fact, you can use it for more than learning, for example code generation. We will explore two options for installing an LLM locally.
Installing an LLM using Ollama:
Download Ollama: Download Ollama from its official website for your specific operating system.
Run the Model: You can download and run any supported open-source LLM. It is very easy: you just need to pull and run the model with the two simple commands below.
ollama pull llama3
ollama run llama3
The commands you can use with Ollama are documented on its GitHub page; a few of the most common ones are sketched below.
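These are standard Ollama CLI commands on recent versions (check the GitHub page for the exact set your version supports):

ollama list        # show the models you have downloaded
ollama ps          # show the models currently loaded in memory
ollama rm llama3   # remove a downloaded model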
Interact with the LLM: Now you can ask it any question, just as you would with ChatGPT.
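You can also pass a one-off prompt directly on the command line instead of starting the interactive session; the prompt text here is just an illustration:

ollama run llama3 "Explain what a linked list is in two sentences"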

Installing an LLM using LM Studio:
Download LM Studio: Download LM Studio from its official website.
Open LM Studio: Launch the LM Studio GUI.
Find and install the LLM: The GUI lists the available open-source models and shows whether each one is compatible with your machine's hardware. Download a model that is compatible with your hardware and use it.

Use the LLM: LM Studio also offers a chat window where you can interact with a downloaded LLM. First, load one of your downloaded models, then start chatting with it.

Additional Considerations:
- Hardware Requirements: You can run quantized 8B-parameter versions of many LLMs on a PC with 16 GB of RAM and an Intel i5 processor, although responses will be slow. LM Studio clearly shows whether a model is supported on your machine.
- Customization: Both Ollama and LM Studio offer some degree of customization, Ollama through a configuration file and LM Studio through its GUI.
- API support: Both also support API interaction, which can be used to build and test applications locally. You can use either Spring AI or LangChain to quickly build an LLM application; a rough sketch of calling the local APIs with curl is shown after this list.
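As a minimal sketch, assuming Ollama's server is running on its default port 11434 and LM Studio's local server is enabled on its default port 1234, you can exercise both APIs with curl. The model names and prompts below are only placeholders; the LM Studio model id must match a model you have loaded.

# Ollama's native API: generate a completion from a pulled model
curl http://localhost:11434/api/generate -d '{"model": "llama3", "prompt": "Write a haiku about local LLMs", "stream": false}'

# LM Studio's OpenAI-compatible local server: chat completion against a loaded model
curl http://localhost:1234/v1/chat/completions -H "Content-Type: application/json" -d '{"model": "llama3", "messages": [{"role": "user", "content": "Write a haiku about local LLMs"}]}'

These are the same local endpoints a Spring AI or LangChain application would typically point at when configured with a local base URL instead of a hosted provider.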