Deploy LLMs locally using Ollama

Official website — https://ollama.ai/

GitHub page — https://github.com/jmorganca/ollama

API documentation — https://github.com/jmorganca/ollama/blob/main/docs/api.md

Install Ollama locally

1. Click Download on the official website.

2. Choose your OS.

3. Open the app and click Next.

4. Install the command line tool.

5. Click Finish.

Deploy llama2 with the single command shown below; once the model is running, you can chat with it directly in the terminal.
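A minimal terminal session: the first invocation of ollama run downloads the model weights, then drops you into an interactive chat prompt.

# First run downloads the llama2 weights, then opens an interactive prompt
ollama run llama2
>>> Who are you?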

On the Ollama GitHub page, you can find all the LLMs that can be deployed using Ollama, along with their sizes and deployment commands.
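Every model on that list is deployed the same way; for example, mistral and codellama are two entries from the list (sizes and availability may have changed since this was written):

ollama run mistral
ollama run codellama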

Access the locally deployed model through the API
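While the app (or ollama serve) is running, Ollama listens on localhost port 11434. A minimal request against the /api/generate endpoint looks like this; setting "stream": false returns the completion as a single JSON object (see the API documentation linked above for the full request schema):

curl http://localhost:11434/api/generate -d '{
  "model": "llama2",
  "prompt": "Who are you?",
  "stream": false
}'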

Use LangChain to load the locally deployed model

Ollama cookbooks — https://github.com/jmorganca/ollama/tree/main/examples

Use the Python LangChain framework to load LLMs through the Ollama local API:

from langchain.llms import Ollama

# Connect to the local Ollama server (http://localhost:11434 by default)
llm = Ollama(model="llama2")

res = llm.predict("Who are you?")
print(res)
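This assumes the langchain package is installed (pip install langchain) and that the Ollama app, or ollama serve, is running in the background; the Ollama class connects to http://localhost:11434 by default.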
