Ollama on Windows: How to install and use it with OpenWebUI

Learning La-La Land
Ollama is one of the easiest ways to run large language models locally. Thanks to llama.cpp, it can run models on CPUs or GPUs, even older ones like my RTX 2070 Super. It provides a CLI and an OpenAI-compatible API, which you can use with clients such as OpenWebUI or directly from Python (see the sketch at the end of this description). In this video, you'll learn how to install Ollama, load models via the command line, and use OpenWebUI with it.

Contents:
00:00 AI Announcements this Week
00:33 About Ollama
01:28 Ollama Installation
02:07 Using the Ollama CLI
03:28 Installing OpenWebUI
04:04 Downloading LLaVA
04:33 Searching Ollama Models and Tags
05:32 Using OpenWebUI with Ollama
06:34 Wrapping Up

Links: #openaiapi
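Since Ollama's API is OpenAI-compatible, the official `openai` Python client can talk to a local model. Here is a minimal sketch, assuming Ollama is running on its default port 11434 and you've already pulled a model (LLaVA is used here to match the video; any pulled model works):

```python
# Minimal sketch: using Ollama's OpenAI-compatible endpoint from Python.
# Assumes Ollama is running locally on its default port (11434) and that a
# model has already been pulled, e.g. with `ollama pull llava`.
from openai import OpenAI

client = OpenAI(
    base_url="http://localhost:11434/v1",  # Ollama's OpenAI-compatible endpoint
    api_key="ollama",  # the client requires a key, but Ollama ignores it
)

response = client.chat.completions.create(
    model="llava",  # example model; substitute any model you've pulled
    messages=[
        {"role": "user", "content": "Describe what Ollama does in one sentence."}
    ],
)
print(response.choices[0].message.content)
```

This is the same endpoint OpenWebUI talks to, so anything you can do in the web UI you can also script this way.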
