Watch the video:
0:00 - Explanation
0:45 - Why this is unique
1:20 - Starting Vicuna & Oobabooga install
2:00 - Installing CPU-only Vicuna & Oobabooga (NEW)
3:00 - Vicuna CPU Desktop Shortcut (NEW)
3:45 - Using Vicuna & Oobabooga in CPU mode (NEW)
4:40 - GPU vs CPU-only install
5:38 - Installing Vicuna & Oobabooga for GPU & CPU
6:00 - Downloading GPU and/or CPU Vicuna model
7:07 - GPU vs CPU Vicuna speed
Vicuna offers “90%* of the quality of OpenAI ChatGPT and Google Bard” while being uncensored, locally hosted, and FAST (depending on your hardware). This video shows my updated install script, which should make life easy AND WORKS WITH CPU!
Install Oobabooga & Vicuna (UPDATED). UPDATED LINK COMING SOON
More info on Vicuna: https://vicuna.lmsys.org/
Oobabooga: https://github.com/oobabooga/text-generation-webui/
Previous video showing other features: see the earlier post or watch it on YouTube.
Downloading the models manually?
CPU: https://huggingface.co/eachadea/ggml-vicuna-13b-4bit/tree/main – all files go into “models\eachadea_ggml-vicuna-13b-4bit”
GPU: https://huggingface.co/anon8231489123/vicuna-13b-GPTQ-4bit-128g/tree/main – all files go into “models\anon8231489123_vicuna-13b-GPTQ-4bit-128g”
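Both target folders follow the same pattern: the Hugging Face repo id with the slash replaced by an underscore, placed under the webui’s models folder. A minimal Python sketch of that mapping (the `model_dir` helper is hypothetical, for illustration only):

```python
from pathlib import Path

def model_dir(repo_id: str, models_root: str = "models") -> Path:
    """Map a Hugging Face repo id (e.g. "owner/repo") to the
    text-generation-webui model folder "models/owner_repo".
    Hypothetical helper -- not part of the webui itself."""
    return Path(models_root) / repo_id.replace("/", "_")

# CPU model folder for the repo linked above
print(model_dir("eachadea/ggml-vicuna-13b-4bit").as_posix())
# → models/eachadea_ggml-vicuna-13b-4bit
```

So whichever repo you download from, drop every file from its `tree/main` page into the folder this naming scheme produces, and the webui should pick the model up on the next launch.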