
Shaw Talebi

59.5K subscribers

83K views · 2,721 likes · 2024/02/27

QLoRA—How to Fine-tune an LLM on a Single GPU (w/ Python Code)

Get exclusive access to AI resources and project ideas: the-data-entrepreneurs.kit.com/shaw

In this video, I discuss fine-tuning an LLM using QLoRA (i.e., Quantized Low-Rank Adaptation). Example code is provided for training a custom YouTube comment responder using Mistral-7b-Instruct.
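As a rough guide, here is a minimal sketch of what this kind of QLoRA setup typically looks like with Hugging Face transformers, bitsandbytes, and peft. The model revision and all hyperparameters below are illustrative assumptions, not necessarily what the video uses; see the Colab and GitHub links below for the actual code.

```python
import torch
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          BitsAndBytesConfig, TrainingArguments)
from peft import LoraConfig, get_peft_model, prepare_model_for_kbit_training

# Ingredients 1 & 2: load the base model with weights stored in 4-bit
# NormalFloat, plus double quantization of the quantization constants.
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",             # Ingredient 1: 4-bit NormalFloat
    bnb_4bit_use_double_quant=True,        # Ingredient 2: double quantization
    bnb_4bit_compute_dtype=torch.bfloat16, # de-quantize to bf16 for compute
)

base_model = "mistralai/Mistral-7B-Instruct-v0.2"  # assumed revision
model = AutoModelForCausalLM.from_pretrained(
    base_model, quantization_config=bnb_config, device_map="auto"
)
tokenizer = AutoTokenizer.from_pretrained(base_model)

# Ingredient 4: freeze the quantized weights and train small low-rank
# LoRA adapters attached to the attention projections.
model = prepare_model_for_kbit_training(model)
lora_config = LoraConfig(
    r=8,                                   # adapter rank (illustrative)
    lora_alpha=32,
    lora_dropout=0.05,
    target_modules=["q_proj", "v_proj"],
    bias="none",
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora_config)
model.print_trainable_parameters()         # typically <1% of weights train

# Ingredient 3: a paged optimizer spills optimizer state to CPU RAM when
# GPU memory spikes; these args would be passed to a transformers Trainer.
training_args = TrainingArguments(
    output_dir="shawgpt-ft",
    optim="paged_adamw_8bit",              # Ingredient 3: paged optimizer
    per_device_train_batch_size=4,
    gradient_accumulation_steps=4,
    num_train_epochs=10,
    learning_rate=2e-4,
    bf16=True,
)
```

Back-of-the-envelope: storing 7B parameters in 16-bit takes roughly 14 GB before gradients and optimizer state, while 4-bit storage is roughly 3.5 GB, which is what makes single-GPU fine-tuning feasible.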

More Resources:
▶️ Series Playlist:    • Large Language Models (LLMs)  
🎥 Fine-tuning with OpenAI:    • 3 Ways to Make a Custom AI Assistant ...  

📰 Read more: medium.com/towards-data-science/qlora-how-to-fine-…
💻 Colab: colab.research.google.com/drive/1AErkPgDderPW0dgE2…
💻 GitHub: github.com/ShawhinT/YouTube-Blog/tree/main/LLMs/ql…
🤗 Model: huggingface.co/shawhin/shawgpt-ft
🤗 Dataset: huggingface.co/datasets/shawhin/shawgpt-youtube-co…

[1] Fine-tuning LLMs:    • Fine-tuning Large Language Models (LL...  
[2] ZeRO paper: arxiv.org/abs/1910.02054
[3] QLoRA paper: arxiv.org/abs/2305.14314
[4] Phi-1 paper: arxiv.org/abs/2306.11644
[5] LoRA paper: arxiv.org/abs/2106.09685

--
Homepage: www.shawhintalebi.com/

Intro - 0:00
Fine-tuning (recap) - 0:45
LLMs are (computationally) expensive - 1:22
What is Quantization? - 4:49
4 Ingredients of QLoRA - 7:10
Ingredient 1: 4-bit NormalFloat - 7:28
Ingredient 2: Double Quantization - 9:54
Ingredient 3: Paged Optimizer - 13:45
Ingredient 4: LoRA - 15:40
Bringing it all together - 18:24
Example code: Fine-tuning Mistral-7b-Instruct for YT Comments - 20:35
What's Next? - 35:22
