Fine-tuning LLMs with PEFT and LoRA

In this video I look at how to use PEFT to fine-tune any decoder-style GPT model. It goes through the basics of LoRA fine-tuning and how to upload the result to the Hugging Face Hub.

My Links: Github:

Chapters:
00:00 - Intro
00:04 - Problems with fine-tuning
00:48 - Introducing PEFT
01:11 - Other cool PEFT techniques
01:51 - LoRA Diagram
03:25 - Hugging Face PEFT Library
04:06 - Code Walkthrough
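A minimal sketch of the workflow described above: wrap a decoder-style causal LM with LoRA adapters via the PEFT library, train only the adapter weights, and push the adapter to the Hugging Face Hub. The base model, dataset, repo name, and hyperparameters here are illustrative assumptions, not the video's exact values.

```python
from datasets import load_dataset
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          DataCollatorForLanguageModeling, Trainer, TrainingArguments)
from peft import LoraConfig, TaskType, get_peft_model

base_model = "bigscience/bloom-560m"   # assumption: any decoder-only causal LM works
tokenizer = AutoTokenizer.from_pretrained(base_model)
model = AutoModelForCausalLM.from_pretrained(base_model)

# LoRA: freeze the base weights and learn small low-rank update matrices instead.
lora_config = LoraConfig(
    task_type=TaskType.CAUSAL_LM,
    r=8,              # rank of the low-rank decomposition
    lora_alpha=32,    # scaling applied to the adapter output
    lora_dropout=0.05,
)
model = get_peft_model(model, lora_config)
model.print_trainable_parameters()     # typically well under 1% of the full model

# Small illustrative dataset; replace with your own corpus.
dataset = load_dataset("Abirate/english_quotes", split="train")
dataset = dataset.map(lambda x: tokenizer(x["quote"], truncation=True), batched=True)

trainer = Trainer(
    model=model,
    train_dataset=dataset,
    args=TrainingArguments(
        output_dir="lora-out",
        per_device_train_batch_size=4,
        num_train_epochs=1,
        learning_rate=2e-4,
    ),
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
)
trainer.train()

# Only the small adapter weights are uploaded, not the full base model.
model.push_to_hub("your-username/bloom-560m-lora")   # hypothetical repo name
```

To reuse the adapter later, load the base model again and attach the uploaded weights with `PeftModel.from_pretrained(model, "your-username/bloom-560m-lora")`.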
