Master Language Models Fine-Tuning with Hugging Face
From beginner to expert in understanding, implementing, and optimizing fine-tuning techniques for large language models.
Course Overview
🎯 Learning Goals
Take learners from beginner to expert in fine-tuning large language models
⚡ Fast-Track Format
A smol but fast course designed for software developers and engineers
7-Unit Curriculum
Unit 1: Instruction Tuning
Supervised fine-tuning (SFT), chat templates, and instruction following
Unit 2: Evaluation
Benchmarks and custom domain evaluation
Unit 3: Preference Alignment
Alignment from preference data, e.g. with Direct Preference Optimization (DPO)
Unit 4: Reinforcement Learning
Optimize model behavior with policy-based reinforcement learning
Unit 5: Vision Language Models
Multimodal adaptation and use
Unit 6: Synthetic Data
Generate datasets for custom domains
Unit 7: Award Ceremony
Showcase and celebration
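Unit 1's chat templates are worth previewing concretely. The sketch below is a minimal, illustrative rendering of a ChatML-style template in plain Python; it is not the course's code, and in practice Transformers applies each model's own template via `tokenizer.apply_chat_template`.

```python
# Minimal sketch of what a chat template does: it serializes a list of
# role-tagged messages into the single string the model is trained on.
# The ChatML-style markers below are one common convention; real templates
# vary per model and live in the tokenizer config.

def apply_chatml_template(messages, add_generation_prompt=False):
    """Render messages like [{"role": "user", "content": "Hi"}] as ChatML."""
    text = ""
    for msg in messages:
        text += f"<|im_start|>{msg['role']}\n{msg['content']}<|im_end|>\n"
    if add_generation_prompt:
        # Cue the model to produce the assistant's reply next.
        text += "<|im_start|>assistant\n"
    return text

prompt = apply_chatml_template(
    [
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "What is SFT?"},
    ],
    add_generation_prompt=True,
)
print(prompt)
```

During SFT, training pairs are rendered through the template so that inference-time prompts match the format the model saw during fine-tuning.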
What You Need
Prerequisites
Essential knowledge and skills to succeed in the course
💻 Technical Requirements
Hardware and software needed for hands-on practice
Free Certification Paths
Fundamentals Certificate
Quick achievement for core concepts
Certificate of Completion
Full mastery demonstration
Course Philosophy
"Smol but fast - a concentrated learning experience that gets software developers and engineers from beginner to expert in LLM fine-tuning through practical, hands-on assignments and real-world challenges." – Ben Burtenshaw, ML Engineer at Hugging Face
What You'll Learn
Study instruction tuning, supervised fine-tuning, and preference alignment in theory and practice.
🧑‍💻 Learn to use established fine-tuning frameworks and tools like TRL and Transformers.
💾 Share your projects and explore fine-tuning applications created by the community.
Participate in challenges where you will evaluate your fine-tuned models against those of other students.
Earn a certificate of completion by finishing the assignments.
🧠 Understand how to fine-tune language models effectively and build specialized AI applications using the latest fine-tuning techniques.
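To make Unit 3's preference alignment concrete, here is an illustrative toy computation of the DPO objective for a single preference pair. This is a numeric sketch under my own naming, not TRL's implementation; in the course you would use TRL's `DPOTrainer` over batches of token-level log-probabilities.

```python
import math

# Illustrative sketch of the DPO loss for one preference pair.
# Inputs are summed log-probabilities of the chosen and rejected responses
# under the policy being tuned and under a frozen reference model.

def dpo_loss(policy_chosen_logp, policy_rejected_logp,
             ref_chosen_logp, ref_rejected_logp, beta=0.1):
    # Implicit reward margin: how much more the policy prefers the chosen
    # response (relative to the reference model) than the rejected one.
    margin = (policy_chosen_logp - ref_chosen_logp) - (
        policy_rejected_logp - ref_rejected_logp
    )
    # Negative log-sigmoid of the scaled margin; minimized when the policy
    # ranks the chosen response above the rejected one.
    return -math.log(1 / (1 + math.exp(-beta * margin)))

# Policy already prefers the chosen response relative to the reference:
low = dpo_loss(-10.0, -20.0, -15.0, -15.0)
# Policy prefers the rejected response instead: the loss is higher.
high = dpo_loss(-20.0, -10.0, -15.0, -15.0)
print(low, high)  # low < high
```

The `beta` hyperparameter controls how strongly the policy is pushed away from the reference model, the same trade-off you tune during preference alignment.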