Fine-Tuning gpt-oss with Hugging Face Transformers 🚀
In this tutorial, we will learn how to fine-tune OpenAI's gpt-oss model using the Hugging Face ecosystem. We will work with the transformers, datasets, and TRL libraries.
Step 1: Environment Setup 🛠️
First, we need to install the required libraries. Support for gpt-oss only landed in recent releases of transformers, so install up-to-date versions of the stack rather than pinning older ones:
pip install -U transformers datasets accelerate peft trl bitsandbytes
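To confirm the environment is set up, you can print the installed versions (a quick sanity check, nothing more):
python -c "import transformers, trl; print(transformers.__version__, trl.__version__)"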
Step 2: Load Model and Tokenizer 🧠
Now we will load the gpt-oss model and its tokenizer. gpt-oss is published in 20B and 120B variants; we use the smaller 20B model here, with 4-bit quantization to keep memory usage down.
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM, BitsAndBytesConfig

# gpt-oss is released in 20B and 120B sizes; we use the 20B variant
model_id = "openai/gpt-oss-20b"

# Load the weights in 4-bit, with bf16 compute, to reduce GPU memory usage
quantization_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_compute_dtype=torch.bfloat16
)

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    quantization_config=quantization_config,
    device_map="auto"  # place layers on available GPUs automatically
)
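Before moving on, it is worth checking how much memory the quantized model actually occupies; get_memory_footprint is a standard transformers helper:
# Approximate memory used by the model weights, in GB
print(f"Model memory footprint: {model.get_memory_footprint() / 1e9:.1f} GB")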
Step 3: Dataset Preparation 📚
For fine-tuning we will use the databricks/databricks-dolly-15k dataset, formatting each sample into a simple instruction-style prompt.
from datasets import load_dataset

dataset = load_dataset("databricks/databricks-dolly-15k", split="train")

def format_dolly(sample):
    instruction = f"### Instruction\n{sample['instruction']}"
    context = f"### Context\n{sample['context']}" if len(sample["context"]) > 0 else None
    response = f"### Answer\n{sample['response']}"
    # Join the non-empty parts, separated by blank lines
    prompt = "\n\n".join([i for i in [instruction, context, response] if i is not None])
    return prompt

# Format one example to inspect the result
print(format_dolly(dataset[0]))
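It also helps to know how long the formatted prompts are in tokens, so the sequence length configured in the next step is not silently truncating data. A small check over the first 100 samples, reusing the tokenizer from Step 2:
# Token lengths of the first 100 formatted prompts
lengths = [len(tokenizer(format_dolly(s)).input_ids) for s in dataset.select(range(100))]
print(f"max: {max(lengths)} tokens, mean: {sum(lengths) / len(lengths):.0f} tokens")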
Step 4: Fine-Tune with SFTTrainer ⚙️
Now we will train the model with the SFTTrainer from the TRL library, using LoRA (Low-Rank Adaptation) for parameter-efficient fine-tuning. TRL's SFTConfig, a drop-in extension of TrainingArguments, carries both the training hyperparameters and the maximum sequence length.
from peft import LoraConfig
from trl import SFTConfig, SFTTrainer

lora_config = LoraConfig(
    r=16,
    lora_alpha=32,
    lora_dropout=0.05,
    bias="none",
    task_type="CAUSAL_LM",
    # Target all linear layers, so we don't rely on PEFT knowing
    # the gpt-oss module names out of the box
    target_modules="all-linear",
)

training_args = SFTConfig(
    output_dir="./gpt-oss-20b-dolly-finetuned",
    num_train_epochs=1,
    per_device_train_batch_size=1,
    gradient_accumulation_steps=4,  # effective batch size of 4
    learning_rate=2e-4,
    logging_steps=10,
    max_steps=100,    # for a quick demo; remove for full training
    max_length=2048,  # called max_seq_length in older TRL releases
)

trainer = SFTTrainer(
    model=model,
    args=training_args,
    train_dataset=dataset,
    peft_config=lora_config,
    formatting_func=format_dolly,
)
trainer.train()
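Only the small LoRA adapter matrices are updated during training. PEFT-wrapped models expose a helper that shows exactly how few parameters that is:
# Print trainable vs. total parameter counts for the LoRA-wrapped model
trainer.model.print_trainable_parameters()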
Step 5: Inference with the Fine-Tuned Model ✅
After training, let's see how our fine-tuned model performs. The trainer attaches the LoRA adapters to the loaded model in place, so we can generate with trainer.model directly instead of reloading a checkpoint from disk.
# The LoRA adapters are already attached to the model by the trainer
ft_model = trainer.model

text = "### Instruction\nWhat is the capital of France?\n\n### Answer\n"
inputs = tokenizer(text, return_tensors="pt").to(ft_model.device)

outputs = ft_model.generate(**inputs, max_new_tokens=50)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
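To see what fine-tuning actually changed, you can temporarily disable the adapter and generate the same prompt with the base weights; disable_adapter is a context manager provided by PEFT:
# Compare: same prompt with the LoRA adapter switched off
with ft_model.disable_adapter():
    base_outputs = ft_model.generate(**inputs, max_new_tokens=50)
print(tokenizer.decode(base_outputs[0], skip_special_tokens=True))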
You should see that the fine-tuned model follows instructions more closely than before.
Step 6: Save the Model and Push It to the Hub ☁️
Finally, you can push your fine-tuned model to a Hugging Face Hub repository to share it with the community. To do this, first log in with the Hugging Face CLI.
# Log in from the terminal
huggingface-cli login
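If you are working inside a notebook, you can also log in programmatically; huggingface_hub provides a login helper that prompts for an access token:
# Alternative: log in from Python (e.g. in a notebook)
from huggingface_hub import login
login()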
Once logged in, you can save and push the model using the commands below. The Hub repository name is derived from the output directory, or from hub_model_id if you set it in the training config.
# Save the adapter weights locally
trainer.save_model(training_args.output_dir)

# Push to the Hugging Face Hub; the repo name defaults to the
# output_dir name unless hub_model_id is set in SFTConfig
trainer.push_to_hub()
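Once the adapter is on the Hub, it can be loaded back on top of the base model from anywhere. A minimal sketch, assuming the adapter was pushed to your-username/gpt-oss-20b-dolly-finetuned (replace with your actual repo id):
import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig
from peft import PeftModel

# Reload the quantized base model, then apply the published LoRA adapter
base_model = AutoModelForCausalLM.from_pretrained(
    "openai/gpt-oss-20b",
    quantization_config=BitsAndBytesConfig(load_in_4bit=True, bnb_4bit_compute_dtype=torch.bfloat16),
    device_map="auto",
)
reloaded = PeftModel.from_pretrained(base_model, "your-username/gpt-oss-20b-dolly-finetuned")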
💡 Summary
Congratulations! You have successfully fine-tuned OpenAI's `gpt-oss` model with Hugging Face Transformers. This workflow lets you specialize the model on any custom dataset of your own.