Fine-tune Gemma 3 270M for emoji generation 🚀
This notebook fine-tunes Gemma to translate text into emoji using Quantized Low-Rank Adaptation (QLoRA) via the Hugging Face Transformer Reinforcement Learning (TRL) library, which helps reduce memory usage and speed up the fine-tuning process.
When training Gemma 3 270M on a Google Colab T4 GPU accelerator, the whole process can take less than 10 minutes end to end. Run each code snippet to:
- Set up the Colab environment
- Prepare the dataset for fine-tuning
- Load and test the base Gemma 3 270M model
- Fine-tune the model
- Test, evaluate, and save the model for further use
Set up the development environment 🛠️
The first step is to install the required libraries using the `pip` package installer.
You may need to restart your session (runtime) to use the newly installed libraries.
!pip install -qqq -U bitsandbytes transformers peft accelerate trl
!pip install -qqq datasets einops matplotlib
Enable Hugging Face permissions 🔐
To use Gemma models, you must accept the model usage license and create an Access Token:
- Accept the license on the model page.
- Obtain a valid Access Token with 'Write' access (very important!)
- Create a new Colab secret in the left toolbar. Specify `HF_TOKEN` as the 'Name', add your unique token as the 'Value', and toggle on 'Notebook access'.
from huggingface_hub import notebook_login
notebook_login()
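If you prefer not to paste the token interactively, a minimal alternative sketch is to read the Colab secret you created above (assuming you named it `HF_TOKEN`) and log in programmatically:
from google.colab import userdata
from huggingface_hub import login

# Read the HF_TOKEN Colab secret and authenticate without the interactive widget
login(token=userdata.get("HF_TOKEN"))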
Load the dataset 📚
Hugging Face hosts a large collection of datasets for training and evaluating models. If you are not using a custom dataset, you can load a premade dataset containing examples of text and their corresponding emoji translations.
If you want to use your own custom dataset, skip ahead to the next step.
from datasets import load_dataset
dataset = load_dataset("kr15t3n/g-emoji")
dataset["train"]
Upload a custom dataset ⬆️
If you have already loaded a dataset, skip this step.
You can customize Gemma 3 270M to use specific emoji by creating a spreadsheet containing a text-to-emoji dataset structured as key-value pairs. If you want to encourage memorization of a specific emoji, we recommend providing 10-20 examples of that emoji paired with different text variations.
Use the premade dataset as a template for building your own, then upload it to the Files folder in the left toolbar. Get its path by right-clicking the file, and point `custom_dataset_path` to it.
import pandas as pd
from datasets import Dataset
# custom_dataset_path = "Emoji Translation Dataset - Dataset.csv" # Replace with your custom dataset path
# custom_dataset = pd.read_csv(custom_dataset_path)
# dataset = Dataset.from_pandas(custom_dataset)
# dataset = dataset.train_test_split(test_size=0.2) # Split your dataset into train and test sets
# dataset["train"]
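If you just want to see the expected structure before building a full spreadsheet, here is a minimal sketch that builds a toy dataset in memory with the same 'text' and 'emoji' columns (the rows below are made-up examples, not part of the premade dataset):
from datasets import Dataset

# Hypothetical rows illustrating the expected "text"/"emoji" column structure
toy_rows = {
    "text": ["good morning", "let's grab pizza", "I love music", "time to sleep", "going for a run"],
    "emoji": ["🌅☀️", "🍕😋", "🎶❤️", "😴🛏️", "🏃💨"],
}
toy_dataset = Dataset.from_dict(toy_rows).train_test_split(test_size=0.2)
print(toy_dataset["train"][0])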
Load the model 🧠
You can access Gemma 3 270M from the Hugging Face Hub after accepting the license terms. The instruction-tuned version of the model has already been trained to follow directions, and with fine-tuning you will now adapt it to a new task.
If you are using a GPU runtime, the device should print as `cuda`. If you haven't already, switch your Colab to the free T4 GPU runtime for faster fine-tuning.
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM, BitsAndBytesConfig
model_id = "google/gemma-3-270m-it"
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16
)
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, quantization_config=bnb_config, device_map={"":0})
print(f"Device: {model.device}")
Format the training dataset 📝
Now that you have loaded your data, format the training dataset into prompts that pair each text input with its emoji output, along with an instruction that directs the model. This helps the model learn how to interpret the 'text' and 'emoji' columns from your dataset.
def format_dataset(sample):
    return {
        "text": f"Translate this text to emoji: {sample['text']} \n {sample['emoji']}"
    }
dataset["train"] = dataset["train"].map(format_dataset)
dataset["test"] = dataset["test"].map(format_dataset)
print(dataset["train"][0]["text"])
Recommended: Test the base model 🧪
First, let's check how the base model responds to the "Translate this text to emoji" instruction.
Try testing it a few times.
The base model's output may not live up to your expectations, and that's okay!
Gemma 3 270M was designed for task specialization, which means it can improve at specific tasks when trained on representative examples. Let's fine-tune the model for more reliable outputs.
text_to_translate = "I am so happy to be learning about fine-tuning Gemma!"
input_ids = tokenizer(text_to_translate, return_tensors="pt").to(model.device)
outputs = model.generate(**input_ids, max_new_tokens=50)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
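If you want to prompt the instruction-tuned checkpoint the way it expects, a minimal variation (assuming the tokenizer ships a chat template, as the -it checkpoints do) wraps the request in a user message:
# Optional: prompt via the chat template instead of raw text
messages = [
    {"role": "user", "content": f"Translate this text to emoji: {text_to_translate}"}
]
chat_inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)
chat_outputs = model.generate(chat_inputs, max_new_tokens=50)
print(tokenizer.decode(chat_outputs[0], skip_special_tokens=True))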
Fine-tune the model ⚙️
Hugging Face TRL provides tools for training and fine-tuning LLMs with memory-efficient techniques such as QLoRA (Quantized Low-Rank Adaptation), which trains adapters on top of a frozen, quantized version of the model.
Configure the tuning job 🛠️
Define the training configuration for the Gemma 3 base model:
- `BitsAndBytesConfig` to quantize the model for memory efficiency
- `LoraConfig` for parameter-efficient fine-tuning
- `SFTConfig` for supervised fine-tuning
from peft import LoraConfig
from trl import SFTConfig
peft_config = LoraConfig(
    r=8,
    target_modules=["q_proj", "o_proj", "k_proj", "v_proj", "gate_proj", "up_proj", "down_proj"],
    task_type="CAUSAL_LM",
)
training_arguments = SFTConfig(
    per_device_train_batch_size=1,
    gradient_accumulation_steps=4,
    optim="paged_adamw_32bit",
    lr_scheduler_type="cosine",
    logging_steps=1,
    learning_rate=2e-4,
    max_steps=100,
    output_dir="gemma-3-270m-it-emoji",
    # push_to_hub=True, # Uncomment to push to Hugging Face Hub
    report_to="none",
    dataset_text_field="text",
    num_train_epochs=1,
    packing=True,
    max_seq_length=512,
    eval_strategy="steps",
    eval_steps=10,
    save_steps=10,
    load_best_model_at_end=True,
    metric_for_best_model="eval_loss",
    greater_is_better=False,
)
Start training 🚀
The `SFTTrainer` tokenizes the datasets and trains the base model using the hyperparameters from the previous step.
Training time depends on several factors, such as the size of your dataset or the number of epochs. Using a T4 GPU, it takes roughly 10 minutes for 1000 training examples. If training is slow, check that you are using a T4 GPU in Colab.
The LoRA adapters for each training checkpoint (epoch) are saved to your temporary Colab session storage. You can then evaluate the training and validation loss metrics to choose which adapters to merge with the model.
from trl import SFTTrainer
trainer = SFTTrainer(
    model=model,
    tokenizer=tokenizer,
    args=training_arguments,
    peft_config=peft_config,
    train_dataset=dataset["train"],
    eval_dataset=dataset["test"],
)
trainer.train()
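To see which checkpoints were written to session storage, you can list the output directory; the Transformers Trainer names them checkpoint-<step>:
import os

# List the checkpoint folders saved during training (e.g. checkpoint-10, checkpoint-20, ...)
checkpoints = sorted(
    d for d in os.listdir(training_arguments.output_dir) if d.startswith("checkpoint-")
)
print(checkpoints)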
Plot the training results 📈
To evaluate the model, you can use Matplotlib to plot the training and validation losses and visualize these metrics over training steps or epochs. This helps you monitor the training process and make informed decisions about hyperparameters or early stopping.
Training loss measures the error on the data the model was trained on. Validation loss measures the error on a separate dataset the model has not seen before. Monitoring both helps detect overfitting (when the model performs well on training data but poorly on unseen data).
- validation loss >> training loss: overfitting
- validation loss > training loss: some overfitting
- validation loss < training loss: some underfitting
- validation loss << training loss: underfitting
If your task requires memorization of specific examples, or generating specific emoji for a given text, overfitting can be beneficial.
import matplotlib.pyplot as plt
metrics = trainer.state.log_history
train_losses = [entry['loss'] for entry in metrics if 'loss' in entry]
eval_losses = [entry['eval_loss'] for entry in metrics if 'eval_loss' in entry]
steps = [entry['step'] for entry in metrics if 'loss' in entry]
plt.figure(figsize=(10, 6))
plt.plot(steps[:len(train_losses)], train_losses, label='Training Loss')
plt.plot(steps[:len(eval_losses)], eval_losses, label='Validation Loss')
plt.xlabel('Steps')
plt.ylabel('Loss')
plt.title('Training and Validation Loss')
plt.legend()
plt.grid(True)
plt.show()
Merge the adapters 🔄
Once trained, you can merge the LoRA adapters with the model. You can choose which adapters to merge by specifying a training checkpoint folder; otherwise it defaults to the last epoch.
- For better task generalization, choose the most underfit checkpoint (validation loss < training loss)
- For better memorization of specific examples, choose the most overfit checkpoint (validation loss > training loss)
from peft import AutoPeftModelForCausalLM

# Load the base model together with the trained LoRA adapters
merged_model = AutoPeftModelForCausalLM.from_pretrained(
    training_arguments.output_dir,
    torch_dtype=torch.bfloat16
)
# Fold the adapter weights into the base model so it can be saved as a standalone model
merged_model = merged_model.merge_and_unload()

merged_model_id = "gemma-3-270m-it-emoji-merged"
merged_model.save_pretrained(merged_model_id)
tokenizer.save_pretrained(merged_model_id)
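If you would rather merge the adapters from a specific checkpoint instead of the default, a minimal sketch follows; the step number 50 is only an illustration, so pick one of the folders listed after training:
# Hypothetical example: merge a specific checkpoint (replace 50 with your chosen step)
checkpoint_path = f"{training_arguments.output_dir}/checkpoint-50"
specific_model = AutoPeftModelForCausalLM.from_pretrained(
    checkpoint_path,
    torch_dtype=torch.bfloat16,
).merge_and_unload()
specific_model.save_pretrained("gemma-3-270m-it-emoji-merged-checkpoint-50")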
Test the fine-tuned model ✅
Let's compare the performance of our fine-tuned model against the base model! Test a few inputs by updating `text_to_translate`.
Does the model output the emoji you expected?
If you are not getting the desired results, you can try training the model with different hyperparameters, or update your training dataset to include more representative examples.
Once you are happy with the results, you can save your model to the Hugging Face Hub.
text_to_translate = "I am so happy to be learning about fine-tuning Gemma!"
input_ids = tokenizer(text_to_translate, return_tensors="pt").to(merged_model.device)
outputs = merged_model.generate(**input_ids, max_new_tokens=50)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
Save your model and upload it to the Hugging Face Hub ☁️
You now have a customized Gemma 3 270M model! 🎉
Upload it to a repository on the Hugging Face Hub so you can easily share your model or access it later.
# merged_model.push_to_hub(merged_model_id)
# tokenizer.push_to_hub(merged_model_id)
Summary and next steps ➡️
This notebook covered how to efficiently fine-tune Gemma 3 270M for emoji generation. To prepare it for on-device deployment, continue with conversion and quantization. You can follow these steps:
- Convert it for use with the MediaPipe LLM Inference API
- Convert it for use with Transformers.js via ONNX Runtime