Project Tutorial • On-Device AI

Build an On-Device RAG System with Gemma Models 📱

A Retrieval-Augmented Generation (RAG) system matters because it combines the strengths of information retrieval and generative AI to produce answers that are more accurate, up to date, and explainable. Unlike standard language models, which rely only on pre-trained knowledge, RAG systems can dynamically access external data sources, ensuring that responses are grounded in factual, current information.

Gemma models are ideal for this task because they are lightweight yet highly capable, which makes on-device deployment possible without compromising performance. In this post we walk step by step through loading a PDF file, extracting and chunking its text, performing similarity matching, and using the Gemma 3 model to generate context-aware answers to the user's questions about the document.

Step 1: Extract Text from the PDF File

You can extract text from a PDF on a mobile device using the iText Core library. The snippet below reads a PDF stored in the assets folder and extracts its text:

context.assets.open(assetFileName).use { inputStream ->
    val pdfReader = PdfReader(inputStream)
    val pdfDocument = PdfDocument(pdfReader)

    val text = StringBuilder()
    val numberOfPages = pdfDocument.numberOfPages

    // Extract text from all pages (limit to the first n pages to avoid overwhelming the device)
    val pagesToProcess = minOf(numberOfPages, 100)

    for (page in 1..pagesToProcess) {
        val pageText =
            PdfTextExtractor.getTextFromPage(pdfDocument.getPage(page))
        if (pageText.isNotBlank()) {
            text.append("## Page $page\n")
            text.append(pageText.trim())
            text.append("\n\n")
        }
    }

    pdfDocument.close()

    val result = text.toString().trim()
    result.ifBlank {
        "[PDF Document: $assetFileName - No readable text content found]"
    }
}

Step 2: Split the Text into Chunks (Tokenization)

Tokenization means breaking text into smaller pieces (tokens). We use the Deep Java Library (DJL) API for this. Each text chunk must be no larger than the EmbeddingGemma model's maximum input size; a sketch of the tokenizer adapter the chunker relies on follows the snippet below.

// Load the tokenizer
private fun loadTokenizer() {
    try {
        tokenizer =
            HuggingFaceTokenizer.newInstance(Paths.get("/data/local/tmp/tokenizer_embedding_300m.json"))
    } catch (e: Exception) {
        Log.e("GemmaTokenizer", "Failed to load tokenizer", e)
    }
}

// Set up the chunker
val chunker = ChunkerHelper.RecursiveTextChunker(
    tokenizer = tokenizerAdapter,
    maxChunkTokens = 256, // chunk size in tokens
    overlapTokens = 40,  // overlap in tokens
    separators = listOf("\n\n", "\n", ". ", " ", "")
)

// Create the chunks
val chunks = chunker.createChunks(fileTextContent ?: "")
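
The chunker above takes a tokenizerAdapter that isn't defined in this post. A minimal sketch of what it could look like, assuming the chunker only needs a token count for a piece of text; the class name and method below are hypothetical, so adjust them to whatever interface ChunkerHelper actually expects:

// Hypothetical adapter: wraps the DJL tokenizer so the chunker can count tokens.
// Adjust the shape of this class to match ChunkerHelper's actual contract.
class GemmaTokenizerAdapter(private val tokenizer: HuggingFaceTokenizer) {
    fun countTokens(text: String): Int = tokenizer.encode(text).ids.size
}

val tokenizerAdapter = GemmaTokenizerAdapter(tokenizer!!)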

Step 3: Create and Store Embeddings for Each Text Chunk

Run the embedding model on each text chunk created in the previous step and store the resulting vectors in a file or a vector database. This step only needs to be done once per PDF file.

val embeddingsMap = HashMap<String, FloatArray>()

chunks.forEach { sentence ->
    val embedding = runEmbedding(sentence)
    if (embedding.isNotEmpty()) {
        embeddingsMap[sentence] = embedding
    }
}

// Save the embeddings to a file
ObjectOutputStream(FileOutputStream(embeddingsFile)).use { stream ->
    stream.writeObject(embeddingsMap)
}
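
At query time, the stored map can be read back into memory with the matching deserialization call. A minimal sketch, assuming embeddingsFile is the same file written above:

// Load previously stored chunk embeddings back into memory
@Suppress("UNCHECKED_CAST")
val storedEmbeddings = ObjectInputStream(FileInputStream(embeddingsFile)).use { stream ->
    stream.readObject() as HashMap<String, FloatArray>
}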

Step 4: Generate an Embedding for the User's Input

For each user query, we generate an embedding with EmbeddingGemma (a .tflite model) running on the LiteRT framework.

private fun runEmbedding(query: String): FloatArray {
    if (tokenizer == null || interpreter == null) return floatArrayOf()

    // EmbeddingGemma expects a task-specific prefix for retrieval queries
    val prompt = "task: search result | query: "
    val fullInput = prompt + query
    val encoding = tokenizer!!.encode(fullInput)

    // Truncate or zero-pad the token ids to the model's fixed sequence length
    val sequenceLength = 256
    val truncatedIds = encoding.ids.take(sequenceLength)
    val paddedIds = IntArray(sequenceLength) { 0 }
    truncatedIds.forEachIndexed { i, id -> paddedIds[i] = id.toInt() }

    // Run inference: input is a [1, 256] array of token ids, output is a [1, 768] embedding
    val inputArray = arrayOf(paddedIds)
    val outputBuffer = TensorBuffer.createFixedSize(intArrayOf(1, 768), DataType.FLOAT32)

    interpreter?.run(inputArray, outputBuffer.buffer)
    return outputBuffer.floatArray
}
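
The interpreter used above has to be created from the EmbeddingGemma .tflite file before runEmbedding can run. A minimal sketch using the standard TensorFlow Lite / LiteRT Interpreter API; the model file name and location are assumptions, mirroring the tokenizer path used earlier:

// Sketch: load the EmbeddingGemma .tflite model into an Interpreter.
// The path below is an assumption; point it at wherever you deploy the model.
private fun loadEmbeddingModel() {
    try {
        val modelFile = File("/data/local/tmp/embeddinggemma_300m.tflite")
        interpreter = Interpreter(modelFile, Interpreter.Options())
    } catch (e: Exception) {
        Log.e("GemmaEmbedding", "Failed to load embedding model", e)
    }
}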

Step 5: Check Similarity

Now compute the cosine similarity between the user's input embedding and the embedding of each stored text chunk, i.e. the dot product of the two vectors divided by the product of their magnitudes.

fun cosineSimilarity(vectorA: FloatArray, vectorB: FloatArray): Float {
    if (vectorA.size != vectorB.size) throw IllegalArgumentException("Vectors must be of the same size")

    var dotProduct = 0.0
    var normA = 0.0
    var normB = 0.0
    for (i in vectorA.indices) {
        dotProduct += vectorA[i] * vectorB[i]
        normA += vectorA[i] * vectorA[i]
        normB += vectorB[i] * vectorB[i]
    }

    val magnitudeA = sqrt(normA)
    val magnitudeB = sqrt(normB)
    if (magnitudeA == 0.0 || magnitudeB == 0.0) return 0.0f

    return (dotProduct / (magnitudeA * magnitudeB)).toFloat()
}
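
Step 6 below sorts an allMatches collection. One way to build it is to score every stored chunk against the query embedding with cosineSimilarity. A minimal sketch, assuming a simple Match data class (a hypothetical name), the storedEmbeddings map loaded in Step 3, and userQuery as the raw user input:

// Hypothetical holder for a chunk and its similarity to the query
data class Match(val text: String, val similarity: Float)

val queryEmbedding = runEmbedding(userQuery)
val allMatches = storedEmbeddings.map { (chunkText, chunkEmbedding) ->
    Match(text = chunkText, similarity = cosineSimilarity(queryEmbedding, chunkEmbedding))
}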

Step 6: Fetch the Most Relevant Text

Sort the matches by similarity score and fetch the text of the top results.

// Take the top 3 matches, sorted by similarity
val topThreeMatches = allMatches.sortedByDescending { it.similarity }.take(3)

// Combine the text of the top matches into a single context string
val bestMatches = if (topThreeMatches.isNotEmpty()) {
    topThreeMatches.joinToString(separator = "\n\n---\n\n") { it.text }
} else {
    ""
}

Step 7: Generate an Answer with the LLM

In the final step, combine the user query and the retrieved context into a single prompt and pass it to the LLM (Gemma 3 1B) so it can generate a context-aware answer.

// Load the LLM
private fun loadLLM() {
    val taskOptions = LlmInferenceOptions.builder()
        .setModelPath("/data/local/tmp/Gemma3-1B-IT_seq128_q8_ekv1280.task")
        .setMaxTokens(MAX_TOKENS) // 1280
        .build()
    llmInference = LlmInference.createFromOptions(this, taskOptions)
}

// Prepare the prompt and generate the response
val inputPrompt =
    "You are a helpful assistant that responds to user query: ${query}, based ONLY on the context: ${bestMatches}. Use only text from the context. DO NOT offer any other help."

var stringBuilder = ""
llmInference?.generateResponseAsync(inputPrompt.take(MAX_TOKENS)) { partialResult, done ->
    stringBuilder += partialResult
    onResult(stringBuilder)
}

💡 Summary

Congratulations! You have now learned how to build an on-device RAG system. With this approach you can add powerful, private, and context-aware AI features to your Android applications without any server dependency.
