Project Hinglish - A gemma-2b fine-tuned Hinglish-to-English translator using the PEFT (LoRA) method.
Project Hinglish aims to develop a high-performance translation model that converts Hinglish (a blend of Hindi and English commonly used in informal communication in India) into standard English. The model is fine-tuned from gemma-2b using the PEFT (LoRA) method with rank 128. The aim is to handle the unique syntactic and lexical characteristics of Hinglish.
Fine-Tune Method:
- Fine-Tuning Approach Using PEFT (LoRA): The fine-tuning employs Parameter-Efficient Fine-Tuning (PEFT), specifically LoRA (Low-Rank Adaptation). LoRA adapts a pre-trained model efficiently by introducing trainable low-rank matrices into the model's attention and feed-forward layers. This allows significant adaptation while updating only a small fraction of the parameters, preserving the original model's strengths while tuning it to the nuances of Hinglish.
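To make the LoRA idea above concrete, here is a minimal NumPy sketch (not the actual training code): the frozen weight `W` is left untouched, and a rank-`r` update `A @ B` is added on top, scaled by `alpha / r`. All names and shapes here are illustrative assumptions, not values from the real model.

```python
import numpy as np

rng = np.random.default_rng(0)
d_in, d_out, rank, alpha = 512, 512, 128, 128  # rank 128, as used in this project

W = rng.normal(size=(d_in, d_out))        # frozen pretrained weight
A = rng.normal(size=(d_in, rank)) * 0.01  # trainable low-rank factor
B = np.zeros((rank, d_out))               # zero-initialized, so training starts from the base model

def lora_forward(x):
    # Output = frozen path + scaled low-rank update (A @ B has rank <= 128)
    return x @ W + (alpha / rank) * (x @ A @ B)

x = rng.normal(size=(1, d_in))
# With B = 0 the adapted layer matches the frozen layer exactly
assert np.allclose(lora_forward(x), x @ W)
```

Note that the trainable parameters (`A` and `B` together) are smaller than the frozen `W`, which is the source of LoRA's efficiency.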
- Base Model: Gemma-2b-v2
- Dataset: cmu_hinglish_dog, combined with sentences taken from my own daily-life chats with friends and Uber messages.
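Training pairs are rendered into the same prompt template used at inference time. The sketch below shows that formatting step on hypothetical example pairs (the sentences are stand-ins, not actual dataset rows), assuming each record pairs a Hinglish utterance (`hi_en`) with its English translation (`en`):

```python
# Prompt template: Hinglish source followed by the English target.
template = "Hinglish:\n{hi_en}\n\nEnglish:\n{en}"

# Hypothetical pairs standing in for dataset records.
pairs = [
    {"hi_en": "aapki age kya hai?", "en": "What is your age?"},
    {"hi_en": "kal milte hain.", "en": "See you tomorrow."},
]

# Render each pair into a single training string.
train_texts = [template.format(**p) for p in pairs]
print(train_texts[0])
```

At inference time the same template is used with `en=""`, so the model completes the English side.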
Example Output
Usage
```python
import string

import keras_nlp

# Load the base Gemma model and attach LoRA adapters at rank 128.
gemma_lm = keras_nlp.models.GemmaCausalLM.from_preset("hf://google/gemma-2b-keras")
gemma_lm.backbone.enable_lora(rank=128)
gemma_lm.preprocessor.sequence_length = 256
gemma_lm.backbone.load_lora_weights("modelv3_hien_to_en(128).lora.h5")

# Prompt template used during fine-tuning.
template = "Hinglish:\n{hi_en}\n\nEnglish:\n{en}"

def check_sentence_end(sentence):
    # Append a period if the sentence does not already end with punctuation.
    if sentence and sentence[-1] in string.punctuation:
        return sentence
    return sentence + "."

sentence = "aapki age kya hai?"
sentence = check_sentence_end(sentence)
input_x = template.format(hi_en=sentence, en="")
output_x = gemma_lm.generate(input_x, max_length=128)
print("Original:\n" + sentence)
print()
print("Translated:\n" + output_x[len(input_x):])
```