how to add eos token #419

Open
gamercoder153 opened this issue May 3, 2024 · 28 comments
Labels
fixed - pending confirmation Fixed, waiting for confirmation from poster

Comments

@gamercoder153

how to add eos token

@danielhanchen
Contributor

Our conversational notebooks add eos_tokens to Llama-3, for example: https://colab.research.google.com/drive/1XamvWYinY6FOSX9GLvnqSjjsNflxdhNc?usp=sharing

All of the notebooks on our GitHub page (https://github.com/unslothai/unsloth?tab=readme-ov-file#-finetune-for-free) add eos tokens.
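For reference, the core of what those notebooks do is append the tokenizer's eos_token to every training example so the model learns when to stop. A minimal sketch of that pattern (the prompt template, dataset column names, and the dataset variable are assumptions for illustration, not the exact notebook code):

EOS_TOKEN = tokenizer.eos_token  # must be appended, otherwise the model never learns to stop

def formatting_prompts_func(examples):
    # Build one training string per row and end each one with the EOS token.
    texts = []
    for instruction, output in zip(examples["instruction"], examples["output"]):
        texts.append(f"### Instruction:\n{instruction}\n\n### Response:\n{output}" + EOS_TOKEN)
    return {"text": texts}

dataset = dataset.map(formatting_prompts_func, batched = True)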

@gamercoder153
Author

gamercoder153 commented May 4, 2024

[screenshot of the error attached]
I'm facing this error with your Colab notebook during inference: https://colab.research.google.com/drive/135ced7oHytdxu3N2DNe1Z0kqjyYIkDXp?usp=sharing

@mxtsai

mxtsai commented May 4, 2024

I'm facing the same problem here.

@gamercoder153
Author

@mxtsai which model?

@mxtsai

mxtsai commented May 4, 2024

I've tried Llama 3 and other models. Not sure where the issue is...

@danielhanchen
Contributor

Oh wait, this is Llama-3 base, right? Hmm.
Where are you all doing inference - Ollama? llama.cpp?

@gamercoder153
Author

@danielhanchen in colab after finetuning

@KillerShoaib

@danielhanchen in colab after finetuning

I was having the same issue and created issue #416. I've posted a solution here.

@gamercoder153
Author

@KillerShoaib Man, thanks a lot for the fix, I really appreciate it. Can you explain where to add that? I'm using a Google Colab T4 GPU: https://colab.research.google.com/drive/135ced7oHytdxu3N2DNe1Z0kqjyYIkDXp?usp=sharing

@danielhanchen
Contributor

@KillerShoaib @gamercoder153 @mxtsai Apologies, I just fixed it! No need to change any code - I updated the tokenizer configs, so all should be fine now!

@KillerShoaib

@KillerShoaib Man, thanks a lot for the fix, I really appreciate it. Can you explain where to add that? I'm using a Google Colab T4 GPU: https://colab.research.google.com/drive/135ced7oHytdxu3N2DNe1Z0kqjyYIkDXp?usp=sharing

As @danielhanchen mentioned, you don't need to change the code anymore; the bug has been fixed.

@gamercoder153
Author

great! thanks a lot guys @KillerShoaib @danielhanchen

@danielhanchen danielhanchen added the fixed - pending confirmation Fixed, waiting for confirmation from poster label May 5, 2024
@gamercoder153
Author

@danielhanchen @KillerShoaib I checked it once again, and it's literally the same.
[screenshot of the error attached]

@KillerShoaib

@danielhanchen @KillerShoaib I checked it once again, and it's literally the same. [screenshot of the error attached]

Okay, I think you're using your fine-tuned model, which was fine-tuned on top of the old Unsloth Llama 3 (where the pad token and eos token were the same). In that case, you need to change the pad token value.

Here is the code to do that:

################################### Existing Colab Code ###################################

from unsloth import FastLanguageModel
import torch
max_seq_length = 2048 # Choose any! We auto support RoPE Scaling internally!
dtype = None # None for auto detection. Float16 for Tesla T4, V100, Bfloat16 for Ampere+
load_in_4bit = True # Use 4bit quantization to reduce memory usage. Can be False.

# 4bit pre quantized models we support for 4x faster downloading + no OOMs.
fourbit_models = [
    "unsloth/mistral-7b-bnb-4bit",
    "unsloth/mistral-7b-instruct-v0.2-bnb-4bit",
    "unsloth/llama-2-7b-bnb-4bit",
    "unsloth/gemma-7b-bnb-4bit",
    "unsloth/gemma-7b-it-bnb-4bit", # Instruct version of Gemma 7b
    "unsloth/gemma-2b-bnb-4bit",
    "unsloth/gemma-2b-it-bnb-4bit", # Instruct version of Gemma 2b
    "unsloth/llama-3-8b-bnb-4bit", # [NEW] 15 Trillion token Llama-3
] # More models at https://huggingface.co/unsloth

model, tokenizer = FastLanguageModel.from_pretrained(
    model_name = "your_finetuned_model_name",   ##### Change the name according to your finetuned model #####
    max_seq_length = max_seq_length,
    dtype = dtype,
    load_in_4bit = load_in_4bit,
    # token = "hf_...", # use one if using gated models like meta-llama/Llama-2-7b-hf
)

## If your model is already saved as a LoRA adapter, you don't need to call .get_peft_model()
model = FastLanguageModel.get_peft_model(
    model,
    r = 16, # Choose any number > 0 ! Suggested 8, 16, 32, 64, 128
    target_modules = ["q_proj", "k_proj", "v_proj", "o_proj",
                      "gate_proj", "up_proj", "down_proj",],
    lora_alpha = 16,
    lora_dropout = 0, # Supports any, but = 0 is optimized
    bias = "none",    # Supports any, but = "none" is optimized
    # [NEW] "unsloth" uses 30% less VRAM, fits 2x larger batch sizes!
    use_gradient_checkpointing = "unsloth", # True or "unsloth" for very long context
    random_state = 3407,
    use_rslora = False,  # We support rank stabilized LoRA
    loftq_config = None, # And LoftQ
)


################## Additional Code to change pad token value ###################

tokenizer.add_special_tokens({"pad_token": "<|reserved_special_token_0|>"})
model.config.pad_token_id = tokenizer.pad_token_id # updating model config
tokenizer.padding_side = 'right' # padding to right (otherwise SFTTrainer shows warning)

################## Rest of the colab code ###################
.
.
.

After changing the pad token value, you need to fine-tune the model again so that it can learn to predict the EOS token. Try a few iterations (e.g. 30-50) and check whether the model is able to generate the eos token.

This example is for models that were fine-tuned on top of the old Unsloth Llama 3 (same pad & eos token). Unsloth has updated their model, so if you are using their current Llama 3 model you won't have to follow these steps; just follow the original Colab notebook.
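A minimal way to run that check after the short fine-tuning run (a sketch; it assumes the model and tokenizer from the cells above, and the prompt is just an example):

FastLanguageModel.for_inference(model)  # switch Unsloth to its faster inference mode

inputs = tokenizer(["### Instruction:\nName three colors.\n\n### Response:\n"], return_tensors = "pt").to("cuda")
outputs = model.generate(**inputs, max_new_tokens = 64, pad_token_id = tokenizer.pad_token_id)

new_tokens = outputs[0][inputs["input_ids"].shape[1]:]  # keep only the generated part
print("EOS generated:", tokenizer.eos_token_id in new_tokens.tolist())
print(tokenizer.decode(new_tokens))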

@gamercoder153
Author

@KillerShoaib I'm using this Colab notebook from their GitHub: https://colab.research.google.com/drive/135ced7oHytdxu3N2DNe1Z0kqjyYIkDXp?usp=sharing
I am using the llama3-8b-Instruct model: https://huggingface.co/unsloth/llama-3-8b-Instruct

@KillerShoaib

@KillerShoaib I'm using this Colab notebook from their GitHub: https://colab.research.google.com/drive/135ced7oHytdxu3N2DNe1Z0kqjyYIkDXp?usp=sharing I am using the llama3-8b-Instruct model: https://huggingface.co/unsloth/llama-3-8b-Instruct

I've just downloaded unsloth/llama-3-8b-Instruct and verified its pad token and eos token values. They are different; as @danielhanchen mentioned, he has solved the issue.

[screenshot: the pad token and eos token are now different]

I even trained the model on the Alpaca dataset for 60 epochs and got an answer with an eos token.

[screenshot: the generated answer ends with an eos token]

Everything is working fine on my end. Are you sure you aren't loading an already fine-tuned version of the old Llama 3 (which had the same eos & pad token) that you saved locally (or to the Hugging Face Hub)?
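For anyone who wants to reproduce that check, a quick sketch (it assumes the current weights on the Hub):

from unsloth import FastLanguageModel

model, tokenizer = FastLanguageModel.from_pretrained(
    model_name = "unsloth/llama-3-8b-Instruct",
    max_seq_length = 2048,
    load_in_4bit = True,
)
print("eos:", tokenizer.eos_token, tokenizer.eos_token_id)
print("pad:", tokenizer.pad_token, tokenizer.pad_token_id)
# After the fix, the pad token should be a different token from the eos token.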

@gamercoder153
Author

@KillerShoaib No, I'm not using any older fine-tuned model. Let me try it once again.

@gamercoder153
Author

@KillerShoaib It's the same stuff, dude!! It just generates <|end_of_text|> sometimes and otherwise goes on a loop like this
[screenshot of the looping output]
until 128 max new tokens.

And if text streaming is true, it does the same thing again.

@KillerShoaib

@KillerShoaib It's the same stuff, dude!! It just generates <|end_of_text|> sometimes and otherwise goes on a loop like this [screenshot of the looping output] until 128 max new tokens.

And if text streaming is true, it does the same thing again.

Since you're getting the eos token sometimes, there is no problem with the Llama 3 model. You need to fine-tune it for more iterations; the model is still learning to predict the eos token.

@adeel-maker

Adding pad_token_id solved this issue for me:
outputs = model.generate(**inputs, max_new_tokens = 200, use_cache = False, pad_token_id=tokenizer.pad_token_id)
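For context, this is roughly the inference cell where that line goes (a sketch; it assumes the model and tokenizer from the training cells above, and the prompt is made up):

FastLanguageModel.for_inference(model)  # enable inference mode

inputs = tokenizer(["Continue the sequence: 1, 1, 2, 3, 5, 8,"], return_tensors = "pt").to("cuda")

# Passing pad_token_id explicitly stops transformers from silently falling back
# to the eos token as the pad token during generation.
outputs = model.generate(**inputs, max_new_tokens = 200, use_cache = False,
                         pad_token_id = tokenizer.pad_token_id)
print(tokenizer.batch_decode(outputs))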

@gamercoder153
Author

@adeel-maker which notebook are you using?

@adeel-maker

adeel-maker commented May 13, 2024

@gamercoder153 Kaggle or Google Colab

@gamercoder153
Author

@adeel-maker can you share it?

@adeel-maker

adeel-maker commented May 14, 2024

@gamercoder153 the same one provided by Unsloth, just putting my data in there!

@gamercoder153
Author

adding pad_token_id solved this issue for me outputs = model.generate(**inputs, max_new_tokens = 200, use_cache = False, pad_token_id=tokenizer.pad_token_id)

In the notebook, where did you add this section of code @adeel-maker?

@adeel-maker

@gamercoder153 at the inference portion of this notebook, right after the training portion!

@gamercoder153
Author

@adeel-maker ok let me try

@shensmobile

If you're using the instruct model, you need to change the EOS token. The tokenizer still has the EOS token as <|end_of_text|> when it should be <|eot_id|>. When you build your Alpaca dataset, change this line:

EOS_TOKEN = tokenizer.eos_token # Must add EOS_TOKEN

to this:

EOS_TOKEN = "<|eot_id|>" # Must add EOS_TOKEN
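A quick way to sanity-check which ids these tokens map to in the instruct tokenizer (a sketch; the ids in the comments are the usual Llama-3 values, not guaranteed):

print(tokenizer.eos_token, tokenizer.eos_token_id)
print(tokenizer.convert_tokens_to_ids("<|end_of_text|>"))  # usually 128001
print(tokenizer.convert_tokens_to_ids("<|eot_id|>"))       # usually 128009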
