Is there a complete, successfully run example of tuning_lm_with_rl? #3

Open
jerry1993-tech opened this issue May 23, 2023 · 1 comment

@jerry1993-tech

accelerate launch --multi_gpu --num_machines 1 --num_processes 8 \
    tuning_lm_with_rl.py \
    --log_with wandb \
    --model_name <LLAMA_FINETUNED_MODEL> \
    --reward_model_name <LLAMA_RM_MODEL> \
    --adafactor False \
    --tokenizer_name <LLAMA_TOKENIZER> \
    --save_freq 100 \
    --output_max_length 128 \
    --batch_size 8 \
    --gradient_accumulation_steps 8 \
    --batched_gen True \
    --ppo_epochs 4 \
    --learning_rate 1.4e-5 \
    --early_stopping True \
    --output_dir './checkpoints/tuning_llama_rl/'
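For reference, a hypothetical filled-in invocation might look like the sketch below. All checkpoint paths here are illustrative assumptions, not paths shipped with this repo; substitute the directories produced by your own SFT and reward-model training runs.

# Hypothetical example; every path below is an assumption for illustration.
# --model_name:        SFT-finetuned LLaMA checkpoint used as the PPO policy (assumed path)
# --reward_model_name: trained reward-model checkpoint (assumed path)
# --tokenizer_name:    tokenizer matching the policy model (assumed path)
accelerate launch --multi_gpu --num_machines 1 --num_processes 8 \
    tuning_lm_with_rl.py \
    --log_with wandb \
    --model_name ./checkpoints/llama_sft/ \
    --reward_model_name ./checkpoints/llama_rm/ \
    --adafactor False \
    --tokenizer_name ./checkpoints/llama_sft/ \
    --save_freq 100 \
    --output_max_length 128 \
    --batch_size 8 \
    --gradient_accumulation_steps 8 \
    --batched_gen True \
    --ppo_epochs 4 \
    --learning_rate 1.4e-5 \
    --early_stopping True \
    --output_dir './checkpoints/tuning_llama_rl/'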

Which file does <LLAMA_RM_MODEL> refer to? Is it "Wenzhong-GPT2-110M_peft_gpt-4-llm_rm_xxx_xx" or the original base model? Would appreciate an answer.

I'd like to get in touch about a possible collaboration, thanks. My WeChat: xyj15764222030

@43zxj commented Jun 28, 2023

> Which file does <LLAMA_RM_MODEL> refer to? Is it "Wenzhong-GPT2-110M_peft_gpt-4-llm_rm_xxx_xx" or the original base model?

Do you know which files <LLAMA_FINETUNED_MODEL>, <LLAMA_RM_MODEL>, and <LLAMA_TOKENIZER> each refer to? Really looking forward to your reply, thanks!
