
After full fine-tuning, the model has memorized the in-domain knowledge, but when asked ordinary questions like "Hello" or "What's your name?", it also answers with in-domain knowledge #206

Open
heiheiwangergou opened this issue Apr 27, 2023 · 5 comments

Comments

@heiheiwangergou

heiheiwangergou commented Apr 27, 2023

[WeCom screenshot of the model's replies]

Did I set some parameter incorrectly? Single A100 GPU, 14,000 QA pairs. The training parameters are below:

[screenshots of the training parameters]

'max_seq_length': 1024,   # 2048 recommended if resources allow, consistent with the official setting
'max_target_length': 100, # maximum prediction length; reserved field

Would these two parameters affect the result?
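For reference, these two values mainly bound how each QA pair is tokenized and truncated. A minimal sketch of that typical behavior, assuming a ChatGLM-style tokenizer; the model name and helper are illustrative, not this repo's actual preprocessing:

```python
# Sketch of how max_seq_length / max_target_length typically act;
# NOT the repo's exact code, just the usual truncation logic.
from transformers import AutoTokenizer

MAX_SEQ_LENGTH = 1024     # total token budget for prompt + answer
MAX_TARGET_LENGTH = 100   # token budget reserved for the answer

tokenizer = AutoTokenizer.from_pretrained(
    "THUDM/chatglm-6b", trust_remote_code=True  # assumed base model
)

def build_example(question: str, answer: str) -> list[int]:
    q_ids = tokenizer.encode(question, add_special_tokens=False)
    a_ids = tokenizer.encode(answer, add_special_tokens=False)
    # Tokens past the budget are silently dropped, so long answers among
    # the 14,000 QA pairs lose their tails. That hurts fit, but by itself
    # it should not make the model forget general chit-chat.
    a_ids = a_ids[:MAX_TARGET_LENGTH]
    q_ids = q_ids[: MAX_SEQ_LENGTH - len(a_ids)]
    return q_ids + a_ids
```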

@cywjava

cywjava commented Apr 27, 2023

+1

@cristianohello

+1

@lianrzh

lianrzh commented Apr 30, 2023

Was this full fine-tuning, or LoRA training?

@lxw0109

lxw0109 commented May 10, 2023

+1
I tried both full fine-tuning and LoRA training, and both show catastrophic forgetting. The forgetting is a bit more pronounced with LoRA, but neither training method fits the data well.
The training data is only 300 examples; epoch: 1, max_seq_length: 2048; the other hyperparameters were basically left untouched.

@ssbuild
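
A common mitigation for this kind of forgetting is to move fewer weights (small LoRA rank, low learning rate) and to mix some general chat data back into the training set. A minimal sketch with the peft library; the model name, hyperparameters, and mixing ratio are illustrative assumptions, not settings from this thread:

```python
# Hedged sketch: small-rank LoRA plus general-data mixing to reduce
# catastrophic forgetting. All values here are assumptions for
# illustration, not tested settings.
import random
from peft import LoraConfig, TaskType, get_peft_model
from transformers import AutoModel

model = AutoModel.from_pretrained("THUDM/chatglm-6b", trust_remote_code=True)

lora_config = LoraConfig(
    task_type=TaskType.CAUSAL_LM,
    target_modules=["query_key_value"],  # ChatGLM's fused attention projection
    r=8,              # small rank: fewer trainable weights, less drift
    lora_alpha=16,
    lora_dropout=0.05,
)
model = get_peft_model(model, lora_config)
model.print_trainable_parameters()  # only the adapter weights train

def mix_datasets(domain_qa, general_qa, general_ratio=0.25):
    """Blend general QA pairs into the domain set so the model keeps
    seeing ordinary conversations during fine-tuning."""
    n_general = min(int(len(domain_qa) * general_ratio), len(general_qa))
    mixed = list(domain_qa) + random.sample(list(general_qa), n_general)
    random.shuffle(mixed)
    return mixed
```

With only 300 domain examples, as in the comment above, even one epoch of full fine-tuning can overwrite general behavior, so the data-mixing step tends to matter more than the adapter choice.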

@liu459977653

Same question here.
