Issue type: Model training and fine-tuning
Base model: Llama-3-Chinese-8B-Instruct (instruct model)
OS: Linux
Llama 3's tokenizer is built with tiktoken rather than sentencepiece. If I train a Chinese tokenizer with sentencepiece, how can I merge it with Llama 3's existing vocabulary? Or is there some other way to extend the Llama 3 vocabulary?
# Please paste your dependency info here (inside this code block)
# Please paste your run logs here (inside this code block)
You can train a new vocabulary with tiktoken and then merge the two.
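A minimal sketch of the merge step, independent of which library produced the new tokens. Both `base_vocab` (standing in for Llama 3's tiktoken vocabulary) and `new_tokens` (standing in for pieces extracted from a sentencepiece model trained on Chinese text) are illustrative placeholders, not the real vocabularies:

```python
# Sketch: append new Chinese tokens to an existing token -> id vocabulary,
# skipping any piece the base vocabulary already covers.

def merge_vocab(base_vocab: dict[str, int], new_tokens: list[str]) -> dict[str, int]:
    """Return a copy of base_vocab with unseen tokens appended at fresh ids."""
    merged = dict(base_vocab)
    next_id = max(merged.values()) + 1 if merged else 0
    for tok in new_tokens:
        if tok not in merged:  # avoid duplicating tokens Llama 3 already has
            merged[tok] = next_id
            next_id += 1
    return merged

# Placeholder data for illustration only.
base = {"hello": 0, "world": 1}
extra = ["你好", "world", "世界"]
print(merge_vocab(base, extra))
```

In practice, when working through Hugging Face `transformers`, the same effect is usually achieved with `tokenizer.add_tokens(new_tokens)` followed by `model.resize_token_embeddings(len(tokenizer))` so the embedding matrix grows to cover the new ids; the newly added rows then need to be trained (e.g. continued pre-training on Chinese text).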