
repeated answer: When I use vLLM with Qwen-7B-Chat, the generated text does not end until the max length is reached, and the answer is repeated #7215

Open
ChengShuting opened this issue May 14, 2024 · 1 comment


@ChengShuting

sampling_parameters = {
    "temperature": "0",
    "top_p": "0.5",
    "max_tokens": "300"
}

python3 client.py

[image attachment: screenshot of the generated output]
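For reference, here is a minimal sketch (not taken from the report itself) of how equivalent sampling parameters could be passed through vLLM's offline Python API, with an explicit stop string added; the Qwen chat end-of-turn token "<|im_end|>" and the example prompt are assumptions and should be checked against the model's chat template:

from vllm import LLM, SamplingParams

# Load the chat model; Qwen models need trust_remote_code=True.
llm = LLM(model="Qwen/Qwen-7B-Chat", trust_remote_code=True)

sampling_params = SamplingParams(
    temperature=0.0,      # greedy decoding, matching "temperature": "0" above
    top_p=0.5,
    max_tokens=300,
    stop=["<|im_end|>"],  # assumed Qwen chat end-of-turn token so generation can stop early
)

# Hypothetical prompt, for illustration only.
outputs = llm.generate(["Hello, please introduce yourself."], sampling_params)
print(outputs[0].outputs[0].text)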


statiraju commented May 14, 2024

Sorry, it is difficult to understand your question here. Can you post what the expected result is?
