Inconsistent Output between LightLLM and Transformers Inference Library #309

Open
Lvjinhong opened this issue Jan 19, 2024 · 2 comments
Labels
bug (Something isn't working)

Comments

@Lvjinhong

When I specify 'max new tokens', LightLLM's output length always equals that maximum. Transformers, however, sometimes stops earlier based on the model itself (for example, when the model emits its end-of-sequence token), so its output can be shorter than 'max new tokens'. I believe the Transformers behavior is correct: always generating exactly the maximum number of tokens is implausible and tends to produce repetitive output.
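For reference, a minimal sketch of the Transformers behavior I am comparing against (the model path is a placeholder):

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# In transformers, max_new_tokens is only an upper bound: generation stops
# early as soon as the model emits its end-of-sequence token.
tokenizer = AutoTokenizer.from_pretrained("/path/to/model")
model = AutoModelForCausalLM.from_pretrained("/path/to/model")

inputs = tokenizer("What is AI?", return_tensors="pt")
output_ids = model.generate(
    **inputs,
    max_new_tokens=512,                   # upper bound, not a target length
    eos_token_id=tokenizer.eos_token_id,  # allows early stopping at EOS
)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```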

Lvjinhong added the bug label on Jan 19, 2024
@hiworldwzj
Collaborator

@Lvjinhong You can specify the stop token ID by setting the --eos_id xxx argument when starting the server, or by passing the stop_sequences parameter in the request parameters.
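A minimal sketch of a request using stop_sequences (assuming the /generate HTTP endpoint and the inputs/parameters request shape from the LightLLM README; the port, model path, eos_id, and stop strings are placeholders for your own setup):

```python
import requests

# Start the server with an explicit stop token, e.g.:
#   python -m lightllm.server.api_server --model_dir /path/to/model --eos_id 2
# (the exact eos_id depends on your tokenizer)

payload = {
    "inputs": "What is AI?",
    "parameters": {
        "max_new_tokens": 512,        # upper bound only
        "stop_sequences": ["</s>"],   # stop early when this string appears
    },
}
resp = requests.post("http://localhost:8000/generate", json=payload)
print(resp.json())
```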

@shihaobai
Collaborator

@Lvjinhong You can also check whether your input is wrapped with the correct prompt template. LightLLM does not apply a prompt template to inputs, while Transformers usually applies one in its chat functions.
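For example, a minimal sketch (assuming your checkpoint ships a chat template; the model path is a placeholder) of building the same templated prompt that the Transformers chat utilities would use, so both libraries see identically formatted input:

```python
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("/path/to/model")
messages = [{"role": "user", "content": "What is AI?"}]

# Reproduce the prompt that transformers' chat functions would build,
# then send this string to LightLLM as the raw "inputs" field.
prompt = tokenizer.apply_chat_template(
    messages, tokenize=False, add_generation_prompt=True
)
print(prompt)
```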
