How to enable infinite generations? #820
Comments
Would banning the EOS (End-Of-Stream) token be a possible solution? Settings -> Advanced -> EOS Token Ban
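EOS banning is typically implemented by masking the EOS token's logit before sampling, so the sampler can never choose it. A minimal sketch of that idea (the function name and toy logits are illustrative, not koboldcpp's actual internals):

```python
import math

def ban_eos(logits, eos_token_id):
    """Set the EOS token's logit to -inf so it can never be sampled,
    forcing generation to continue until some other limit stops it."""
    logits = list(logits)
    logits[eos_token_id] = -math.inf
    return logits

# Toy example: token 2 is EOS and would otherwise be the most likely pick.
masked = ban_eos([0.1, 0.5, 3.0, 0.2], eos_token_id=2)
best = max(range(len(masked)), key=lambda i: masked[i])  # EOS is never chosen
```

With the mask applied, greedy selection falls back to the next-best token instead of stopping.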
Yes, this helps in terms of tokens, but there's an
It can be set higher, up to about 80% of the max context length. Try increasing your max context length first, then manually override and enter an amount to generate larger than 512.
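The arithmetic behind the "about 80%" guideline can be sketched as follows (the function name and the 0.8 fraction are assumptions drawn from the comment above, not a documented koboldcpp formula):

```python
def max_amount_to_generate(max_context_length, fraction=0.8):
    """Roughly 80% of the context window can be budgeted for generation;
    the remainder stays reserved for the prompt."""
    return int(max_context_length * fraction)

budget = max_amount_to_generate(4096)  # far above the default of 512
```

So with a 4096-token context, the override can safely be set in the low thousands rather than 512.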
Not only can you type any number into "amount to generate", you can also just press the send button with an empty input box to force the model to continue right where it stopped! Personally, I edit the model's output often, so generating very long text is pointless for me: it would all be reprocessed on my next turn if I edit something.
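The "press send with an empty box" trick amounts to repeatedly feeding the accumulated text back to the model with no new user input. A minimal sketch, where `generate` is a stand-in for a call to koboldcpp's generation API (the loop structure is illustrative, not the client's actual code):

```python
def continue_generation(generate, prompt, rounds):
    """Emulate pressing send with an empty input box: each round sends
    only the accumulated text, so the model picks up where it stopped."""
    text = prompt
    for _ in range(rounds):
        text += generate(text)  # empty user input: only prior text is sent
    return text

# Stub generator for illustration; a real client would call the server here.
out = continue_generation(lambda t: " and on", "It goes on", rounds=2)
```

Each round appends the model's continuation, so the story grows without any new user text.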
Also, you may use Idle Responses to let the generation continue indefinitely, as if you were doing it manually. The only problem is that it can get stuck somewhere even after banning EOS tokens, but for me that is a rare occurrence.
What is that? |
I do. I'm just running `python koboldcpp.py`! Anyway, what does this thing do?
Idle Responses allows the AI to automatically continue the response without user input after a set amount of idle time.
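The mechanism described can be sketched as a simple idle timer: if no user input has arrived within the threshold, an automatic continuation is triggered. The function and its parameters are hypothetical, illustrating the behavior rather than koboldcpp's actual implementation:

```python
import time

def idle_responder(generate, idle_seconds, last_input_time, now=None):
    """Fire an automatic continuation when the user has been idle
    for longer than `idle_seconds`, mimicking the Idle Responses setting."""
    now = time.monotonic() if now is None else now
    if now - last_input_time >= idle_seconds:
        return generate()  # no user input arrived: continue automatically
    return None            # user is still active: do nothing

# With 70s of inactivity against a 60s threshold, a continuation fires.
result = idle_responder(lambda: "continued", idle_seconds=60,
                        last_input_time=0.0, now=70.0)
```

Chaining this check in a loop would produce the "infinite" hands-off generation the thread asks about, subject to the EOS caveat mentioned above.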
I want the model to continue forever, no matter what tokens are printed, until I stop it. Is that possible?