When using gpt-3.5-turbo you may often encounter agents entering a "gratitude loop": once they complete a task, they begin congratulating and thanking each other in a continuous loop. This is a limitation of gpt-3.5-turbo's performance, in contrast to gpt-4, which has no problem remembering instructions. It can hinder experimentation when you try out your own use case with cheaper models.
A workaround is to add an additional termination notice to the prompt. This acts as a "little nudge" reminding the LLM to terminate the conversation when its task is complete. You can do this by appending a string such as the following to your user input string:
```python
prompt = "Some user query"

termination_notice = (
    '\n\nDo not show appreciation in your responses, say only what is necessary. '
    'if "Thank you" or "You\'re welcome" are said in the conversation, then say TERMINATE '
    'to indicate the conversation is finished and this is your last message.'
)

prompt += termination_notice
```
Note: This workaround gets the job done around 90% of the time, but there are still occurrences where the LLM forgets to terminate the conversation.
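Because the nudge is not fully reliable, it helps to also check for the TERMINATE marker on the receiving side and stop the loop there. Many multi-agent frameworks let you supply a predicate for this. Below is a minimal, framework-agnostic sketch; the helper names `with_termination_notice` and `is_termination_msg` are illustrative, not part of any library's API:

```python
# Illustrative helpers for the termination-notice workaround.
# Names here are hypothetical, not from any framework.

TERMINATION_NOTICE = (
    '\n\nDo not show appreciation in your responses, say only what is necessary. '
    'if "Thank you" or "You\'re welcome" are said in the conversation, then say TERMINATE '
    'to indicate the conversation is finished and this is your last message.'
)


def with_termination_notice(prompt: str) -> str:
    """Append the termination nudge to a user prompt."""
    return prompt + TERMINATION_NOTICE


def is_termination_msg(content: str) -> bool:
    """Return True when an agent reply signals the end of the conversation."""
    return content.rstrip().endswith("TERMINATE")
```

A predicate like `is_termination_msg` can then be wired into your agent loop (for example, passed to whatever hook your framework exposes for ending a chat) so the conversation stops even if a stray "Thank you" slips through first.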