Can anyone else reproduce this odd behavior when running ollama/codeqwen:7b-chat-v1.5-q4_0?
The first MonologueAgent prompt has too many thoughts for it to succeed. It just prints out something like: 101011 10 ▅ ▅10
This is the last thought it can tolerate while still printing a correctly-formatted JSON response:
{
"action": "think",
"args": {
"thought": "Very cool. Now to accomplish my task."
}
},
As soon as the next one is added, it generates the nonsense output above:
{
"action": "think",
"args": {
"thought": "I'll need a strategy. And as I make progress, I'll need to keep refining that strategy. I'll need to set goals, and break them into sub-goals."
}
},
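If the issue really is that the seed monologue overwhelms a small quantized local model, one workaround would be to cap how many seed thoughts go into the first prompt. A minimal sketch, assuming a hypothetical list of seed thoughts (the names INITIAL_THOUGHTS, MAX_SEED_THOUGHTS, and seed_monologue are illustrative, not the project's actual identifiers):

```python
# Hypothetical sketch: cap the number of seed thoughts sent to the model.
# MAX_SEED_THOUGHTS is an assumed, tunable limit -- not a real project setting.
MAX_SEED_THOUGHTS = 12

INITIAL_THOUGHTS = [
    "Very cool. Now to accomplish my task.",
    "I'll need a strategy. And as I make progress, I'll need to keep "
    "refining that strategy. I'll need to set goals, and break them into sub-goals.",
    # ... remaining seed thoughts ...
]

def seed_monologue(thoughts, limit=MAX_SEED_THOUGHTS):
    """Return at most `limit` seed thoughts, keeping the earliest ones."""
    return thoughts[:limit]
```

In this sketch, dropping the later thoughts keeps the first prompt under the point where the model's output degrades, at the cost of a less detailed persona.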
Additional context
Using the docker image, launched from WSL on Windows 11. Ollama version 0.1.32
Is there already a way to expose agent-specific parameters in the UI, or would I need to do my own build to adjust the parameter below (which I suspect may be too high for this local LLM)?
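One common pattern for making such a parameter adjustable without a rebuild is to read it from an environment variable, which Docker can pass through with `-e`. A minimal sketch, assuming a hypothetical variable name MAX_MONOLOGUE_LENGTH (not confirmed to be the project's actual setting):

```python
import os

# Hypothetical sketch: override an agent parameter via the environment
# so the Docker image does not need to be rebuilt to change it.
# "MAX_MONOLOGUE_LENGTH" and the default of 20 are illustrative assumptions.
MAX_MONOLOGUE_LENGTH = int(os.environ.get("MAX_MONOLOGUE_LENGTH", "20"))
```

With a pattern like this, the value could be lowered for small local models with something like `docker run -e MAX_MONOLOGUE_LENGTH=10 ...`.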