Describe the bug
The code snippets shown in the chat window occasionally lose syntax highlighting and appear in plain white. Both the input code and the generated code are affected; sometimes both the selected code and the generated code appear without highlighting. One thing I could observe is that the syntax highlighting is sometimes applied correctly right up to the end of code generation, and is lost again the moment generation finishes.
I am not sure what is causing this. I don't use any special themes, and since the snippets are still formatted as code blocks, I assume only the color coding of the syntax highlighting is missing in the last step.
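To illustrate what I mean, here is a rough sketch of how I imagine the rendering step works, assuming the chat webview highlights fenced blocks with a highlight.js-style library (an assumption on my part; I have not looked at twinny's actual code):

```typescript
// Rough sketch, NOT twinny's actual implementation: assuming a
// highlight.js-style renderer decides coloring per fenced block.
import hljs from "highlight.js";

function renderCodeBlock(code: string, language?: string): string {
  // If the fence carries a recognized language tag, highlighting succeeds...
  if (language && hljs.getLanguage(language)) {
    return hljs.highlight(code, { language }).value;
  }
  // ...otherwise the block falls back to plain, uncolored text,
  // which would match the all-white snippets I am seeing.
  return code;
}
```

If the language tag is missing or unrecognized at that final step, the block would render as plain white text, which matches what I see.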
To Reproduce
1. Use Ollama as your provider.
2. Use llama3 as your chat model.
3. Mark code in your editor and have it fixed, explained, or refactored by twinny (e.g. a snippet like the one below).
4. Either the selected code or the generated code is missing syntax highlighting and appears in plain white (see screenshot below).
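The kind of snippet I select is nothing special; any small piece of code seems to trigger it. A hypothetical example (not my actual code):

```typescript
// Hypothetical example snippet to select before asking twinny
// to explain or refactor it (any small piece of code will do).
function clamp(value: number, min: number, max: number): number {
  return Math.min(Math.max(value, min), max);
}
```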
Expected behavior
Syntax highlighting is correctly applied to both the input code and the generated code.
Screenshots
Edit: the first screenshot shows a different model than llama3 because I moved over to codeqwen after encountering the syntax highlighting issue. To be clear, I was using llama3 when I first encountered it.
Logging
Enable logging in the extension settings if not already enabled (you may need to restart vscode if you don't see logs). Provide the log with the report.
API Provider
Ollama
Chat or Auto Complete?
chat
Model Name
llama3:latest
Desktop (please complete the following information):
OS: Ubuntu 24.04
VSCode (see metadata attached below)
Twinny Version: v3.11.35
VSCode:
Version: 1.89.1
Commit: dc96b837cf6bb4af9cd736aa3af08cf8279f7685
Date: 2024-05-07T05:16:23.416Z
Electron: 28.2.8
ElectronBuildId: 27744544
Chromium: 120.0.6099.291
Node.js: 18.18.2
V8: 12.0.267.19-electron.0
OS: Linux x64 6.8.0-31-generic snap
Additional context
It happens with other models as well (like codeqwen), but it appears most often with llama3:latest (pulled directly using Ollama).
Please let me know if you need more input, or if there is another way to use llama3 correctly and I am just doing it wrong. Also, thanks for this amazing extension ❤️