
Code snippets in the chat window lose syntax highlighting occasionally #244

Open
nicikiefer opened this issue May 10, 2024 · 0 comments
Labels
question Further information is requested

Comments

nicikiefer commented May 10, 2024

Describe the bug
The code snippets shown in the chat window occasionally lose their syntax highlighting and appear in plain white. Both the input code and the generated output are affected; sometimes both appear without highlighting. One thing I observed is that the syntax highlighting is sometimes applied correctly right up to the end of code generation, but once generation finishes, the highlighting is lost again.

I am not sure what is causing this. I don't use any special themes, and since the snippets are still formatted as code blocks, I assume only the color coding of the syntax highlighting goes missing in the last step.
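
I don't know the extension internals, so take this as a guess, but here is a minimal sketch of the failure mode I suspect, assuming the chat window highlights fenced code with a highlight.js-style API (the function name below is hypothetical, not twinny's actual code):

```ts
import hljs from "highlight.js";

// Hypothetical renderer: if the fence's language tag is known, the
// block gets token colors; if the tag is lost (e.g. when the completed
// message is re-parsed on the final render pass), the block silently
// falls back to plain, uncolored text.
function renderCodeBlock(code: string, language?: string): string {
  if (language && hljs.getLanguage(language)) {
    return hljs.highlight(code, { language }).value;
  }
  return code; // rendered in plain white, as in the screenshots below
}
```

Something along those lines would match what I see: highlighting during streaming, then plain white after the final render.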

To Reproduce

  1. Use Ollama as your provider
  2. Use llama3 as your chat model
  3. Mark code in your editor and have it fixed, explained, or refactored by twinny
  4. Either the selected code or the generated code is missing syntax highlighting and appears in plain white (see screenshots below)

Expected behavior
Syntax highlighting is correctly applied both to the input code and to the generated code.

Screenshots
[Screenshot: Bildschirmfoto vom 2024-05-10 16-00-01]

[Screenshot: grafik]

Edit: the first screenshot shows a different model than llama3 because I had moved over to codeqwen after encountering the syntax highlighting issue. But to be clear, I did use llama3 when I encountered the issue.

Logging
Enable logging in the extension settings if not already enabled (you may need to restart VSCode if you don't see logs). Provide the log with the report.

API Provider
Ollama

Chat or Auto Complete?
chat

Model Name
llama3:latest

Desktop (please complete the following information):

  • OS: Ubuntu 24.04
  • VSCode (see metadata attached below)
  • Twinny Version: v3.11.35

VSCode:

  • Version: 1.89.1
  • Commit: dc96b837cf6bb4af9cd736aa3af08cf8279f7685
  • Date: 2024-05-07T05:16:23.416Z
  • Electron: 28.2.8
  • ElectronBuildId: 27744544
  • Chromium: 120.0.6099.291
  • Node.js: 18.18.2
  • V8: 12.0.267.19-electron.0
  • OS: Linux x64 6.8.0-31-generic snap

Additional context
It happens with other models as well (like codeqwen), but it appears most often with llama3:latest (pulled directly via Ollama).

Please let me know if you need more input or if there are other ways to successfully use llama3 and I am just doing it wrong. Also thanks for this amazing extension ❤️

@rjmacarthy added the question (Further information is requested) label on May 15, 2024