Issues: lm-sys/FastChat
- Feature: Add OpenAI Usage stats when using streaming with the Chat Completions API or Completions API (#3360, opened May 23, 2024 by douxiaofeng99)
- [Bug] Single quote character (') tripping up model generation and streaming (#3358, opened May 22, 2024 by PyroGenesis)
- Unable to use streaming with the /v1/embeddings API for the CodeQwen1.5-7B model (#3357, opened May 22, 2024 by fangx1129)
- Error: Type object 'Dropdown' has no attribute 'update' in qa_browser.py (#3347, opened May 17, 2024 by tanliboy)
- How is the 'top_k' parameter handled when serving through openai_api_server? (#3316, opened May 7, 2024 by garyyang85)
- Merged model from Huggingface runs fine with the FastChat CLI but not when using a service worker (#3315, opened May 7, 2024 by heli-sdsu)
- Leaderboard mentions GPT-3.5-Turbo-0314 but I believe it is instead GPT-3.5-Turbo-0301 (#3311, opened May 5, 2024 by Franck-Dernoncourt)
- fastchat.serve.openai_api_server doesn't work with the stream=true parameter (#3307, opened May 3, 2024 by richginsberg)
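Issue #3360 above asks for OpenAI-style usage stats in streaming responses. As background, a minimal sketch of how a client might consume such a stream, assuming OpenAI-style chunk dicts where content arrives as `delta` fragments and a final chunk carries a `usage` object; the sample chunks here are hypothetical stand-ins for the server's SSE payloads:

```python
def accumulate_stream(chunks):
    """Collect streamed content deltas and the trailing usage stats."""
    text_parts = []
    usage = None
    for chunk in chunks:
        for choice in chunk.get("choices", []):
            delta = choice.get("delta", {})
            if delta.get("content") is not None:
                text_parts.append(delta["content"])
        # Assumed shape: usage arrives on a final chunk with empty choices
        if chunk.get("usage"):
            usage = chunk["usage"]
    return "".join(text_parts), usage

# Hypothetical stream payloads, already decoded from SSE into dicts
sample = [
    {"choices": [{"delta": {"content": "Hello"}}]},
    {"choices": [{"delta": {"content": ", world"}}]},
    {"choices": [], "usage": {"prompt_tokens": 5,
                              "completion_tokens": 2,
                              "total_tokens": 7}},
]
text, usage = accumulate_stream(sample)
print(text)                   # Hello, world
print(usage["total_tokens"])  # 7
```

A server implementing the feature request would append the usage chunk after the last content delta, so existing clients that ignore chunks with empty `choices` keep working.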