
### Bug Description #4646

Closed · 1 of 3 tasks
hossain666 opened this issue May 9, 2024 · 6 comments

Comments

@hossain666

Bug Description

Models like Claude impose strict constraints on message roles: the first message must come from the system or the user, and user and assistant messages must strictly alternate. These constraints expose functional errors in how NextChat builds the message queue from the attached history message count and history length settings.
① History is trimmed in units of single messages, regardless of role. Once a user message at the head of the queue is popped, the queue then starts with an assistant message. A better approach is to trim in units of dialogue turns: when the limit is exceeded, pop the whole turn at the head of the queue, i.e. discard a user/assistant QA pair together.
② max_tokens was originally the model's maximum reply length, but the program also uses it to decide whether to compress the message queue. On one hand, combined with ①, this corrupts the message queue; on the other hand, it hampers the use of long-context models.
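A minimal sketch of the turn-based trimming proposed in ①, assuming a simplified message shape (ChatMessage and trimHistoryByTurns are hypothetical names for illustration, not NextChat's actual API):

// Hypothetical message shape; NextChat's real type differs.
type Role = "system" | "user" | "assistant";
interface ChatMessage {
  role: Role;
  content: string;
}

// Trim history by whole dialogue turns instead of single messages:
// when the limit is exceeded, drop the oldest user/assistant pair
// together so the queue never starts with an assistant message.
function trimHistoryByTurns(
  messages: ChatMessage[],
  maxMessages: number,
): ChatMessage[] {
  const trimmed = [...messages];
  while (trimmed.length > maxMessages) {
    // Find the first user message (skipping any leading system prompt).
    const head = trimmed.findIndex((m) => m.role === "user");
    if (head < 0) break;
    // Remove the user message and the assistant reply that follows it,
    // i.e. discard one complete QA turn at a time.
    const next = trimmed[head + 1];
    trimmed.splice(head, next?.role === "assistant" ? 2 : 1);
  }
  return trimmed;
}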


{
  "error": {
    "message": "messages: first message must use the \"user\" role (request id: 2024040416494530153478024102122)",
    "type": "invalid_request_error",
    "param": "",
    "code": null
  }
}
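Point ② suggests decoupling max_tokens from history compression. A minimal sketch of that separation, assuming a dedicated history-token budget (historyTokenBudget and estimateTokens are illustrative names, not NextChat's actual config or tokenizer):

// Illustrative config: max_tokens only caps the model's reply;
// a separate budget governs how much history is attached.
interface ModelConfig {
  max_tokens: number;         // maximum completion length sent to the API
  historyTokenBudget: number; // independent cap for attached history
}

// Placeholder estimator; a real implementation would use the
// model's own tokenizer via a tokenizer library.
function estimateTokens(text: string): number {
  return Math.ceil(text.length / 4);
}

function shouldCompressHistory(
  history: { content: string }[],
  config: ModelConfig,
): boolean {
  const used = history.reduce((n, m) => n + estimateTokens(m.content), 0);
  // Compare against the history budget, not max_tokens, so that
  // long-context models are not throttled by the reply cap.
  return used > config.historyTokenBudget;
}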

Steps to Reproduce

The issue can be triggered when all of the following hold:

  1. A model with strict message-role requirements is called, such as the claude-3 family
  2. The conversation is long, with context exceeding 4000 tokens
  3. There are many dialogue turns, exceeding the preset attached history message count

Expected Behavior

Users should be able to successfully call a long-context model for conversations with long histories.

  • Use dialogue turns as the smallest unit of conversation memory, so the message queue never begins with an assistant message (such a message adds nothing to the conversation anyway);
  • Improve how max_tokens is used when handling the history message queue.


Screenshots


Deployment Method

  • Docker
  • Vercel
  • Server

Desktop OS

No response

Desktop Browser

No response

Desktop Browser Version

No response

Smartphone Device

No response

Smartphone OS

No response

Smartphone Browser

No response

Smartphone Browser Version

No response

Additional Logs

No response

Originally posted by @QAbot-zh in #4443

@nextchat-manager

Please follow the issue template to update title and description of your issue.

@H0llyW00dzZ
Contributor


I've been aware of this issue related to

@Dean-YZG
Contributor

This issue has been resolved; please pull the new commit.

@Dean-YZG
Contributor

You are right, thank you for your feedback; we will adopt the suggestion.

@Dean-YZG
Contributor

This is specific to Anthropic's models, so the better way to resolve this issue is to format the messages in the client-side logic for the Anthropic provider.
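A sketch of the kind of client-side normalization described above, assuming a simplified message shape (normalizeForAnthropic is a hypothetical helper, not the actual NextChat adapter): it drops leading assistant messages and merges consecutive same-role messages so the list satisfies Anthropic's constraints.

type Role = "user" | "assistant";
interface AnthropicMessage {
  role: Role;
  content: string;
}

// Normalize a message list to satisfy Anthropic's constraints:
// the first message must be from the user, and user/assistant
// roles must strictly alternate.
function normalizeForAnthropic(
  messages: AnthropicMessage[],
): AnthropicMessage[] {
  // Drop leading assistant messages so the list starts with "user".
  const start = messages.findIndex((m) => m.role === "user");
  if (start < 0) return [];
  const result: AnthropicMessage[] = [];
  for (const msg of messages.slice(start)) {
    const last = result[result.length - 1];
    if (last && last.role === msg.role) {
      // Merge consecutive same-role messages to keep strict alternation.
      last.content += "\n\n" + msg.content;
    } else {
      result.push({ ...msg });
    }
  }
  return result;
}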

@Dean-YZG
Contributor

I've figured out the second issue, thank you for your feedback.
