
GPT4All v2.7.4 - List of chats: Off-topic short description of the subject of a conversation #2281

Closed
SINAPSA-IC opened this issue Apr 29, 2024 · 6 comments · Fixed by #2322
Labels: bug-unconfirmed, chat (gpt4all-chat issues), good first issue

Comments


SINAPSA-IC commented Apr 29, 2024

Bug Report

Not a bug, but something strange nonetheless.

Past conversations (chats) listed on the left are each indicated by a short description, which I assume is made of words relevant to the main idea or main topic of that conversation.

Even though the user has the option to edit the text of these list items,
the defaults are sometimes strange:

In this case - in the image below - the description of a conversation
seems to have been derived from the Greek words therein.
That is wrong: the main idea/subject/topic of the conversation (if that is what the list items are meant to capture) was another subject altogether, not the language of portions of the prompt and reply.
As such, the text of the list item and the conversation are unrelated, or wrongly connected.

Suggestions:

  1. remove this feature until it is properly implemented - if it is worth implementing at all
  2. otherwise, derive the description deterministically (a sketch follows this list):
    2.1) if the reply contains a section such as "In summary", summarize that section and use it
    2.2) if not, use the prompt (the one thing in the conversation that is 100% certain - what the user wanted), preceded by an indicator such as "About: "
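A minimal sketch of rules 2.1/2.2, assuming a simple word cap; derive_chat_title and MAX_TITLE_WORDS are hypothetical names for illustration, not anything in the GPT4All codebase:

```python
MAX_TITLE_WORDS = 5  # assumed cap so list items stay short

def derive_chat_title(prompt: str, reply: str) -> str:
    """Derive a chat-list title without asking the LLM a second time."""
    # 2.1) If the reply has an "In summary" section, take its first words.
    marker = "In summary"
    idx = reply.find(marker)
    if idx != -1:
        summary = reply[idx + len(marker):].lstrip(" ,:")
        return " ".join(summary.split()[:MAX_TITLE_WORDS])
    # 2.2) Otherwise fall back to the prompt, which always exists,
    #      prefixed with an indicator.
    return "About: " + " ".join(prompt.split()[:MAX_TITLE_WORDS])

print(derive_chat_title("Plan a trip to Athens in May",
                        "Here is an itinerary..."))
# -> "About: Plan a trip to Athens"
```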

Steps to Reproduce

Compare the short summaries shown in the chat list with the main topics of the respective conversations; the summaries may not properly describe them.

Expected Behavior

If so desired by the developers - the user may have the option to refuse this - the displayed short summary (main topic, idea) of a chat should be truthful to the contents of that chat.

Your Environment

  • GPT4All v2.7.4
  • Windows 10, updated as of 2024.04.29
  • Chat model used (if applicable): any

Thank you for considering this.

@SINAPSA-IC (Author)

[Image: gpt4all_i1 - screenshot of the chat list with the off-topic description]

@AndriyMulyar (Contributor)

What are the current prompt and input being used to produce the short summary for the conversation name, @cebtenzzre?

@cebtenzzre (Member)

We append the following hard-coded prompt to the conversation in order to generate the name, which is inappropriate for many of the models we currently support, as they use different templates:

```
### Instruction:
Describe response above in three words.
### Response:
```

I'm not really sure why this doesn't use the current prompt template instead.
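For illustration, a sketch of what using the model's own template could look like, following the Hugging Face apply_chat_template convention; the model id is real, but whether gpt4all-chat's C++ code can reuse its template this way is an assumption:

```python
# Render the naming request with the model's own chat template (ChatML for
# Nous Hermes 2) instead of the hardcoded Alpaca-style block above.
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained(
    "NousResearch/Nous-Hermes-2-Mistral-7B-DPO")

conversation = [
    {"role": "user", "content": "Plan a trip to Athens."},
    {"role": "assistant", "content": "Day 1: Acropolis. Day 2: ..."},
]
# Append the naming request as an ordinary user turn.
conversation.append(
    {"role": "user", "content": "Describe the response above in three words."})

# Produces a prompt in the format the model was actually trained on.
prompt = tokenizer.apply_chat_template(
    conversation, tokenize=False, add_generation_prompt=True)
print(prompt)
```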

cebtenzzre added the chat (gpt4all-chat issues) label on May 1, 2024

SINAPSA-IC commented May 3, 2024

Hello.
I've just seen a chat summarized as "Advice: Prepare a".

Suggestion:
exclude from such summaries, when they occur at the end of the summary (a sketch follows this list):

  • 1-letter sequences, which are generally of no use in that position
  • even 2- or 3-letter sequences, if they are not written in capital letters - capitals would hint at abbreviations like "UK" and "UFO"
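A sketch of that trailing-fragment filter; the 3-letter threshold and the all-caps exception are assumptions for illustration, not GPT4All code:

```python
def trim_dangling_words(title: str) -> str:
    """Drop truncated trailing words like the "a" in "Advice: Prepare a"."""
    words = title.split()
    while words:
        last = words[-1].strip(".,:;")
        # Always drop 1-letter tails; drop 2-3 letter tails unless they are
        # fully capitalized and so may be abbreviations ("UK", "UFO").
        if len(last) == 1 or (len(last) <= 3 and not last.isupper()):
            words.pop()
        else:
            break
    return " ".join(words)

print(trim_dangling_words("Advice: Prepare a"))  # -> "Advice: Prepare"
```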


SINAPSA-IC commented May 5, 2024

I've also seen the summary of a chat in a language that looks different from that of the dialogue:
"Planification de la"
which looks like Spanish, although it is not - the English "planification" would be "planificacion" in Spanish -
while the language of the chat was English, with the LLM Nous Hermes 2 Mistral DPO.

I cannot think of an English sentence that would contain, in a similar context (planification of something, planning something), the words "planification de la" in this order.
Weird stuff. Not Earth-shattering, but still.

Also, the three-word rule for the summary might not always be followed, because I've also seen this summary:
"Plan-Generator-Output: In this"

@cebtenzzre (Member)

> Suggestion: exclude from such summaries, when they occur at the end of the summary:

The most obvious solution to me is to prompt the LLM correctly in the first place :)
Otherwise, there will be garbage output no matter what we do. The LLM was simply not trained on the format that we have hardcoded.
