
Feature Requests #23

Open
gfsysa opened this issue Mar 5, 2024 · 6 comments

Comments

@gfsysa

gfsysa commented Mar 5, 2024

  • Delete responses from a conversation, or even parts of a response. This would help improve the quality of the data being stored in the index... many conversations produce unusable outputs, misleading information, and other hallucinations, but the same conversation may also contain some useful outputs. Ideally we could just remove the undesired outputs.

  • Presets for plugins -- sometimes web search is needed, sometimes file I/O; most often it's a combination, or none. It'd be helpful if presets allowed a pre-configured set of plugins for switching between prompts. If that's a big lift, it would be great to have a plugin quick-select similar to Mode, Model, and Presets.

  • Group conversations, or tags -- Personally, I need to hide some conversations during screen shares, so it would be great to see only those conversations with a certain label color or custom taxonomy (tag or category).

Bug? -- If the default llama-index is not base, but base still exists, and you begin a chat with chat-with-files enabled, the model will not find your indexed files... You have to switch to Chat with Files mode, select the database, and switch back to Chat.

Thanks for considering this input! Thanks for your hard work even more.

@gfsysa
Author

gfsysa commented Mar 5, 2024

Another thought:

  • It would be incredible if there were a means to isolate a portion of the vector store when working with llama-index. For example, if your database has data about cars and fish, and you'll be working extensively with the cars data, it would be great to establish that the conversation will only relate to cars. I'm finding other data leaking into my conversations.

I suspect there are a few ways to approach this in the prompt, the instructions, and perhaps with the advanced indexing techniques... I'm not sure.
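One common way to frame this request (a hypothetical sketch, not pygpt's or llama-index's actual API) is metadata filtering: tag each chunk with a topic when it is indexed, then restrict retrieval to chunks whose tag matches the active conversation's topic. A minimal stdlib-only illustration of the idea:

```python
# Sketch of metadata-filtered retrieval (hypothetical; all names here are
# illustrative, not pygpt internals). Each indexed chunk carries a topic
# tag, and queries only consider chunks matching the active topic filter.

from dataclasses import dataclass
from typing import List, Optional

@dataclass
class Chunk:
    text: str
    topic: str  # metadata tag assigned at indexing time

STORE = [
    Chunk("Sedans average 30 mpg on highways.", topic="cars"),
    Chunk("Salmon migrate upstream to spawn.", topic="fish"),
    Chunk("EV battery packs degrade over time.", topic="cars"),
]

def retrieve(query: str, topic: Optional[str] = None) -> List[str]:
    """Return matching chunk texts, restricted to a topic when one is set."""
    candidates = [c for c in STORE if topic is None or c.topic == topic]
    # Real retrieval would rank by embedding similarity; a naive keyword
    # match keeps the sketch self-contained.
    words = query.lower().split()
    return [c.text for c in candidates
            if any(w in c.text.lower() for w in words)]

# With the topic filter set to "cars", fish data cannot leak into results.
print(retrieve("battery mpg", topic="cars"))
```

The same pattern, applied at the vector-store query layer, would let a conversation declare "cars only" up front instead of relying on the prompt to keep other data out.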

@gfsysa
Author

gfsysa commented Mar 5, 2024

A couple more thoughts:

  • Loop warning --
  • Token throttling or other management

Both relate to an operation in which I indexed 65 pages from a website and then issued a second prompt asking which pages had not been updated since February and requesting a draft copy update for one of the pages. I assumed these were pre-processing requests going to llama-index (index llama-index first) and that the model would return the draft I requested... Instead, I watched the system output enter a series of loops, and after about the fourth loop I realized it was repeating the same request over and over and giving the same output. I stopped the operation, but my next prompt was rejected because I had hit our rate limit... Not a big deal, and we're moving to the next tier this week, but token management is sometimes an issue.
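The loop-warning idea above could work something like this (a hypothetical sketch, not pygpt's implementation): track the last few agent outputs and abort when the model returns the same output N times in a row, instead of burning tokens until the rate limit is hit.

```python
# Hypothetical loop guard for an autonomous agent step loop (illustrative
# only; class and method names are made up, not pygpt internals).

from collections import deque

class LoopGuard:
    def __init__(self, max_repeats: int = 3):
        self.max_repeats = max_repeats
        # Keep only the last `max_repeats` outputs.
        self.recent = deque(maxlen=max_repeats)

    def check(self, output: str) -> bool:
        """Record an output; return True if the loop should be aborted."""
        self.recent.append(output)
        return (len(self.recent) == self.max_repeats
                and len(set(self.recent)) == 1)

# Usage: feed each agent output to the guard and stop on repetition.
guard = LoopGuard(max_repeats=3)
for out in ["step 1", "same answer", "same answer", "same answer"]:
    if guard.check(out):
        print("loop detected, stopping")
        break
```

A token budget per operation (counting tokens before each request and refusing to continue past a cap) would address the throttling half of the request in a similar spirit.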

@szczyglis-dev
Owner

Thank you very much for the feedback!

Several of the things you mentioned have been added in the latest version (2.1.10):

  • Added label color filter in the context list
  • Added an option to delete context items
  • Added presets for plugins
  • Fixed and improved the running of autonomous agents

Regarding the need to select an index from the list when in Chat mode:

Bug? -- If the default llama-index is not base, but base still exists and you begin a chat with chat-with-file enableed, the model will not find your indexed files... Have to switch to chat with files mode, select the database and switch back to Chat.

could you please describe it in more detail, step by step, with an example setup? Unfortunately, I can't reproduce this problem.

@oleksii-honchar

oleksii-honchar commented Mar 8, 2024

Hey Marcin,
I am really impressed with how far you have come with this project over the past year! Thank you for all your hard work. I did notice that there is currently no support for markdown in responses and posts, meaning that code and text are not formatted properly. Is there a way to enable this functionality for better readability?

This is the current app style

And this is a prettified example

@oleksii-honchar

oleksii-honchar commented Mar 8, 2024

Another useful feature could be a per-chat history reset. For example, I use the same preset/persona for general topics (e.g. "SW Dev Coach"), and I don't need to store the context of every topic or conversation. I also want to keep the list of chats clean, so I usually reset that particular chat's history (and context) and reuse it for a different topic.

Here is an example of how it works:
[screenshot]

And this is an idea of how it could look in the pygpt interface:
[screenshot]

@gfsysa
Author

gfsysa commented Mar 13, 2024

Hi -- Just want to say thank you for all of the updates and feature additions. You're an animal, and I don't know why you're so awesome.

I will be more active with the tool over the next week or so and will try to gather some more feedback.

Also, loving the other input here; these are great suggestions.

Question: do you want a new Issue created for everything so you can close the ticket, or is the thread here okay?

@gfsysa gfsysa mentioned this issue Mar 13, 2024