
Implement adversarial prompting. #131

Open
davidbrodrick wants to merge 1 commit into main

Conversation

davidbrodrick

This adds a method adversarial_query.

An adversarial query first generates an answer, then asks the LLM to find problems with that answer, and finally generates a revised response that addresses those shortcomings.

In general this GREATLY increases the quality of the answers.

This new method returns a list containing the original answer, the adversarial critique and the final answer.
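
For concreteness, here is a minimal sketch of that three-step flow (not the PR's actual implementation); `query_llm` is a hypothetical stand-in for however paper-qa sends a prompt plus its retrieved context to the LLM:

```python
def adversarial_query(question: str, query_llm) -> list[str]:
    """Sketch of the generate -> critique -> revise loop described in this PR."""
    # Step 1: generate an initial answer as usual.
    original = query_llm(question)

    # Step 2: ask the LLM to critique its own answer.
    critique = query_llm(
        f"Question: {question}\n"
        f"Answer: {original}\n"
        "List the factual errors, omissions, and weaknesses in this answer."
    )

    # Step 3: regenerate the answer so that it addresses the critique.
    final = query_llm(
        f"Question: {question}\n"
        f"Draft answer: {original}\n"
        f"Critique of the draft: {critique}\n"
        "Write an improved answer that addresses the critique."
    )

    # Return all three pieces, as described above.
    return [original, critique, final]
```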

Also, the current master didn't work for me at all. I eventually discovered that adoc_match (the LangChain chain, I assume) was returning "None." as a string, which caused all of the subsequent logic to fail! I've added a check for this.
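
Roughly the kind of guard meant here (a sketch, not the PR's actual code; the comma-separated key format is an assumption):

```python
def parse_doc_keys(raw):
    """Parse the key-filter chain's reply, treating a literal "None."
    (or empty output) as "no matching documents" rather than a key list."""
    if raw is None or raw.strip().rstrip(".").lower() == "none":
        return []
    return [key.strip() for key in raw.split(",") if key.strip()]
```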

@whitead
Owner

whitead commented Jun 4, 2023

Cool idea!

I've been looking at making the prompts customizable, and your PR definitely pushes further in that direction. I think the key filter problem is fixed in the latest release; good catch. Thanks.

So, let me get back to you on the PR while I work out how prompts should be customizable.

@whitead
Owner

whitead commented Jun 14, 2023

Hey @davidbrodrick, the refactor is done (https://github.com/whitead/paper-qa#customizing-prompts) and I've added custom prompts at pre/post + memory. Do you think your adversarial idea can be done via this new system?

@davidbrodrick
Author

Looks great; this is closer to what I had in mind initially.

I would suggest that adversarial prompting is useful enough to be worth wrapping in a standard method shipped with paper-qa, built on this new prompt templating framework. That said, I appreciate you might feel that, now the hooks are in place, the optimal sequence of prompts and contexts is something for the user to figure out and implement.

I'm very happy to update my pull request against the latest release if you're on board with adding this method to paper-qa.

A colleague has recently pre-published a nice paper on the benefits of adversarial prompting in the context of astronomy. I'll share the link once I get it.

@davidbrodrick
Author

Oh, BTW, when I tried merging your changes around key_filter a week or so ago I still had issues with no documents being returned, but I'll test again with your latest release.

I've actually disabled the doc_match functionality in my operational version because, while it optimises for cost, it seems to restrict the diversity of the context.

Just FYI the project I've written around paper-qa is here: https://github.com/davidbrodrick/virtualpi
