
Ensemble LLMs to get better results: Suggestion for new example #120

Open
yogeshhk opened this issue Jul 16, 2023 · 0 comments

Comments

@yogeshhk

Each LLM has its own strengths and its own training corpus. Rather than relying on a single LLM for a response, why not query multiple LLMs and select the best answer by ranking their responses?

Steps for any problem (a minimal sketch follows the list):

  • Build an ensemble of multiple LLMs (a simple for-loop over the models).
  • Send the given prompt to each LLM and collect every response.
  • Ask each LLM to rank the responses from all the LLMs.
  • Aggregate the ranks (e.g., by a majority or Borda-style vote) and pick the best response.
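
Here is a minimal Python sketch of these steps, assuming each model is exposed as a plain callable from prompt string to response string. The model wrappers, the ranking-prompt format, and the Borda-count aggregation are illustrative assumptions, not part of the original suggestion; a real example would plug in actual API clients and stricter output parsing.

```python
# Sketch of the proposed LLM ensemble: collect responses, cross-rank, aggregate.
# Each "model" is assumed to be a callable: prompt string -> response string.
from typing import Callable, Dict, List

LLM = Callable[[str], str]


def collect_responses(models: Dict[str, LLM], prompt: str) -> Dict[str, str]:
    """Steps 1-2: every model in the ensemble answers the same prompt."""
    return {name: model(prompt) for name, model in models.items()}


def rank_responses(models: Dict[str, LLM], prompt: str,
                   responses: Dict[str, str]) -> Dict[str, List[str]]:
    """Step 3: each model ranks all candidate responses (best first).

    The judge is asked to reply with the bracketed candidate labels,
    one per line; production code would need more robust parsing.
    """
    labels = list(responses)
    candidate_block = "\n\n".join(f"[{label}]\n{responses[label]}" for label in labels)
    ranking_prompt = (
        f"Question:\n{prompt}\n\n"
        f"Candidate answers:\n{candidate_block}\n\n"
        "Rank the candidates from best to worst. "
        "Reply with the bracketed labels only, one per line."
    )
    rankings = {}
    for judge_name, judge in models.items():
        reply = judge(ranking_prompt)
        order = [line.strip("[] ") for line in reply.splitlines() if line.strip()]
        # Keep only labels we actually know about, preserving the judge's order.
        rankings[judge_name] = [label for label in order if label in labels]
    return rankings


def aggregate(rankings: Dict[str, List[str]], labels: List[str]) -> str:
    """Step 4: aggregate the rankings with a simple Borda count."""
    scores = {label: 0 for label in labels}
    for order in rankings.values():
        for position, label in enumerate(order):
            scores[label] += len(labels) - position
    return max(scores, key=scores.get)


def ensemble_answer(models: Dict[str, LLM], prompt: str) -> str:
    responses = collect_responses(models, prompt)
    rankings = rank_responses(models, prompt, responses)
    best_label = aggregate(rankings, list(responses))
    return responses[best_label]


if __name__ == "__main__":
    # Toy usage with stub "models"; swap in real API wrappers as needed.
    models = {
        "model_a": lambda p: "Answer from model A\n[model_a]\n[model_b]",
        "model_b": lambda p: "Answer from model B\n[model_b]\n[model_a]",
    }
    print(ensemble_answer(models, "What is 2 + 2?"))
```

Borda counting is used here only because it is easy to implement; any rank-aggregation or majority scheme from the last step would fit the same structure.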