
Deduction and Accuracy capability of knowledge database #255

Open
dezhouc2 opened this issue Apr 17, 2024 · 2 comments

Comments

@dezhouc2

Describe the solution you'd like

Exploring prompting strategies to enhance deduction and accuracy in knowledge databases: a knowledge database primarily works by extracting relevant information through indexing and querying, which generally yields weaker deduction and lower accuracy than a large language model (LLM) reasoning on its own. I am considering several prompting strategies to improve these aspects, including Chain-of-Thought, Few-Shot Chain-of-Thought, and Zero-Shot Chain-of-Thought, all of which approach problem-solving systematically, step by step. However, I am uncertain whether these strategies will be effective in the context of knowledge databases.
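As a rough illustration of what I mean, a zero-shot Chain-of-Thought instruction could be layered on top of the retrieved passages before the model is called. This is only a minimal sketch; the toy passages and the prompt wording are my own, and the retrieval and model-invocation steps are left abstract so they can be swapped for whatever the knowledge-database stack actually provides.

```python
# Minimal sketch of zero-shot Chain-of-Thought layered on top of retrieval.
# Retrieval and model invocation are intentionally left abstract; plug in
# whatever the knowledge-database stack actually provides.

def build_zero_shot_cot_prompt(question: str, chunks: list[str]) -> str:
    """Wrap retrieved passages in a prompt that asks the model to reason step by step."""
    context = "\n\n".join(f"[{i + 1}] {chunk}" for i, chunk in enumerate(chunks))
    return (
        "Use only the passages below to answer the question.\n\n"
        f"Passages:\n{context}\n\n"
        f"Question: {question}\n"
        # Zero-shot CoT trigger: ask for explicit reasoning before the answer.
        "Let's think step by step, citing passage numbers for each step, "
        "then give the final answer on a new line starting with 'Answer:'."
    )

if __name__ == "__main__":
    # Toy passages standing in for real retrieval results.
    chunks = [
        "The primary database runs in us-east-1.",
        "A read replica is maintained in eu-west-1 for reporting.",
    ]
    prompt = build_zero_shot_cot_prompt(
        "Which region hosts the primary database?", chunks
    )
    print(prompt)  # send this prompt to the LLM of your choice
```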

Why the solution needed

To provide a better user experience and to support projects built on the knowledge database.

@jeremylatorre
Contributor

Maybe the integration of bedrock agents will be the answer to this question?

@dezhouc2
Author

> Maybe the integration of bedrock agents will be the answer to this question?

Thanks for your reply. I actually implemented Chain-of-Thought in this case, and it does improve accuracy and deduction capabilities. Beyond the techniques I mentioned above, what else can I do?
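For reference, the few-shot variant mentioned above would just prepend a worked example whose reasoning style the model should imitate. A minimal sketch (the example question, reasoning, and answer below are invented placeholders, not real project data):

```python
# Minimal sketch of few-shot Chain-of-Thought: prepend a worked example whose
# reasoning style the model should imitate. The example text is an invented
# placeholder, not real project data.

FEW_SHOT_EXAMPLE = (
    "Question: Which region hosts the primary database?\n"
    "Reasoning: Passage [1] says the primary runs in us-east-1; passage [2] "
    "only describes a read replica. So the primary is in us-east-1.\n"
    "Answer: us-east-1\n"
)

def build_few_shot_cot_prompt(question: str, context: str) -> str:
    """Prefix the query with a worked example so the model mimics its reasoning pattern."""
    return (
        "Answer the question from the passages, following the reasoning style shown.\n\n"
        f"{FEW_SHOT_EXAMPLE}\n"
        f"Passages:\n{context}\n\n"
        f"Question: {question}\n"
        "Reasoning:"
    )
```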
