Replies: 2 comments
-
Hey @asmith26! Great to see you back with another intriguing question 🌟. How's the quest for knowledge treating you? 🚀 Yes, it is possible to create an Extractive Question & Answering pipeline in LlamaIndex where the answer is directly highlighted in the source or context. The key components involved in creating such a pipeline include query processing, document retrieval, and response synthesis.
To adapt this pipeline for extractive Q&A where answers are highlighted in the source, you would need to modify the response synthesizer component. Also, ensure the query processing and document retrieval steps are fine-tuned to support extractive Q&A. This might involve using embeddings or models that are specifically trained for extractive tasks. The existing code structure supports the integration of these components and modifications. However, specific implementation details for highlighting answers within the source context are not provided, so you would need to implement or extend the response synthesizer logic to include highlighting functionality based on your requirements. I hope this helps! If you have any more questions or need further clarification, feel free to ask.
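Since the highlighting logic isn't specified anywhere above, here is a minimal sketch of what such a post-processing step could look like. All names here (`highlight_span`, `extractive_answer`, `qa_fn`) are hypothetical, and the extractive model itself is abstracted behind a callable that returns character offsets:

```python
from typing import Callable, Dict, List

def highlight_span(context: str, start: int, end: int, marker: str = "**") -> str:
    """Wrap the answer's character span in highlight markers."""
    return context[:start] + marker + context[start:end] + marker + context[end:]

def extractive_answer(
    question: str,
    chunks: List[str],
    qa_fn: Callable[[str, str], Dict],
) -> Dict:
    """Run an extractive QA model over each retrieved chunk and keep the
    best-scoring span. qa_fn(question, context) is assumed to return a dict
    with "answer", "score", "start" and "end" keys (the shape the Hugging Face
    question-answering pipeline produces)."""
    best = None
    for chunk in chunks:
        result = qa_fn(question, chunk)
        if best is None or result["score"] > best["score"]:
            best = {**result, "context": chunk}
    best["highlighted"] = highlight_span(best["context"], best["start"], best["end"])
    return best
```

You could call something like this on the retrieved nodes before (or instead of) the LLM-based synthesis step, returning the highlighted chunk as part of the response.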
-
Ok, I think this feature is more about using an extractive question answering model, e.g. https://huggingface.co/tasks/question-answering
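Right, the models on that task page return character offsets along with the answer, which is exactly what highlighting needs. A minimal sketch with the `transformers` library (the model name is just one commonly used SQuAD-style checkpoint, an assumption on my part; any extractive QA model from the Hub behaves the same way):

```python
from transformers import pipeline

# deepset/roberta-base-squad2 is one popular extractive QA checkpoint;
# any SQuAD-style model from the question-answering task page works similarly.
qa = pipeline("question-answering", model="deepset/roberta-base-squad2")

context = (
    "LlamaIndex retrieves the most relevant chunks, and an extractive "
    "question-answering model then selects an exact answer span."
)
result = qa(question="What does the extractive model select?", context=context)

# The result includes character offsets into the context, so the answer
# span can be highlighted in place.
start, end = result["start"], result["end"]
print(context[:start] + "[" + context[start:end] + "]" + context[end:])
```

The retrieved LlamaIndex nodes would serve as the `context` here, one call per node, keeping the highest-scoring span.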
-
Hi, is it possible to create an Extractive Question & Answering pipeline, where the answer is highlighted directly in the source/context (like in the image)? Thanks!
(image from: https://github.com/deepset-ai/haystack-demos/tree/main/healthcare)