Part of our final-year project on complex NLP tasks, with experiments across multiple datasets and different LLMs

dinesh-kumar-mr/MediVQA

This work investigates improving Medical Visual Question Answering (VQA) with a LoRA-optimized BLIP model. We apply Low-Rank Adaptation (LoRA) to the BLIP architecture to obtain a more effective and resource-efficient method for medical image analysis, and we evaluate the approach on a specialized combination of medical VQA datasets. The results show notable accuracy gains, especially on closed-type (yes/no) questions, highlighting the potential of LoRA-enhanced BLIP models to advance AI-driven healthcare and medical diagnostics. This work lays the groundwork for future research by presenting an approach that connects state-of-the-art AI techniques with essential medical applications.
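To illustrate why LoRA makes fine-tuning resource-efficient, here is a minimal NumPy sketch of the LoRA update rule itself (not this project's training code): a frozen weight matrix `W` is augmented with a trainable low-rank product `B @ A`, so only a small fraction of parameters needs gradients. The dimensions and scaling factor below are illustrative assumptions, not values from our experiments.

```python
import numpy as np

# LoRA: instead of updating a full weight matrix W (d_out x d_in),
# train two small matrices A (r x d_in) and B (d_out x r) with
# rank r << min(d_out, d_in). The adapted layer computes
#   y = W @ x + (alpha / r) * B @ A @ x
# so only r * (d_in + d_out) parameters are trainable per layer.

rng = np.random.default_rng(0)
d_out, d_in, r, alpha = 768, 768, 8, 16   # illustrative sizes

W = rng.normal(size=(d_out, d_in))        # frozen pretrained weight
A = rng.normal(size=(r, d_in)) * 0.01     # trainable, small random init
B = np.zeros((d_out, r))                  # trainable, zero init

def lora_forward(x):
    return W @ x + (alpha / r) * (B @ (A @ x))

x = rng.normal(size=d_in)
# With B initialized to zero, the adapted output equals the frozen
# model's output, so training starts from the pretrained behavior.
assert np.allclose(lora_forward(x), W @ x)

full_params = d_out * d_in
lora_params = r * (d_in + d_out)
print(f"trainable: {lora_params} vs full fine-tune: {full_params}")
```

In practice the same idea is applied to the attention projection matrices of BLIP's transformer layers (e.g. via a library such as Hugging Face PEFT), leaving the vast majority of the pretrained model frozen.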
