This project provides a user-friendly interface to generate bug reports using a large language model (LLM). Users can describe the steps to reproduce an issue and the expected results, and the LLM will craft a detailed bug report incorporating these details.
This project requires the following Python libraries:
- python-dotenv
- streamlit
- langchain-google-genai
- pinecone-client
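These dependencies can be installed together from a `requirements.txt` file listing them, for example:

```
python-dotenv
streamlit
langchain-google-genai
pinecone-client
```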
The project consists of four main Python scripts:
- chatbot.py: This script handles the Streamlit user interface and interacts with other scripts to process user input and generate bug reports.
- generate_bug_report.py: This script defines the core logic for processing user input, querying Pinecone for similar issues, and using the LLM to create the bug report.
- closest_sample_finder.py: This script handles vectorizing user input for searching similar issues within Pinecone.
- upsert_csv_to_pinecone.py: This script handles vectorizing source data and upserting it into the Pinecone database.
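As a sketch of the upsert side of this design, the payload-building step can be kept separate from the API calls. The helper below is hypothetical (not the project's actual code): `embed` stands in for `GoogleGenerativeAIEmbeddings(...).embed_query`, and the CSV column names are assumptions based on the data fields described later in this README.

```python
def rows_to_vectors(rows, embed):
    """Turn CSV rows into Pinecone upsert payloads.

    `embed` is any callable mapping text -> list of floats; in the real
    scripts this would be the Google embedding model. The column names
    ("title", "description") are assumptions about the source CSV.
    """
    vectors = []
    for i, row in enumerate(rows):
        text = f"{row['title']}\n{row['description']}"
        vectors.append({
            "id": f"bug-{i}",        # stable per-row id
            "values": embed(text),   # the embedding vector
            "metadata": dict(row),   # keep raw fields for retrieval
        })
    return vectors
```

The resulting list would then be passed to `index.upsert(vectors=...)` on a Pinecone index object.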
- User Input: Users describe the steps to reproduce a bug and the expected results through a Streamlit interface.
- Vectorization and Search: The user input is vectorized and used to query Pinecone for similar bug reports (if available).
- LLM Prompt Generation: Based on user input and potentially retrieved similar issues, a prompt is crafted to guide the LLM in generating the bug report.
- Bug Report Generation: The LLM utilizes the provided prompt to generate a comprehensive bug report describing the issue and potential solutions.
- Additional Information: The generated report includes a template for users to add details about their test environment and any relevant screenshots or recordings.
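The prompt-generation step in the flow above can be sketched as a pure function that merges the user's input with any retrieved issues. This is a hypothetical illustration; the actual prompt wording in generate_bug_report.py may differ.

```python
def build_prompt(steps, expected, similar):
    """Assemble an LLM prompt from user input and retrieved reports.

    `similar` is a list of dicts with "title" and "description" keys,
    as might come back from a Pinecone metadata query (assumed shape).
    """
    context = "\n".join(
        f"- {r.get('title', 'untitled')}: {r.get('description', '')}"
        for r in similar
    ) or "No similar issues found."
    return (
        "Write a detailed bug report with a title, description, "
        "steps to reproduce, and expected vs. actual results.\n\n"
        f"Steps to reproduce:\n{steps}\n\n"
        f"Expected result:\n{expected}\n\n"
        f"Similar past issues:\n{context}"
    )
```

The returned string would then be sent to the chat model (e.g. a `ChatGoogleGenerativeAI` instance from langchain-google-genai) to produce the final report.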
- Install the required libraries using `pip install -r requirements.txt`.
- Set up your Google and Pinecone accounts and obtain the necessary API keys.
- Create a `.env` file in the project directory and store your API keys in it, following the example in `.env.example`.
- Ensure you have a Pinecone index created and populated with relevant bug report data (title, description, steps to reproduce, expected results) with appropriate metadata fields.
- Run the Streamlit app using `streamlit run src/chatbot.py`.
- To run with production-mode options, use the command below:
  `streamlit run src/chatbot.py --client.toolbarMode=minimal`
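The `.env` file mentioned in the setup steps might look like the following. The variable names here are assumptions based on the standard environment variables these SDKs read; confirm the exact names against `.env.example`.

```shell
# Assumed variable names; check .env.example for the exact ones.
GOOGLE_API_KEY=your-google-api-key      # read by langchain-google-genai
PINECONE_API_KEY=your-pinecone-api-key  # read by pinecone-client
```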
- This is a basic example, and the Pinecone integration can be further customized to match your specific data schema and indexing needs.
- Consider error handling and user feedback mechanisms for a more robust user experience.