
RemingoatGPT - AI-powered chatbot for TheRemingoat switch reviews 🐐🤖

https://remingoat-gpt.vercel.app/

Motivation

It's no surprise that TheRemingoat switch reviews, coming in at thousands of words each, are the most detailed switch reviews available. However, this verbosity has one major downside: any given review becomes unwieldy to read, let alone the multiple reviews you need when comparing many switches.

[Chart: time to read a TheRemingoat switch review]

The obvious alternative is not to read the reviews and simply browse the provided score sheets. This shortcut is more time-efficient but ultimately lacks nuance.

[Image: TheRemingoat score sheet]

The scores simply reflect TheRemingoat's own preferences, which likely differ from yours. In other words, without context, scores are meaningless.

RemingoatGPT seeks to mitigate the downsides of both long-winded reviews and reductionist score sheets. An AI-powered Q&A chatbot that lets you query detailed data as needed could be the perfect synthesis of detail and efficiency—simply ask and read what's relevant to you.

We hope RemingoatGPT facilitates an easier and more successful switch discovery and research process!

Development

  1. Clone the repo
git clone https://github.com/k-milktooth/remingoat-gpt.git
  2. Install packages
npm i
  3. Set up your .env file
  • Copy .env.local.example into .env
  • Fill in API keys
    • Visit OpenAI to retrieve your OpenAI API key.
    • Visit Pinecone to create an index. Then, from the dashboard, retrieve the corresponding Pinecone API key, environment, and index name.
      • We've prefilled the Pinecone namespace environment variable (PINECONE_NAMESPACE="remingoat-gpt"), but feel free to change it to your liking.
  4. Ingest data
  • Run npm run ingest
    • First, we scrape theremingoat.com for switch reviews
    • Then, we create embeddings and save them to our Pinecone index
  5. Run the app
npm run dev

Open http://localhost:3000 in your browser to see the result.
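For step 3, the finished .env file might look like the sketch below. Only PINECONE_NAMESPACE is confirmed by this README; the other variable names are assumptions, so defer to .env.local.example for the authoritative list.

```shell
# Hypothetical .env contents — variable names other than PINECONE_NAMESPACE are assumed
OPENAI_API_KEY="sk-..."              # from the OpenAI dashboard
PINECONE_API_KEY="..."               # from the Pinecone dashboard
PINECONE_ENVIRONMENT="..."           # e.g. the region shown for your index
PINECONE_INDEX_NAME="..."            # the index you created
PINECONE_NAMESPACE="remingoat-gpt"   # prefilled; change to your liking
```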
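The embedding stage of step 4 typically splits each long review into overlapping chunks before embedding, so every piece fits in the embedding model's context window. A minimal sketch of that chunking idea (function name and parameters are hypothetical, not the repo's actual code):

```typescript
// Split a long review into fixed-size chunks with some overlap between
// neighbors, so sentence-level context isn't lost at chunk boundaries.
function chunkText(text: string, chunkSize = 1000, overlap = 200): string[] {
  const chunks: string[] = [];
  let start = 0;
  while (start < text.length) {
    chunks.push(text.slice(start, start + chunkSize));
    if (start + chunkSize >= text.length) break;
    start += chunkSize - overlap; // step forward, keeping `overlap` chars of context
  }
  return chunks;
}

// Each chunk would then be embedded (e.g. via OpenAI's embeddings API)
// and upserted into the Pinecone index under the configured namespace.
const demo = chunkText("abcdefghij", 4, 2);
console.log(demo); // ["abcd", "cdef", "efgh", "ghij"]
```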

Roadmap

  • Get streamed responses working in prod
    • The current solution streams responses locally but not when deployed to serverless environments such as Vercel or Netlify
    • Might require moving to /app dir, see here
  • Option to bring your own OpenAI API key

Support

If you've found the site useful, please consider a donation to help cover the cost of OpenAI API calls!

[Image: Stripe donation QR code]

Or, click here