
This is a LlamaIndex project using Next.js, bootstrapped with create-llama.

Prerequisites

This demo showcases the Llama 3 model running on Replicate. To get started, you'll need a REPLICATE_API_TOKEN from https://replicate.com/account/api-tokens

OpenAI embedding models are used to compute embeddings. To use them, retrieve an OPENAI_API_KEY from https://platform.openai.com/api-keys

After retrieving these tokens, set both as environment variables or add them to a .env file - then you're ready to start!
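As a minimal sketch, a .env file at the project root could look like the following; the values shown are placeholders, not real tokens - substitute the keys you retrieved from the pages above:

```shell
# .env - placeholder values, replace with your actual tokens
REPLICATE_TOKEN_PLACEHOLDER="your-replicate-token"
OPENAI_KEY_PLACEHOLDER="your-openai-key"
REPLICATE_API_TOKEN="$REPLICATE_TOKEN_PLACEHOLDER"
OPENAI_API_KEY="$OPENAI_KEY_PLACEHOLDER"
```

Alternatively, export the same two variables in your shell (`export REPLICATE_API_TOKEN=...`) before running the commands below.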

Getting Started

First, install the dependencies:

npm install

Second, generate the embeddings for the documents in the ./data directory (if that folder exists; otherwise, skip this step):

npm run generate

Third, run the development server:

npm run dev

Open http://localhost:3000 with your browser to see the result.

You can start editing the page by modifying app/page.tsx. The page auto-updates as you edit the file.

This project uses next/font to automatically optimize and load Inter, a custom Google Font.

Learn More

To learn more about LlamaIndex, check out the LlamaIndexTS GitHub repository - your feedback and contributions are welcome!