
Simple-StableLM-Chat

Built around Stability AI's StableLM language models

StableLM is an open-source language model developed by Stability AI, trained on a new dataset built on The Pile that contains 1.5 trillion tokens. This large training corpus allows StableLM to generate nuanced and accurate responses to a wide variety of inputs.

Simple-StableLM-Chat is a Python application that interfaces with the model and generates text based on the user's input.
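As a rough illustration, interfacing with a StableLM model through the Hugging Face transformers library might look like the sketch below. The checkpoint name, prompt markers, and generation settings are assumptions based on Stability AI's published stablelm-tuned-alpha checkpoints; the repository's actual code may differ.

```python
# Minimal sketch of loading StableLM and generating a reply.
# Not the repository's actual implementation.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL_NAME = "stabilityai/stablelm-tuned-alpha-3b"  # assumed checkpoint

device = "cuda" if torch.cuda.is_available() else "cpu"
dtype = torch.float16 if device == "cuda" else torch.float32

tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
model = AutoModelForCausalLM.from_pretrained(MODEL_NAME, torch_dtype=dtype).to(device)

def generate_reply(user_input: str) -> str:
    # The tuned StableLM checkpoints are trained with <|USER|>/<|ASSISTANT|>
    # turn markers; an untuned base model would take a plain prompt instead.
    prompt = f"<|USER|>{user_input}<|ASSISTANT|>"
    inputs = tokenizer(prompt, return_tensors="pt").to(device)
    output = model.generate(
        **inputs,
        max_new_tokens=128,
        do_sample=True,
        temperature=0.7,
        pad_token_id=tokenizer.eos_token_id,
    )
    # Decode only the newly generated tokens, not the prompt.
    reply_ids = output[0][inputs["input_ids"].shape[1]:]
    return tokenizer.decode(reply_ids, skip_special_tokens=True)
```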

Purpose

  1. Showcase the capabilities of StableLM by building a simple chatbot that can engage in conversations with users.
  2. Provide a starting point for developers to make their own applications using this technology.

With StableLM as its foundation, the chatbot is able to generate responses that are highly context-sensitive and demonstrate a remarkable level of understanding of natural language.

To try out the chatbot, simply follow the instructions provided in the repository. There is a runme.bat file included to download and install Miniconda and set up the environment. The runme also launches the interactive chat prompt, which runs in a command-line window. It's not a fancy web app.

You'll be able to engage in conversations with the chatbot and witness firsthand the impressive capabilities of StableLM. We hope that this project serves as a starting point for others who are interested in exploring the power of StableLM and the potential of AI chatbots.
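Under the hood, an interactive command-line prompt like this is essentially a read-generate-print loop. Here is a minimal sketch, reusing the hypothetical generate_reply() helper from the example above; the repository's actual loop may differ.

```python
# Minimal command-line chat loop; assumes the generate_reply() helper
# defined in the earlier sketch.
def chat() -> None:
    print("Simple-StableLM-Chat (type 'quit' to exit)")
    while True:
        user_input = input("You: ").strip()
        if not user_input:
            continue
        if user_input.lower() in {"quit", "exit"}:
            break
        print("Bot:", generate_reply(user_input))

if __name__ == "__main__":
    chat()
```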

This is a very simple Python app that you can use to get up and running with Stability AI's recently released StableLM models locally, on your own home computer; once everything is downloaded, it doesn't need to be connected to the internet.

Requirements

- Windows 10
- 16 GB system RAM
- 40 GB free disk space

You must have more than 16 GB of VRAM to load the 7B-parameter model onto the GPU. Otherwise, the 3B-parameter model will be loaded onto your CPU (see the sketch below).

Confirmed to work with an NVIDIA Tesla M40 with 24 GB of VRAM.

If you only have 16 GB of RAM and no GPU, performance will be very poor. The app will attempt to load the 3B-parameter model, which does fit, but there is no memory left over for anything else, so the system starts swapping to disk.
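A sketch of the model-size selection described above: load the 7B checkpoint onto the GPU only when more than 16 GB of VRAM is available, and otherwise fall back to the 3B checkpoint on the CPU. The checkpoint names are assumptions; adjust them to whatever the repository actually downloads.

```python
# Hypothetical model-size selection; checkpoint names are assumptions.
import torch

def pick_model() -> tuple[str, str]:
    """Return (checkpoint_name, device) based on available VRAM."""
    if torch.cuda.is_available():
        vram_gb = torch.cuda.get_device_properties(0).total_memory / 1024**3
        if vram_gb > 16:
            return "stabilityai/stablelm-tuned-alpha-7b", "cuda"
    return "stabilityai/stablelm-tuned-alpha-3b", "cpu"

model_name, device = pick_model()
```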

Chatbot Documentation

Example Output

[Image: Working]

Setting Up

I added a runme.bat file that downloads and installs Miniconda, installs the requirements, and then launches the app.

After cloning the repository, double-click runme.bat.

This will take a long time; it's going to download a lot of data.

I have included an environment.yml file; you can create the conda environment from it with this command: conda env create --file environment.yml

Alternatively, check the wiki for instructions on setting up the environment manually.

Contributing

Contributions to Simple-StableLM-Chat are welcome. If you find a bug or have a suggestion for a new feature, please open an issue on the GitHub repository. If you would like to contribute code, please fork the repository and create a pull request.

License

Simple-StableLM-Chat is released under the Apache 2.0 License. See LICENSE for details.