GPT-2.5 LLM Friend 🤖 🧠

Harness the power of GPT-2.5, fine-tuned on personal Telegram chats to not only mimic a writing style but infuse it with a touch of humor and mathematical prowess. Built in Python, the model leverages PyTorch and the Hugging Face Transformers library.

Overview 🌐

This project seeks to fine-tune the GPT-2.5 model with a personal touch. By training it on personal Telegram chats, we aim to capture the essence of individualized writing styles. To spice things up, we've also fed the model a sprinkle of anecdotes and a dash of math tasks. The result? A model with personality, humor, and the ability to crunch some numbers!
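The chat-data preparation step could look something like the sketch below. It assumes the JSON format produced by Telegram Desktop's chat export (a `"messages"` list whose entries carry `"from"` and `"text"` fields); the field handling is deliberately simplified, and the speaker-tagged line format is just one reasonable choice.

```python
def messages_to_training_text(export):
    """Flatten a Telegram Desktop JSON export into speaker-tagged lines.

    Assumes the export shape produced by Telegram Desktop (e.g. result.json):
    a top-level "messages" list where each entry has "from" and "text".
    Entries whose "text" is not a plain string (stickers, media, formatted
    entities) are skipped for simplicity.
    """
    lines = []
    for msg in export.get("messages", []):
        sender = msg.get("from")
        text = msg.get("text")
        if not sender or not isinstance(text, str) or not text.strip():
            continue  # skip service messages and media-only entries
        lines.append(f"{sender}: {text.strip()}")
    return "\n".join(lines)

# Tiny in-memory example (in practice, json.load the exported result.json):
sample = {"messages": [
    {"from": "Alice", "text": "hey, solved the integral?"},
    {"from": "Stepan", "text": "yep, it's pi/2"},
    {"from": "Alice", "text": ""},  # empty text -> skipped
]}
print(messages_to_training_text(sample))
```

The resulting text can then be tokenized and fed to a standard causal-LM fine-tuning loop.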

Cherry-picked results 🍒

Example conversation screenshots (photos from 2023-08-14).

Features 🎁

  • Personalized Writing Style: Thanks to the Telegram chat data, the model is geared to adopt and mimic individual writing styles.
  • Sense of Humor: Integrated with various anecdotes, the model doesn't just process information but does so with a witty touch.
  • Mathematical Abilities: With added math tasks, expect this model to handle mathematical queries with precision.

Setup & Installation 📥

Note: training these models in a GPU-backed environment is strongly recommended.
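A minimal way to honor that recommendation in code is to detect a GPU at runtime and fall back to CPU when none is available (this helper is illustrative, not part of the repository):

```python
def pick_device():
    """Return "cuda" when a GPU is visible to PyTorch, otherwise "cpu"."""
    import torch  # imported lazily so the helper has no import-time side effects
    return "cuda" if torch.cuda.is_available() else "cpu"

# Typical use: model.to(pick_device())
```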

Prerequisites

  • Python 3.x
  • PyTorch
  • Hugging Face Transformers

Usage 💼

stepan-bot-v1.py contains an example of model inference. Feel free to adapt it to your own needs and use cases.
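A minimal inference sketch along the same lines is shown below. The persona name ("Stepan"), the prompt format, and `model_dir` are illustrative assumptions, not the repository's actual setup: point `model_dir` at whatever directory your fine-tuning run saved with `save_pretrained`, and mirror the formatting you used in the training data.

```python
def build_prompt(history, user_message, persona="Stepan"):
    """Format chat history into a speaker-tagged prompt (hypothetical format)."""
    lines = [f"{who}: {text}" for who, text in history]
    lines.append(f"User: {user_message}")
    lines.append(f"{persona}:")  # leave the persona's turn open for the model
    return "\n".join(lines)

def chat(prompt, model_dir="path/to/fine-tuned-model", max_new_tokens=60):
    """Generate a reply; model_dir is a placeholder for your saved checkpoint."""
    import torch
    from transformers import AutoModelForCausalLM, AutoTokenizer

    device = "cuda" if torch.cuda.is_available() else "cpu"
    tokenizer = AutoTokenizer.from_pretrained(model_dir)
    model = AutoModelForCausalLM.from_pretrained(model_dir).to(device)
    inputs = tokenizer(prompt, return_tensors="pt").to(device)
    output = model.generate(
        **inputs,
        max_new_tokens=max_new_tokens,
        do_sample=True,
        top_p=0.95,
        pad_token_id=tokenizer.eos_token_id,  # GPT-2 has no pad token by default
    )
    return tokenizer.decode(output[0], skip_special_tokens=True)
```

Sampling (`do_sample=True`, `top_p=0.95`) tends to preserve the playful, personalized tone better than greedy decoding, though the values here are only a starting point.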

Data Privacy 🛡️

While personal Telegram chats were used for training, all chat data has been kept confidential, ensuring utmost privacy.

Contribute 🤝

Your contributions, issues, and feature requests are welcome! Feel free to check the issues page.

License 📜

This project is licensed under the MIT License. See the LICENSE.md file for details.
