
Releases: maxijonson/gpt-turbo

🔌 5.0.0 - Conversation Plugins and Library Rewrite

30 Jul 20:38

Library Rewrite

Important notice for those forking or hosting the web app: v5 moves all implementations into the packages/implementations folder instead of the packages folder. If you are currently hosting your own version of the web app on Render with automatic deploys of the develop branch, your web app is currently broken! You'll need to update the deploy settings to point to the new location of the web app.

If you're already using GPT Turbo v4 and don't need the new features, you can (and should) continue using it, because v5 introduces breaking changes even to the most basic functionality. While basic usage is fairly easy to migrate, you should only switch to v5 if you're starting a new project, need the new features, or discover a bug that won't be fixed in v4. The last version of v4 is 4.5.0.

v5 is a near-complete rewrite of the library. The functionality mostly remains the same, but the Conversation class methods have been split into different classes, which can be interacted with from the Conversation class. This was done in response to the growing complexity of the Conversation class, which was becoming difficult to maintain. The problem became even more apparent when OpenAI released their Callable Functions, which I had a hard time integrating into the library. Now, the Conversation class keeps only the most essential methods (prompt, reprompt and functionPrompt), and you access specific functionality through its public properties (a short sketch follows the list):

  • config: Configure both the library and the OpenAI API configuration, such as the API key, context, moderation, etc.
  • requestOptions: Request options are no longer a plain object; they have been moved to a ConversationRequestOptions class.
  • history: Everything message related has been moved to the ConversationHistory class.
  • callableFunctions: Everything callable function related has been moved to the ConversationCallableFunctions class.
  • plugins: With the new conversation plugins feature, this is where you'll interact with your plugins.
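
As a quick orientation, here's a minimal sketch of the new shape. The method names come from the "Moved" section below; the API key is a placeholder, and the exact argument/return shapes (e.g. prompt resolving to the assistant's message) are assumptions:

```ts
import { Conversation } from "gpt-turbo";

const conversation = new Conversation({
    config: { apiKey: "sk-..." }, // placeholder key
});

// Core prompting stays on the Conversation class itself.
const response = await conversation.prompt("How are you?");
console.log(response.content);

// Everything else now lives on dedicated sub-classes.
const config = conversation.config.getConfig();
const messages = conversation.history.getMessages();
const functions = conversation.callableFunctions.getFunctions();
```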

Conversation Plugins

The library rewrite was largely motivated by the desire to add a plugin system. Plugins are a way to tap into the Conversation lifecycle and transform inputs/outputs. This is useful for developers who want to add functionality that is specific to their use case, but not necessarily useful to everyone. For example, as you'll read in the breaking changes, the entire size/cost feature has been removed from the library, but it's still available as a plugin (gpt-turbo-plugin-stats). The size/cost feature might not be used by everyone, yet it greatly increased the bundle size.

Breaking changes

Removed

  • Removed the Conversation size/cost features, reducing the bundle size by about 1.3MB. These relied on the gpt-token-utils library, which, like many other token libraries, needs to bundle all of the token data. Since these "stats" were mostly just that, stats, I decided to remove them.
  • Removed Conversation.fromMessages. Messages can now be passed through the constructor. (see new constructor in "Changed" section)

Changed

  • Conversation.fromJSON is no longer async and no longer moderates messages on creation (see the sketch after this list).
    • Previously, Conversation.fromJSON would call addMessage, which itself was async solely because of moderation. Now, moderation is done separately, which makes addMessage synchronous.
  • The Conversation constructor can now initialize more than just the configuration and request options. Each property can be initialized separately:

```ts
const conversation = new Conversation({
    config: {
        apiKey: "...",
    },
    requestOptions: { /* ... */ },
    history: { /* ... */ },
    callableFunctions: { /* ... */ },
    plugins: { /* ... */ },
});
```
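
For the fromJSON change, a minimal before/after sketch, where json is whatever toJSON produced:

```ts
// v4: fromJSON was async, because addMessage moderated each message.
// const conversation = await Conversation.fromJSON(json);

// v5: fromJSON is synchronous; moderation is handled separately.
const conversation = Conversation.fromJSON(json);
```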

Moved

  • Here's where each of the old methods moved (private methods not shown):
    • Conversation (unchanged)
      • toJSON
      • fromJSON (static)
      • getChatCompletionResponse
      • prompt
      • reprompt
      • functionPrompt
    • Conversation.config
      • getConfig
      • setConfig
    • Conversation.requestOptions
      • getRequestOptions
      • setRequestOptions
    • Conversation.history
      • addAssistantMessage
      • addUserMessage
      • addFunctionCallMessage
      • addFunctionMessage
      • getMessages
      • offMessageAdded
      • onMessageAdded
      • offMessageRemoved
      • onMessageRemoved
      • clearMessages
      • removeMessage
      • setContext
    • Conversation.callableFunctions
      • removeFunction
      • clearFunctions
      • addFunction
      • getFunctions
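
In practice, migrating a call site mostly means inserting the sub-class property; the argument shapes are unchanged from v4. A before/after sketch:

```ts
// v4
// conversation.addUserMessage("Hello!");
// const config = conversation.getConfig();

// v5
conversation.history.addUserMessage("Hello!");
const config = conversation.config.getConfig();
```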

Changes

  • Updated README examples with new syntax
  • Renamed Message listeners:
    • onMessageUpdated -> onUpdate
    • offMessageUpdated -> offUpdate
    • onMessageStreamingUpdate -> onStreamingUpdate
    • offMessageStreamingUpdate -> offStreamingUpdate
    • onMessageStreamingStart -> onStreamingStart
    • offMessageStreamingStart -> offStreamingStart
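
The renames are drop-in; only the method names change. A sketch, assuming the listener receives the updated content (the exact callback payload is per the docs):

```ts
import { Message } from "gpt-turbo";

declare const message: Message; // an existing message instance

const listener = (content: string) => console.log("updated:", content);

// v4
// message.onMessageUpdated(listener);
// message.offMessageUpdated(listener);

// v5
message.onUpdate(listener);
message.offUpdate(listener);
```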

New Features

  • Added "once" listeners to fire only once, instead of having to manually remove the listener after it fires:
    • Conversation.history.onceMessageAdded
    • Conversation.history.onceMessageRemoved
    • Message.onceUpdated
    • Message.onceStreamingUpdate
    • Message.onceStreamingStart
    • Message.onceStreamingStop
  • Added a Message listener for when content is updated during streaming. This is a special kind of listener: it unsubscribes automatically when streaming ends. That's purely for convenience, so you no longer need both an onUpdate and an onStreamingStop listener like in previous versions. Also, if you've been using the old streaming methods with function calling, you'll notice the README example now uses this new listener and is much simpler than before! (see the sketch after this list)
    • Message.onContentStream
    • Message.onceContentStream
    • Message.offContentStream
  • You can now define plugins in the Conversation constructor. See README for examples.
  • You can also author plugins that are not included in the library. See README for examples.
  • Added event listeners when callable functions are added/removed:
    • Conversation.callableFunctions.onFunctionAdded
    • Conversation.callableFunctions.onceFunctionAdded
    • Conversation.callableFunctions.offFunctionAdded
    • Conversation.callableFunctions.onFunctionRemoved
    • Conversation.callableFunctions.onceFunctionRemoved
    • Conversation.callableFunctions.offFunctionRemoved
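
Here's a sketch combining a "once" listener with the new content stream listener. It assumes streaming is enabled in the conversation config and that the callbacks receive the added message and the streamed content respectively (exact payloads are per the docs):

```ts
import { Conversation } from "gpt-turbo";

declare const conversation: Conversation; // assumed created with streaming enabled

// Fires a single time, then removes itself; no manual cleanup needed.
conversation.history.onceMessageAdded((message) => {
    // Print streamed chunks as they arrive; this listener detaches itself
    // automatically once streaming ends.
    message.onContentStream((content) => {
        process.stdout.write(content ?? "");
    });
});

await conversation.prompt("Tell me a story");
```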

About the NestJS implementation...

While upgrading the Nest implementation to v5, I noticed that its overall usage of GPT Turbo still relied on old techniques from v1.4 (back when serialization wasn't even a thing!). It made me realize that this implementation has never really been maintained beyond making the build pass. Unlike the other implementations, the NestJS implementation was more of a proof of concept than a usable project. The goal was to show that GPT Turbo could be used in a backend (Node.js) environment, but the Discord implementation is a much better example of that.

Going forward, not much effort will be put into maintaining it. It's kept here for historical purposes, and may be reworked in the future or removed entirely. You're welcome to contribute to it if you want to keep it alive! I thoroughly tested it one last time on v5.0.0 and it seemed to work fine, apart from not using features introduced after v1.4, such as callable functions and plugins. In a way, it's a good sign that I've been able to keep upgrading it since v1.4 without any major or time-consuming changes!

4.5.0 - Web conversation edit, duplicate, import and export

18 Jul 21:40

Web conversation edit, duplicate, import and export

You can now edit an ongoing conversation, duplicate it and export/import it elsewhere, just like callable functions! Note that your API key is not exported with the conversation.

Other improvements:

  • Default settings can now be saved from inside the conversation form by pressing the Save icon next to the "Create Conversation" button. The old settings button now only shows app-specific settings.
  • You can now generate the name of a conversation, similar to how it's done in ChatGPT. At the moment, this can only be done manually, by editing the name and clicking the button inside the text box. If the button doesn't show up, it's because the requirements were not met (no API key, or not enough messages).

Lib

  • Added getRequestOptions and setRequestOptions. Previously, there was no way to get or set request options after a conversation was created. These methods work similarly to getConfig and setConfig.
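
A quick sketch; the request-option field shown is hypothetical:

```ts
// Merge a hypothetical custom header into the existing request options.
conversation.setRequestOptions({
    ...conversation.getRequestOptions(),
    headers: { "X-Custom-Header": "value" }, // hypothetical field
});
```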

4.3.0 - Web Function Templates

03 Jul 04:55

Web Function Templates

To help you get started with Function Templates in the GPT Turbo web app, I've added two templates:

  • Async: shows how to write async/await-style code in the editor. Try it out!
  • Fetch: shows how Callable Functions can execute fetch requests. Try it out!

More may come in the future, but the idea was to show that Function Calling in the web app can go beyond simple, synchronous operations!

Other fixes

  • Parameters can now be removed from the editor. Sorry about that, totally forgot to add a delete button! 😅

4.2.0 - Web Function Calling (with code execution!)

01 Jul 06:47

Web Function Calling

I'm very proud to announce one of the biggest additions to the Web implementation: Function Calling with (optional) code execution! 🙌🎉 This can potentially help you create your own "plugins", like those you'd find with a ChatGPT Pro subscription!

Not only does this let you take advantage of OpenAI's latest Function Calling feature directly in the Web interface, but you can actually implement the callable function with a JavaScript snippet, which gets called to create a Function Message.

Here's how you can try it out right now!

  1. Create your function in the Function Editor
  2. Add it to your conversation from the new "Functions" tab
  3. Let the assistant call your function automatically when it thinks it is appropriate

Additionally, I've added a feature that lets you export and import functions, so you can easily share them (only with people you trust, please!). You can try importing the sample function attached to this release, which makes a fetch request to JSON Placeholder.

Over the next few weeks/months, I'll be adding more features on top of functions, but since this has already been in development for about two weeks, I wanted to share this amazing feature now!

Library Function Calling Improvements

When I first released Function Calling in 4.0.0, the usage was pretty basic (or, as I'd put it: too raw). I shipped it that way to get it out fast for people who wanted to try it. While passing function objects directly still works, there are now a few handy classes that will keep you from making mistakes:

  • CallableFunction: similar to the Conversation and Message classes, functions have their own class now instead of just being plain old objects.
  • CallableFunctionParameter is an abstract class that each JSON Schema specification class inherits, such as CallableFunctionString, CallableFunctionNumber, and CallableFunctionObject. They each have properties for their own specific JSON schema specification.
  • Conversation now has addFunction and removeFunction methods for adding and removing CallableFunctions. Note that you currently can't pass a CallableFunction instance to the Conversation config, because the config only takes plain objects. Instead, you can use the CallableFunction.toJSON method inside the config, or call addFunction right after creating the conversation, as sketched below.
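
Here's a minimal sketch of the new classes. The constructor and addParameter signatures, as well as the functions config field, are assumptions rather than confirmed API; see the lib's README for the real usage:

```ts
import {
    CallableFunction,
    CallableFunctionString,
    Conversation,
} from "gpt-turbo";

// Assumed signatures: name + description, then parameters with a "required" flag.
const getWeather = new CallableFunction("getWeather", "Get the weather for a city");
getWeather.addParameter(new CallableFunctionString("city"), true);

const conversation = new Conversation({ apiKey: "sk-..." });
conversation.addFunction(getWeather);

// Or serialize it into the config instead (the "functions" field is hypothetical):
// new Conversation({ apiKey: "sk-...", functions: [getWeather.toJSON()] });
```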

4.0.0 - Function Calling

18 Jun 06:56

Function Calling

In case you haven't been following the OpenAI news lately, about a week ago, they announced that their GPT models now support function calling. This allows developers to specify function signatures of their code to the assistant and let the assistant decide which function to call and with what parameters. Devs can then use this result to call their function, provide the function result back to the assistant, and get a natural language answer all based on the function result.

This release adds support for this feature, so you can try it out right now in the new 4.0.0 major version. Note that at the time of writing, Function Calling is only available on the gpt-3.5-turbo-0613 model and its GPT-4 equivalent, but OpenAI plans to bring it to the stable models very soon!

You can see an example in the lib's README. All other packages have been updated to mitigate the breaking changes, but none of them currently make use of the feature. This will probably come eventually in some form for the Web implementation, as it is the most popular implementation and the most flexible for these kinds of features. Stay tuned!

Pricing changes

Along with this new feature, OpenAI has also lowered input costs by 25%. This has been updated in the lib's pricing table for GPT-3.5.

Breaking Changes

This is a major version because it introduces breaking changes that could affect existing applications, as it did for most of this repo's implementations.

  • Message.content can now be string | null instead of just string, because function calls by the assistant have a null content. Previously, the library did its best to always give you a string (possibly empty) or throw an error; now, it actually needs to work with null content.
  • Message.role can now also be function. Previously, only assistant, user and system were possible.

These two changes are the most relevant with existing APIs. Here are some new APIs:

  • There's a new Conversation.functionPrompt method to send your function results to the assistant.
  • There are three new methods (and exported types) on Message that use type guards to help you differentiate between standard completions, function calls and function messages: isCompletion (CompletionMessage), isFunctionCall (FunctionCallMessage) and isFunction (FunctionMessage). Use these instead of null checks for a more robust check and automatic type inference (see the sketch below).
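
A sketch of the full round trip, assuming prompt resolves to the assistant's Message, a local getWeather implementation, and a functionPrompt(name, result) argument order (all assumptions):

```ts
declare function getWeather(city: string): Promise<string>; // your own code

const message = await conversation.prompt("What's the weather in Montreal?");

if (message.isFunctionCall()) {
    // The assistant asked us to call a function instead of answering directly.
    // Extract the arguments from the function call payload (shape per docs),
    // run your own implementation, then feed the result back.
    const result = await getWeather("Montreal");
    const answer = await conversation.functionPrompt("getWeather", result);
    console.log(answer.content);
} else if (message.isCompletion()) {
    console.log(message.content);
}
```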

3.5.0 - JSON Serialization

01 May 01:00

JSON Serialization

When developing different implementations of GPT Turbo, I quickly noticed that a usage pattern emerged: I wanted to persist conversations in some way or another and load them at a later time. Before, this was done by creating a conversation and manually adding each message one after the other. Additionally, the code was almost exactly the same for each implementation.

This release brings new static and instance methods to handle the serialization and deserialization of Conversation objects (and other classes):

  • conversation.toJSON() creates a JSON object from a Conversation instance.
  • Conversation.fromJSON creates a Conversation instance from a JSON object.

This was rolled out across all implementations of GPT Turbo, greatly reducing logic and repeated code.
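
In a browser context, persistence becomes a couple of lines (note that fromJSON returns a promise here; it only became synchronous in v5):

```ts
// Save
localStorage.setItem("my-conversation", JSON.stringify(conversation.toJSON()));

// Load
const saved = JSON.parse(localStorage.getItem("my-conversation")!);
const restored = await Conversation.fromJSON(saved);
```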

3.4.0 - Discord Implementation

27 Apr 01:01

Discord Implementation

A new package has appeared! The Discord implementation shows off GPT Turbo's capabilities in a Discord environment. Like Discord's upcoming Clyde AI bot, the GPT Turbo Discord bot lets users interact with GPT models straight from the Discord app.

Because of OpenAI's usage policy on Bring-Your-Own-Key applications, and because I can't afford to run a free conversational AI bot, it also comes with whitelisting/blacklisting capabilities and a token quota system to limit usage. While this feature was created because I wanted to share a running bot with people on my server without going broke, it also allows other developers to host their own with the same capabilities!

I'm still figuring out a proper (and free/cheap) host for a bot I'll be hosting on GPT Turbo's new official Discord server. In the meantime, come join the new community and get the Early Bird role!

Library Changes

A new static method was added to create conversations from existing messages: Conversation.fromMessages. It was mainly implemented for the Discord implementation, so expect a few updates to this method as it gets adopted by the other implementations of GPT Turbo. I've noticed that loading conversations has been a recurring pattern across all of the implementations, so it feels appropriate for the library to provide helpers like this one.
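
A sketch of the idea; the exact signature, and whether it returns a promise, are assumptions:

```ts
import { Conversation } from "gpt-turbo";

declare const messages: { role: "user" | "assistant" | "system"; content: string }[];

const conversation = await Conversation.fromMessages(messages, { apiKey: "sk-..." });
```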

3.2.0 - Nest Implementation

12 Apr 05:28

Happy to announce the successful implementation of GPT Turbo in a backend environment: NestJS!

While this one is much smaller than the previous ones, as it was mostly designed as a proof of concept rather than a usable product like the CLI or the Web app, it is nonetheless another important milestone. As you'll notice from the version bump, no breaking changes were needed to make this possible! Plus, the most significant parts of a full-stack application (front-end and back-end) are now battle-tested, which means the library is becoming more stable and useful in any environment.

Here's what changed in the library for this version bump:

  • The stop configuration parameter cannot be an empty string or array when sent to OpenAI. This is now validated.

There are more implementations in more specific environments I'd like to try out, such as a Discord bot. Stay tuned!

3.1.0 - CLI Save and Load commands

07 Apr 19:35

This release only changes the CLI. It adds the ability to save and load conversations, as requested by issue #3. To save your conversation, you'll need to specify it when launching the CLI:

```sh
# Generated timestamped name
gpt-turbo --save

# Custom name
gpt-turbo --save my-conversation
```

Then load it at a later time:

```sh
gpt-turbo --load ./my-conversation.json
```

3.0.0 - Web Implementation and major API changes

05 Apr 05:28

This release marks the release of the new Web implementation! 🎉

Web Implementation


If you've used ChatGPT in the past, this web app will feel very familiar. Almost every feature has been replicated using GPT Turbo, except for the "Stop Generating" feature. Additionally, you can create multiple conversations with different configurations, including the GPT model used, the response method and the moderation options.

Conversation storage is also supported. It uses the browser's local storage, and conversations are saved only when you explicitly opt in through the conversation configuration form. This means you can run (or host 👀) this web app as a static website, with no external services needed. Note that API keys are saved to local storage along with the conversations, hence the need to explicitly opt in; this should prevent accidental storage on devices you don't own.

Hosting should theoretically be possible. However, I have yet to put it into practice. I will try to deploy the web app myself and document the process for other people.

Major changes to the library

While the following changes are not necessarily "breaking", they are still a huge rewrite of the library's internals, which could have some side effects, depending on how you used the library previously. There were no notable CLI impacts for these changes. In fact, the CLI's code remains practically the same, with only empty TypeScript interfaces being removed.

Ditching the OpenAI Node SDK

I had to find out the hard way that OpenAI's Node SDK is not meant to be used in the browser. Single-response requests worked fine, but streamed responses were broken because the SDK uses Axios with { responseType: "stream" }, which is not supported in the browser. For this reason, I had to ditch the openai NPM package and rewrite the calls manually using a good ol' fetch. While this is an important internal change, it didn't impact the library's functionality, since the result is still the same. In fact, it allowed for more flexibility, which improved the streaming request listeners.
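
For the curious, here's a simplified sketch of the approach (not the library's exact internals): stream a chat completion with fetch and read the response body incrementally.

```ts
const apiKey = process.env.OPENAI_API_KEY!;

const response = await fetch("https://api.openai.com/v1/chat/completions", {
    method: "POST",
    headers: {
        Authorization: `Bearer ${apiKey}`,
        "Content-Type": "application/json",
    },
    body: JSON.stringify({
        model: "gpt-3.5-turbo",
        stream: true,
        messages: [{ role: "user", content: "Hello!" }],
    }),
});

// The streamed body is a series of "data: {...}" server-sent event lines.
const reader = response.body!.getReader();
const decoder = new TextDecoder();
for (;;) {
    const { done, value } = await reader.read();
    if (done) break;
    process.stdout.write(decoder.decode(value));
}
```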

Improving streaming request listeners

Previously, the Message.onStreamingUpdate method was a bit unpredictable. It wasn't always clear when the streaming started or stopped, which led to some hacks in the CLI's implementation and in the library itself to re-fire the events continuously during the streaming process. Now, these events are fired only once: when it starts and when it ends.

Switching gpt-3-encoder for gpt-token-utils

Similar to the reason for ditching openai, gpt-3-encoder used some Node-only APIs not available on the browser. gpt-token-utils uses an isomorphic approach so that token counting works both in Node and in the browser!

Streamed dry mode messages

Previously, prompts made with stream: true in dry mode had the same response type as stream: false prompts. Now, dry requests made with stream: true will simulate a fake ReadableStream that periodically sends tokens, just like they do in live mode.

Other Changes

  • Overall, more of the Conversation class's private methods have been made public, for finer control over the flow when prompt alone is not enough.
  • The configuration can now be accessed and changed from the client code.
  • There's a new reprompt method for regenerating responses or editing previous prompts; it handles deleting the messages that come after the edited one (just like ChatGPT). A sketch follows this list.
  • Other optional ChatCompletion parameters, such as temperature, can now be configured at the Conversation level or the prompt level.
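
A sketch of reprompt; the signature (message to regenerate from, plus optional new content) is an assumption:

```ts
import { Conversation } from "gpt-turbo";

declare const conversation: Conversation;

// Pick an earlier user prompt to edit; everything after it gets deleted,
// just like editing a message in ChatGPT.
const [target] = conversation.getMessages().filter((m) => m.role === "user");
const answer = await conversation.reprompt(target, "An edited prompt");
```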