# Library Rewrite
> ⚠️ Important notice for web app forkers/hosters: v5 moves all implementations into the `packages/implementations` folder instead of the `packages` folder. If you are currently hosting your own version of the web app on Render with automatic deploys of the `develop` branch, your web app is currently broken! You'll need to update the deploy settings to point to the new location of the web app.
If you're already using GPT Turbo v4 and don't need the new features, you can (and should) continue using it. v5 introduces breaking changes even to the most basic functionality. While basic usage is fairly easy to migrate, you should switch to v5 if you start a new project, need the new features, or discover a bug that won't be fixed in v4. The last version of v4 is `4.5.0`.
v5 is a near-complete rewrite of the library. More specifically, the functionality of the library mostly remains the same, but the `Conversation` class methods have been split into separate classes, which can be interacted with from the `Conversation` class. This was done in response to the growing complexity of the `Conversation` class, which was becoming difficult to maintain. The problem became even more apparent when OpenAI released their Callable Functions, which I had a hard time integrating into the library. Now, the `Conversation` class keeps only the most essential methods (`prompt`, `reprompt` and `functionPrompt`), and you can access specific functionality through its public properties:
- `config`: Configure both the library and the OpenAI API, such as the API key, context, moderation, etc.
- `requestOptions`: Instead of the old plain object, these have been moved to a `ConversationRequestOptions` class.
- `history`: Everything message related has been moved to the `ConversationHistory` class.
- `callableFunctions`: Everything callable function related has been moved to the `ConversationCallableFunctions` class.
- `plugins`: With the new conversation plugins feature, this is where you'll interact with your plugins.
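To illustrate the shape of this split, here is a minimal standalone sketch. The method names are taken from the "Moved" list below, but the class bodies are placeholder assumptions for illustration, not the library's actual code:

```typescript
// Sketch of the v5 structure: each concern lives in its own small class,
// and Conversation exposes them as public properties.
class ConversationConfig {
    private config: { apiKey?: string; context?: string } = {};
    setConfig(config: { apiKey?: string; context?: string }) {
        this.config = { ...this.config, ...config };
    }
    getConfig() {
        return this.config;
    }
}

class ConversationHistory {
    private messages: string[] = [];
    addUserMessage(content: string) {
        this.messages.push(content);
    }
    getMessages() {
        return this.messages;
    }
}

class Conversation {
    public config = new ConversationConfig();
    public history = new ConversationHistory();
    // prompt, reprompt and functionPrompt remain on Conversation itself.
}

const conversation = new Conversation();
conversation.config.setConfig({ apiKey: "sk-..." });
conversation.history.addUserMessage("Hello!");
```

The point of the design is that each sub-object can grow independently without bloating the `Conversation` class itself.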
## Conversation Plugins
The library rewrite was largely motivated by the desire to add a plugin system. Plugins are a way to tap into the `Conversation` lifecycle and transform inputs/outputs. This is useful for developers who want to add functionality that is specific to their use case, but not necessarily useful to everyone. For example, as you'll read in the breaking changes, the entire size/cost feature has been removed from the library, but it's still available as a plugin (`gpt-turbo-plugin-stats`). This is because the size/cost feature might not be used by everyone, yet it greatly increased the bundle size.
## Breaking changes

### Removed
- Removed `Conversation` size/cost features to reduce bundle size by about 1.3MB. This was because of the `gpt-token-utils` library which, like many other token libraries, needs to be bundled with all the tokens. Since these "stats" were mostly just that, "stats", I decided to remove them.
- Removed `Conversation.fromMessages`. Messages can now be passed through the constructor. (see the new constructor in the "Changed" section)
### Changed
- `Conversation.fromJSON` is no longer async and no longer moderates messages on create.
  - Previously, `Conversation.fromJSON` would call `addMessage`, which itself was async solely because of moderation. Now, the moderation is done separately, which makes `addMessage` sync.
- The `Conversation` constructor can now initialize more than just the configuration and request options. Each property can be initialized separately:

  ```js
  const conversation = new Conversation({
      config: { apiKey: "..." },
      requestOptions: { /* ... */ },
      history: { /* ... */ },
      callableFunctions: { /* ... */ },
      plugins: { /* ... */ },
  });
  ```
### Moved
Here's a list of where all the old methods have been moved. (private methods not shown)

- `Conversation` (unchanged):
  - `toJSON`
  - `fromJSON` (static)
  - `getChatCompletionResponse`
  - `prompt`
  - `reprompt`
  - `functionPrompt`
- `Conversation.config`:
  - `getConfig`
  - `setConfig`
- `Conversation.requestOptions`:
  - `getRequestOptions`
  - `setRequestOptions`
- `Conversation.history`:
  - `addAssistantMessage`
  - `addUserMessage`
  - `addFunctionCallMessage`
  - `addFunctionMessage`
  - `getMessages`
  - `offMessageAdded`
  - `onMessageAdded`
  - `offMessageRemoved`
  - `onMessageRemoved`
  - `clearMessages`
  - `removeMessage`
  - `setContext`
- `Conversation.callableFunctions`:
  - `removeFunction`
  - `clearFunctions`
  - `addFunction`
  - `getFunctions`
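As a rough illustration of what the migration looks like at a call site, here is a standalone sketch of the callable-functions sub-object. The method names come from the list above, but the signatures and bodies are simplified placeholders, not the library's real API:

```typescript
// Hypothetical sketch of the v4 -> v5 call-site change for callable functions.
// In v4 these methods lived on Conversation directly; in v5 they are reached
// through conversation.callableFunctions.
type AnyFunction = (...args: unknown[]) => unknown;

class ConversationCallableFunctions {
    private functions = new Map<string, AnyFunction>();
    addFunction(name: string, fn: AnyFunction) {
        this.functions.set(name, fn);
    }
    removeFunction(name: string) {
        this.functions.delete(name);
    }
    getFunctions() {
        return [...this.functions.keys()];
    }
    clearFunctions() {
        this.functions.clear();
    }
}

class Conversation {
    public callableFunctions = new ConversationCallableFunctions();
}

const conversation = new Conversation();
// v4: conversation.addFunction(...)
// v5: conversation.callableFunctions.addFunction(...)
conversation.callableFunctions.addFunction("getWeather", () => "sunny");
```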
### Changes
- Updated README examples with the new syntax.
- Renamed `Message` listeners:
  - `onMessageUpdated` -> `onUpdate`
  - `offMessageUpdated` -> `offUpdate`
  - `onMessageStreamingUpdate` -> `onStreamingUpdate`
  - `offMessageStreamingUpdate` -> `offStreamingUpdate`
  - `onMessageStreamingStart` -> `onStreamingStart`
  - `offMessageStreamingStart` -> `offStreamingStart`
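For illustration, here is a standalone sketch of the subscribe/unsubscribe pattern under its new names. The listener signature and internals are assumptions, not taken from the library's typings:

```typescript
// Minimal sketch of the renamed listener pattern.
type UpdateListener = (content: string) => void;

class Message {
    private updateListeners: UpdateListener[] = [];
    private currentContent = "";

    // v4: onMessageUpdated / offMessageUpdated
    // v5: onUpdate / offUpdate
    onUpdate(listener: UpdateListener) {
        this.updateListeners.push(listener);
        return () => this.offUpdate(listener);
    }
    offUpdate(listener: UpdateListener) {
        this.updateListeners = this.updateListeners.filter((l) => l !== listener);
    }

    set content(value: string) {
        this.currentContent = value;
        this.updateListeners.forEach((l) => l(value));
    }
    get content() {
        return this.currentContent;
    }
}

const message = new Message();
const seen: string[] = [];
const unsubscribe = message.onUpdate((content) => seen.push(content));
message.content = "Hello!";
unsubscribe();
message.content = "Ignored"; // listener already removed
```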
## New Features
- Added "once" listeners that fire only once, instead of having to manually remove the listener after it fires:
  - `Conversation.history.onceMessageAdded`
  - `Conversation.history.onceMessageRemoved`
  - `Message.onceUpdated`
  - `Message.onceStreamingUpdate`
  - `Message.onceStreamingStart`
  - `Message.onceStreamingStop`
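The usual way to get "once" semantics is a wrapper that unsubscribes itself before invoking the real listener. Here is a standalone sketch of that pattern on a generic emitter (an illustration of the technique, not the library's internals):

```typescript
// "once" semantics: the wrapper removes itself before calling the listener,
// so it can only ever fire one time.
type Listener<T> = (value: T) => void;

class Emitter<T> {
    private listeners: Listener<T>[] = [];
    on(listener: Listener<T>) {
        this.listeners.push(listener);
    }
    off(listener: Listener<T>) {
        this.listeners = this.listeners.filter((l) => l !== listener);
    }
    once(listener: Listener<T>) {
        const wrapped: Listener<T> = (value) => {
            this.off(wrapped); // remove first: fires exactly once
            listener(value);
        };
        this.on(wrapped);
    }
    emit(value: T) {
        [...this.listeners].forEach((l) => l(value));
    }
}

const messageAdded = new Emitter<string>();
let calls = 0;
messageAdded.once(() => calls++);
messageAdded.emit("first");
messageAdded.emit("second"); // the once listener is already gone
```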
- Added a `Message` listener for when content is updated during streaming. This is a special kind of listener, since it unsubscribes automatically when streaming ends. This is done out of convenience, so you don't have to manually unsubscribe when streaming ends, unlike previous versions where you always needed both an `onUpdate` and an `onStreamingStop` listener. Also, if you've been using the old streaming methods with function calling, you'll notice the example in the README has been updated to use this new listener and is much simpler than before!
  - `Message.onContentStream`
  - `Message.onceContentStream`
  - `Message.offContentStream`
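The auto-unsubscribe behavior can be sketched in isolation like this. The names mirror the changelog, but the internals (chunk delivery, how streaming stop is signaled) are assumptions for illustration:

```typescript
// Sketch of an auto-unsubscribing stream listener: it receives content while
// streaming, then is detached automatically when streaming stops.
type ContentListener = (content: string) => void;

class Message {
    private contentStreamListeners: ContentListener[] = [];

    onContentStream(listener: ContentListener) {
        this.contentStreamListeners.push(listener);
    }
    offContentStream(listener: ContentListener) {
        this.contentStreamListeners = this.contentStreamListeners.filter(
            (l) => l !== listener
        );
    }
    streamContent(chunk: string) {
        this.contentStreamListeners.forEach((l) => l(chunk));
    }
    stopStreaming() {
        // auto-unsubscribe every content-stream listener when streaming ends
        this.contentStreamListeners = [];
    }
}

const message = new Message();
const chunks: string[] = [];
message.onContentStream((chunk) => chunks.push(chunk));
message.streamContent("Hel");
message.streamContent("lo");
message.stopStreaming();
message.streamContent("ignored"); // no listeners left
```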
- You can now define plugins in the `Conversation` constructor. See the README for examples.
- You can also author plugins that are not included in the library. See the README for examples.
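To give a feel for the lifecycle-tapping idea, here is a hypothetical standalone sketch of a plugin transforming a prompt before it would be sent. The real plugin interface is documented in the README; the hook name and shapes below are assumptions for illustration only:

```typescript
// Hypothetical plugin shape: a named object with optional lifecycle hooks.
interface ConversationPlugin {
    name: string;
    transformPrompt?: (prompt: string) => string;
}

class Conversation {
    constructor(private options: { plugins?: ConversationPlugin[] } = {}) {}

    prompt(text: string): string {
        // Let each plugin transform the prompt before it would be sent.
        return (this.options.plugins ?? []).reduce(
            (current, plugin) => plugin.transformPrompt?.(current) ?? current,
            text
        );
    }
}

const shoutPlugin: ConversationPlugin = {
    name: "shout",
    transformPrompt: (prompt) => prompt.toUpperCase(),
};

const conversation = new Conversation({ plugins: [shoutPlugin] });
```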
- Added event listeners for when callable functions are added/removed:
  - `Conversation.callableFunctions.onFunctionAdded`
  - `Conversation.callableFunctions.onceFunctionAdded`
  - `Conversation.callableFunctions.offFunctionAdded`
  - `Conversation.callableFunctions.onFunctionRemoved`
  - `Conversation.callableFunctions.onceFunctionRemoved`
  - `Conversation.callableFunctions.offFunctionRemoved`
## About the NestJS implementation...
While upgrading the Nest implementation to v5, I noticed that its overall usage of GPT Turbo still relied on old techniques from v1.4 (back when serialization wasn't even a thing!). It made me realize that this implementation has never really been maintained beyond making the build pass. This is because, unlike the other implementations, the NestJS implementation was more of a proof of concept than a usable project. The goal was to show that GPT Turbo could be used in a backend (Node.js) environment, but the Discord implementation is a much better example of that.

Going forward, not much effort will be put into maintaining it. It's kept here for historical purposes, and may be reworked in the future or removed entirely. You're welcome to contribute to it if you want to keep it alive! As of right now, I tested it thoroughly one last time on v5.0.0 and it seemed to work fine, apart from not using features introduced after v1.4, such as callable functions and plugins. In a way, it's a good sign that I've been able to keep upgrading it since v1.4 without any major or time-consuming changes!