Releases: vercel/modelfusion

v0.137.0

24 Feb 10:45

Changed

  • Moved cost calculation into @modelfusion/cost-calculation package. Thanks @jakedetels for the refactoring!
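
    If you previously used the in-core cost helpers, they now come from the new package. A minimal sketch of the import change; the helper names calculateCost and OpenAICostCalculator follow the shape of the old in-core cost API and are assumptions, not confirmed by this release note:

    // before: imported from "modelfusion"
    import {
      calculateCost,
      OpenAICostCalculator,
    } from "@modelfusion/cost-calculation";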

v0.136.0

07 Feb 19:09

Added

  • FileCache for caching responses to disk. Thanks @jakedetels for the feature! Example:

    import { generateText, openai } from "modelfusion";
    import { FileCache } from "modelfusion/node";
    
    const cache = new FileCache();
    
    const text1 = await generateText({
      model: openai
        .ChatTextGenerator({ model: "gpt-3.5-turbo", temperature: 1 })
        .withTextPrompt(),
      prompt: "Write a short story about a robot learning to love",
      logging: "basic-text",
      cache,
    });
    
    console.log({ text1 });
    
    const text2 = await generateText({
      model: openai
        .ChatTextGenerator({ model: "gpt-3.5-turbo", temperature: 1 })
        .withTextPrompt(),
      prompt: "Write a short story about a robot learning to love",
      logging: "basic-text",
      cache,
    });
    
    console.log({ text2 }); // same text

v0.135.1

04 Feb 14:44

Fixed

  • Try both dynamic imports and require for loading libraries on demand.
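
    The fix tries a dynamic import first and falls back to require. An illustrative sketch of that pattern (not ModelFusion's actual internal code):

    import { createRequire } from "node:module";

    // Try an ESM dynamic import first, then fall back to CommonJS require.
    async function loadOptionalDependency(moduleName: string): Promise<unknown> {
      try {
        return await import(moduleName);
      } catch {
        const require = createRequire(import.meta.url);
        return require(moduleName);
      }
    }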

v0.135.0

29 Jan 12:25

Added

  • ObjectGeneratorTool: a tool to create synthetic or fictional structured data using generateObject. Docs
  • jsonToolCallPrompt.instruction(): Create an instruction prompt for tool calls that uses JSON.
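
    A rough sketch of how the new instruction variant might be used with runTool, by analogy with the existing jsonToolCallPrompt.text() usage. The model setup and the calculator tool are illustrative assumptions, not taken from this release:

    import { jsonToolCallPrompt, ollama, runTool, Tool, zodSchema } from "modelfusion";
    import { z } from "zod";

    // Illustrative tool definition (not part of the release notes):
    const calculator = new Tool({
      name: "calculator",
      description: "Multiply two numbers.",
      parameters: zodSchema(z.object({ a: z.number(), b: z.number() })),
      execute: async ({ a, b }) => a * b,
    });

    const { tool, args, ok, result } = await runTool({
      // Assumption: the instruction template is attached the same way as
      // jsonToolCallPrompt.text(). JSON mode / grammars are enabled automatically
      // when the model supports them (see "Changed" below).
      model: ollama
        .ChatTextGenerator({ model: "mistral", temperature: 0 })
        .asToolCallGenerationModel(jsonToolCallPrompt.instruction()),
      tool: calculator,
      prompt: "What is fourteen times twelve?",
    });

    console.log({ tool, args, ok, result });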

Changed

  • jsonToolCallPrompt automatically enables JSON mode or grammars when supported by the model.

v0.134.0

28 Jan 10:37

Added

  • Added prompt function support to generateText, streamText, generateObject, and streamObject. You can create prompt functions for text, instruction, and chat prompts using createTextPrompt, createInstructionPrompt, and createChatPrompt. Prompt functions allow you to load prompts from external sources and improve prompt logging. Example:

    import { createInstructionPrompt, generateText, openai } from "modelfusion";
    
    const storyPrompt = createInstructionPrompt(
      async ({ protagonist }: { protagonist: string }) => ({
        system: "You are an award-winning author.",
        instruction: `Write a short story about ${protagonist} learning to love.`,
      })
    );
    
    const text = await generateText({
      model: openai
        .ChatTextGenerator({ model: "gpt-3.5-turbo" })
        .withInstructionPrompt(),
    
      prompt: storyPrompt({
        protagonist: "a robot",
      }),
    });

Changed

  • Refactored build to use tsup.

v0.133.0

26 Jan 10:16

Added

  • Support for OpenAI embedding custom dimensions.

Changed

  • breaking change: renamed the embeddingDimensions setting to dimensions.
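
    A minimal sketch of the renamed setting, assuming the embed helper and openai.TextEmbedder work as in the existing docs; the model identifier comes from v0.132.0 below and the value text is illustrative:

    import { embed, openai } from "modelfusion";

    const embedding = await embed({
      model: openai.TextEmbedder({
        model: "text-embedding-3-large",
        dimensions: 256, // previously: embeddingDimensions
      }),
      value: "At first, Nox didn't know what to do with the pup.",
    });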

v0.132.0

25 Jan 19:37

Added

  • Support for OpenAI text-embedding-3-small and text-embedding-3-large embedding models.
  • Support for OpenAI gpt-4-turbo-preview, gpt-4-0125-preview, and gpt-3.5-turbo-0125 chat models.
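
    The new identifiers drop into the existing model settings. A minimal sketch (the prompt text is illustrative):

    import { generateText, openai } from "modelfusion";

    const text = await generateText({
      model: openai
        .ChatTextGenerator({ model: "gpt-4-turbo-preview" })
        .withTextPrompt(),
      prompt: "Name three ways a robot could learn to love.",
    });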

v0.131.1

25 Jan 12:37

Fixed

  • Add type-fest as dependency to fix type inference errors.

v0.131.0

23 Jan 10:46

Added

  • ObjectStreamResponse and ObjectStreamFromResponse serialization functions for using server-generated object streams in web applications.

    Server example:

    export async function POST(req: Request) {
      const { myArgs } = await req.json();
    
      const objectStream = await streamObject({
        // ...
      });
    
      // serialize the object stream to a response:
      return new ObjectStreamResponse(objectStream);
    }

    Client example:

    const response = await fetch("/api/stream-object-openai", {
      method: "POST",
      body: JSON.stringify({ myArgs }),
    });
    
    // deserialize (result object is simpler than the full response)
    const stream = ObjectStreamFromResponse({
      schema: itinerarySchema,
      response,
    });
    
    for await (const { partialObject } of stream) {
      // do something, e.g. setting a React state
    }

Changed

  • breaking change: rename generateStructure to generateObject and streamStructure to streamObject. Related names have been changed accordingly.

  • breaking change: the streamObject result stream contains additional data. You need to use stream.partialObject or destructuring to access it:

    const objectStream = await streamObject({
      // ...
    });
    
    for await (const { partialObject } of objectStream) {
      console.clear();
      console.log(partialObject);
    }
  • breaking change: the result from successful Schema validations is stored in the value property (before: data).
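
    A short sketch of the renamed validation result property, assuming a Schema created with zodSchema and a validate method shaped as before (assumptions, not taken verbatim from this release):

    import { zodSchema } from "modelfusion";
    import { z } from "zod";

    const schema = zodSchema(z.object({ name: z.string() }));

    const validationResult = schema.validate({ name: "Nox" });

    if (validationResult.success) {
      // previously the parsed result was on `.data`; it is now on `.value`
      console.log(validationResult.value.name);
    }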

v0.130.1

22 Jan 17:43

Fixed

  • Duplex speech streaming works in Vercel Edge Functions.