
Expressjs: Allow streamToResponse to send back custom data in response body #1285

Closed
Banhawy opened this issue Apr 6, 2024 · 4 comments
Labels: ai/ui, enhancement (New feature or request)


Banhawy commented Apr 6, 2024

Feature Description

I am using React ^18.2.0 and Express ^4.19.2, with openai as the AI provider. I am proposing a feature that allows the streamToResponse function to send back custom data in the response body. Currently, streamToResponse only supports streaming the LLM response directly from the AI provider to the client, with the ability to modify only the response headers and status.

/**
 * A utility function to stream a ReadableStream to a Node.js response-like object.
 */
declare function streamToResponse(res: ReadableStream, response: ServerResponse, init?: {
    headers?: Record<string, string>;
    status?: number;
}): void;

However, there are use cases where it would be beneficial to include additional custom data in the response body.

This option is available in the similar experimental_StreamingReactResponse class, where you can send custom data in the body.

/**
 * A utility class for streaming React responses.
 */
declare class experimental_StreamingReactResponse {
    constructor(res: ReadableStream, options?: {
        ui?: (message: {
            content: string;
            data?: JSONValue[];
        }) => UINode | Promise<UINode>;
        data?: experimental_StreamData;
        generateId?: IdGenerator;
    });
}

However, this doesn't work on Node.js servers; as far as I know, it works only on Vercel.

Use Case

It would be incredibly useful to piggyback on the final response and add custom data to the response body sent back to the client. For example, the function could be modified to accept an options parameter with the following signature:

declare function streamToResponse(res: ReadableStream, response: ServerResponse, options?: {
    init?: {
        headers?: Record<string, string>;
        status?: number;
    };
    data?: experimental_StreamData;
    generateId?: IdGenerator;
}): void;

This would in turn be used like so:

import OpenAI from 'openai';
// Proposed signature; imports shown for completeness.
import { OpenAIStream, experimental_StreamData, streamToResponse } from 'ai';

router.post('/chat/:id', async (req, res) => {
    const openai = new OpenAI();

    const { messages } = req.body;

    const data = new experimental_StreamData();

    await data.append({
        customData: {
            messageCount: 1,
            id: '1',
        }
    });

    await data.close();

    try {
        const aiResponse = await openai.chat.completions.create({
          model: 'gpt-3.5-turbo',
          stream: true,
          messages,
        });

        const stream = OpenAIStream(aiResponse);

        return streamToResponse(stream, res, { data });
    } catch (error) {
        res.status(500).json({ error: 'Error forwarding request to OpenAI' });
    }
});
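For completeness, here is a sketch of how a client could consume that custom data, assuming the useChat hook from ai/react (which exposes appended stream data via its data field); the endpoint and component are illustrative:

// Client-side sketch (hypothetical component).
import { useChat } from 'ai/react';

export function Chat() {
    // `data` receives the JSON values appended via experimental_StreamData.
    const { messages, data, input, handleInputChange, handleSubmit } = useChat({
        api: '/chat/1', // illustrative; points at the Express route above
    });

    return (
        <div>
            <pre>{JSON.stringify(data, null, 2)}</pre>
            {messages.map((m) => (
                <div key={m.id}>
                    {m.role}: {m.content}
                </div>
            ))}
            <form onSubmit={handleSubmit}>
                <input value={input} onChange={handleInputChange} />
            </form>
        </div>
    );
}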

I am currently struggling to find a way around this, and I really don't want to migrate to Vercel just for this one feature.

Additional context

No response

valstu (Contributor) commented Apr 26, 2024

I created my own version of streamToResponse, which is pretty much the same as the original but with data support.

So add this function somewhere in your code:

import type { ServerResponse } from 'node:http';
import type { StreamData } from 'ai';

export function streamToResponse(
  res: ReadableStream,
  response: ServerResponse,
  init?: { headers?: Record<string, string>; status?: number },
  data?: StreamData,
) {
  response.writeHead(init?.status || 200, {
    'Content-Type': 'text/plain; charset=utf-8',
    ...init?.headers,
  });

  let processedStream = res;

  if (data) {
    // Merge the StreamData frames into the outgoing text stream.
    processedStream = res.pipeThrough(data.stream);
  }

  const reader = processedStream.getReader();
  function read() {
    void reader
      .read()
      .then(({ done, value }: { done: boolean; value?: any }) => {
        if (done) {
          response.end();
          return;
        }
        response.write(value);
        read();
      });
  }
  read();
}

Then use this streamToResponse instead of the one provided with AI SDK.
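For example, the route from the original post could then call it like this (a minimal sketch; StreamData is the current name of experimental_StreamData in the ai package):

import { OpenAIStream, StreamData } from 'ai';
// plus the custom streamToResponse defined above

router.post('/chat/:id', async (req, res) => {
  const openai = new OpenAI();
  const { messages } = req.body;

  const data = new StreamData();
  data.append({ customData: { messageCount: 1, id: '1' } });
  data.close();

  const aiResponse = await openai.chat.completions.create({
    model: 'gpt-3.5-turbo',
    stream: true,
    messages,
  });

  // Pass the StreamData instance as the fourth argument.
  streamToResponse(OpenAIStream(aiResponse), res, undefined, data);
});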

I could make a PR for this if it's something the maintainers would agree on.

MartinCura commented:

Hello, I almost opened a PR similar to @valstu's and @lgrammel's. My use case is adding some custom data as a "prelude" before streaming the LLM generation; for example, to add an identifier of the agent that is responding.

This can be done either by adding a stream or data argument that is written to the response before the actual LLM streaming, or by allowing headers not to be set by streamToResponse() so that there are no errors from trying to set the headers twice.

Below is a simple example of both together, in case it's of any use to see how we're using it now.

import type { ServerResponse } from 'node:http';

/**
 * Adds the ability to not set headers (`{ headers: null }`) and/or write a prelude before streaming.
 */
export function streamToResponse(
  stream: ReadableStream,
  response: ServerResponse,
  { headers, prelude, status }: { headers?: Record<string, string> | null; prelude?: string; status?: number } = {},
) {
  if (headers !== null) {  // new check
    response.writeHead(status || 200, {
      "Content-Type": "text/plain; charset=utf-8",
      ...headers,
    });
  }

  if (prelude) {  // allow adding a "prelude" before the stream; another stream would also be fine but wasn't necessary for us
    response.write(prelude);
  }

  const reader = stream.getReader();
  function read() {
    reader.read().then(({ done, value }: { done: boolean; value?: any }) => {
      if (done) {
        response.end();
        return;
      }
      response.write(value);
      read();
    });
  }
  read();
}
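For instance (illustrative values), with the response headers already written by earlier middleware:

// Headers were set elsewhere, so skip them here and prepend an agent tag.
streamToResponse(stream, response, {
  headers: null,
  prelude: JSON.stringify({ agentId: 'agent-42' }) + '\n', // illustrative field
});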

lgrammel (Collaborator) commented:

@MartinCura adding a prelude can cause problems with stream parsing on the client (if you use useChat or useCompletion). Have you considered using StreamData instead? https://sdk.vercel.ai/docs/ai-sdk-ui/streaming-data
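For reference, a minimal sketch of that approach for the agent-identifier use case (the agentId field is illustrative):

import { StreamData } from 'ai';

const data = new StreamData();
// The identifier travels inside the stream-data protocol, which
// useChat/useCompletion can parse, instead of as raw prelude text.
data.append({ agentId: 'agent-42' });
data.close();
// Then pass `data` to a data-aware streamToResponse, as above.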

lgrammel (Collaborator) commented:

streamToResponse supports stream data in v3.1.12
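A sketch of the built-in usage, assuming the released signature mirrors the fourth-argument shape of the workaround above:

import { OpenAIStream, StreamData, streamToResponse } from 'ai';

const data = new StreamData();
data.append({ customData: { messageCount: 1, id: '1' } });
data.close();

const stream = OpenAIStream(aiResponse);
// v3.1.12+: StreamData goes in as the fourth argument.
streamToResponse(stream, res, {}, data);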

lgrammel self-assigned this May 17, 2024
lgrammel added the ai/ui and enhancement (New feature or request) labels May 17, 2024