
cowboy_stream_h problems in grpc streaming #1430

Open · tony612 opened this issue Dec 25, 2019 · 4 comments

@tony612 (Contributor) commented Dec 25, 2019

I ran into two problems when using cowboy_stream_h to implement gRPC streaming calls:

  1. I tried to use auto mode, but got an error when calling cowboy_req:read_body(Req, [{length, auto}]). I believe this is because of this line, since auto mode uses an infinity period and infinity + 1000 fails with badarith:

         Timeout = maps:get(timeout, Opts, Period + 1000),
  2. A streaming client may send only one empty body with fin to indicate that the streaming request is done, but the current cowboy_stream_h can't handle this because of this clause:

         info(StreamID, Info={read_body, Pid, Ref, auto, infinity}, State=#state{read_body_buffer= <<>>}) ->
             do_info(StreamID, Info, [], State#state{
                 read_body_pid=Pid,
                 read_body_ref=Ref,
                 read_body_length=auto
             });
     To solve this, we may need to send the body back when fin, for example by adding a clause like this before the first read_body info clause:

         info(StreamID, Info={read_body, Pid, Ref, auto, infinity}, State=#state{
                 read_body_is_fin=IsFin=fin, read_body_buffer=Buffer, body_length=BodyLen}) ->
             send_request_body(Pid, Ref, IsFin, BodyLen, Buffer),
             %% We can avoid sending flow if Buffer is empty.
             do_info(StreamID, Info, [{flow, byte_size(Buffer)}],
                 State#state{read_body_buffer= <<>>});
@essen (Member) commented Dec 25, 2019

You can't use read_body with auto at the moment (or ever, to be honest; it's not really the same thing), but you can send the message directly. There is now a cowboy_req:cast/2 to send that message.
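For instance, a minimal sketch of requesting the body this way, assuming the {read_body, ...} and {request_body, ...} message shapes quoted above are unchanged and that cowboy_req:cast/2 takes the event as its first argument (read_body_auto/1 is just an illustrative name):

    %% Sketch: request the body in auto mode by casting the stream message
    %% directly instead of calling cowboy_req:read_body/2.
    read_body_auto(Req) ->
        Ref = make_ref(),
        ok = cowboy_req:cast({read_body, self(), Ref, auto, infinity}, Req),
        receive
            {request_body, Ref, nofin, Data} ->
                {more, Data};
            {request_body, Ref, fin, _BodyLen, Data} ->
                {ok, Data}
        end.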

It should special case fin indeed.

@essen (Member) commented Jan 6, 2020

I am thinking of reworking read_body auto into a sort of active,N mode, where we would be able to send a message asking for the body to be sent as it comes in, for a total of N messages, followed by a passive-type message like active,N has. Then it can be requested again. It could also be canceled if necessary.

I think we should also be able to request a minimum flow value when doing this, so perhaps a command like min_flow needs to be added. Then if we set min_flow and there is already a high enough flow value, there's no need to send a WINDOW_UPDATE or similar, and no risk of overflow.

I am thinking of adding new messages rather than reusing read_body, maybe {active_read_body, Pid, Ref, MinFlow, N}, where N can be false to cancel. The request_body message can be kept as-is; however, an extra {passive_read_body, Ref} message needs to be sent once N reaches 0 or the mode is otherwise canceled. Sending active_read_body twice in a row would result in the N values being added up and the MinFlow being updated.
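A purely hypothetical sketch of how a handler loop might consume this (none of these messages exist in cowboy yet; active_loop/3, the MinFlow of 65536 and the N of 10 are illustrative values only):

    %% Hypothetical consumer loop for the proposed active,N body reading.
    %% {active_read_body, ...} and {passive_read_body, Ref} are only the
    %% message shapes proposed above, not an existing cowboy API.
    active_loop(Req, Ref, Acc) ->
        receive
            {request_body, Ref, nofin, Data} ->
                active_loop(Req, Ref, <<Acc/binary, Data/binary>>);
            {request_body, Ref, fin, _BodyLen, Data} ->
                <<Acc/binary, Data/binary>>;
            {passive_read_body, Ref} ->
                %% N reached 0: re-arm for 10 more messages.
                ok = cowboy_req:cast({active_read_body, self(), Ref, 65536, 10}, Req),
                active_loop(Req, Ref, Acc)
        end.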

@tony612 (Contributor, Author) commented Jan 7, 2020

@essen Then we need to store the data as a list of binaries instead of a single binary like we do now, right? And when read_body is called, we should call iolist_to_binary first?

@essen (Member) commented Jan 7, 2020

In active mode, you don't call read_body.

And either way there's never a need to use lists of binaries when you can just do binary appends. That's how read_body works.
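For example, an illustrative accumulator in the style read_body uses internally (collect/1 is not cowboy code, just a sketch):

    %% Append incoming chunks to a binary accumulator; the runtime's binary
    %% append optimization makes this efficient, so no iolist plus
    %% iolist_to_binary step is needed.
    collect(Acc) ->
        receive
            {request_body, _Ref, nofin, Data} ->
                collect(<<Acc/binary, Data/binary>>);
            {request_body, _Ref, fin, _BodyLen, Data} ->
                <<Acc/binary, Data/binary>>
        end.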
