
Some feature requests related to the decoder and flush operations #186

single-wolf opened this issue Oct 1, 2019 · 5 comments

@single-wolf

Is your feature request related to a problem? Please describe.

  1. Add a configuration to support consolidating Channel.flush() so that write latency can be traded for higher throughput in some cases. It sounds like batch sending.
  2. Add a configuration to limit the maximum size of a message read in the decoder. There is a memory-safety risk when the decoder encounters a huge message whose header or content declares a very large length.
  3. Prevent AbstractBatchDecoder.channelInactive() from invoking callDecode() when the channel was closed because of a CodecException thrown by the decoder.
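
Point 1 (consolidating flushes) can be sketched without Netty. This is an illustrative class of my own, not Bolt's API; in Netty the same effect comes from adding io.netty.handler.flush.FlushConsolidationHandler to the pipeline:

```java
import java.util.ArrayList;
import java.util.List;

// Minimal sketch of flush consolidation (hypothetical class, not Bolt's API):
// buffer writes and issue one flush per batch instead of one per write.
public class BatchingWriter {
    private final List<String> pending = new ArrayList<>();
    private final int batchSize;
    private int flushCount = 0; // how many "real" flushes were issued

    public BatchingWriter(int batchSize) {
        this.batchSize = batchSize;
    }

    public void write(String msg) {
        pending.add(msg);
        if (pending.size() >= batchSize) {
            flush();
        }
    }

    public void flush() {
        if (!pending.isEmpty()) {
            // in Netty this would be a single ctx.flush(); here we just count it
            pending.clear();
            flushCount++;
        }
    }

    public int getFlushCount() {
        return flushCount;
    }

    public static void main(String[] args) {
        BatchingWriter w = new BatchingWriter(10);
        for (int i = 0; i < 100; i++) {
            w.write("msg-" + i);
        }
        w.flush(); // flush any tail
        // 100 writes collapsed into 10 flushes
        System.out.println(w.getFlushCount());
    }
}
```

Each flush is a syscall in a real transport, so collapsing 100 writes into 10 flushes is where the throughput gain comes from, at the cost of the latency of the messages that wait in the batch.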

Describe the solution you'd like

  1. Add a configuration or switch that decides whether a FlushConsolidationHandler is added to the pipeline.
  2. Check the length of the header or content before returning or deserializing.
  3. Make the ByteBuf unreadable, e.g. with in.skipBytes(in.readableBytes()), before throwing a CodecException, so that the cumulation buffer is set to null by AbstractBatchDecoder.channelRead() before AbstractBatchDecoder.channelInactive() is invoked.
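
Solutions 2 and 3 can be sketched together in plain Java. The names and the cap value here are illustrative only, with java.nio.ByteBuffer standing in for Netty's ByteBuf:

```java
import java.nio.ByteBuffer;

// Sketch of solutions 2 and 3 (illustrative names, not Bolt's actual API):
// reject frames whose declared length exceeds a cap, and drain the buffer
// before throwing so channelInactive() will not re-enter callDecode().
public class SafeFrameDecoder {
    static final int MAX_FRAME_LENGTH = 8 * 1024 * 1024; // assumed 8 MB cap

    // Returns the frame payload, or null if more bytes are needed.
    public static byte[] decode(ByteBuffer in) {
        if (in.remaining() < 4) {
            return null; // length prefix not yet complete
        }
        in.mark();
        int contentLen = in.getInt();
        if (contentLen < 0 || contentLen > MAX_FRAME_LENGTH) {
            // Solution 3: make the buffer unreadable before signalling the
            // error, mirroring in.skipBytes(in.readableBytes()) in Netty.
            in.position(in.limit());
            throw new IllegalStateException("frame too large: " + contentLen);
        }
        if (in.remaining() < contentLen) {
            in.reset(); // wait for more data
            return null;
        }
        byte[] payload = new byte[contentLen];
        in.get(payload);
        return payload;
    }
}
```

The key point is that the declared length is validated before any allocation sized by it, and that a poisoned buffer is fully drained before the exception propagates.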

Additional context
Hope that's helpful ;-)

@cytnju
Contributor

cytnju commented Oct 9, 2019

Thanks for your helpful suggestions!

  1. We have already supported this feature in version 1.6.1 and provided a related benchmark report; you can try it yourself.
  2. Would you like to provide a PR to help us fix it?

@cytnju cytnju self-assigned this Oct 9, 2019
@single-wolf
Author

I tried that feature: throughput increased by 40% and RT decreased by 15% in a benchmark on my own PC. XD
Glad to fix it. What would be a proper default value for the maximum of (classLen + headerLen + contentLen)?

@cytnju
Contributor

cytnju commented Dec 12, 2019

In fact, Netty limits the size of each received packet; the maximum size of a single packet will not exceed 8192 bytes. We will complete this optimization in the next version.

single-wolf added a commit to single-wolf/sofa-bolt that referenced this issue Dec 15, 2019
@single-wolf
Author

What do you mean by "Netty will limit the size of each received packet"? I don't get it.
Run the test case testOOM() on the master branch and you will see what I mean.

@cytnju
Contributor

cytnju commented Dec 17, 2019

  • I mean that Netty limits the size of each packet it reads, and Bolt's decoder compares the number of readable bytes with the declared length, so it will not allocate memory in the heap until a whole frame has arrived:

    ```java
    if ((version == RpcProtocolV2.PROTOCOL_VERSION_1 && in.readableBytes() >= lengthAtLeastForV1)
        || (version == RpcProtocolV2.PROTOCOL_VERSION_2 && in.readableBytes() >= lengthAtLeastForV2))
    ```

  • You are right! I tried the test case and I see what you mean. If the decoder refuses to decode data with a huge declared length, the data in the ByteBuf keeps accumulating, even though it is off-heap memory. So could you provide the PR you mentioned to help us fix it?
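
The accumulation described above can be illustrated without Netty. This is a simplified sketch, not Bolt's code; the 8192-byte per-read cap is the figure quoted earlier in the thread:

```java
import java.io.ByteArrayOutputStream;

// Simplified illustration of the accumulation problem (not Bolt's code):
// when the decoder keeps waiting for a huge declared length instead of
// consuming or failing, every read event grows the cumulation buffer.
public class CumulationDemo {

    // Appends 'reads' packets of 'packetSize' bytes; the decoder never
    // consumes anything because the declared length is never reached.
    static int simulate(int reads, int packetSize, int declaredLength) {
        ByteArrayOutputStream cumulation = new ByteArrayOutputStream();
        byte[] packet = new byte[packetSize]; // per-read cap from the socket
        for (int i = 0; i < reads; i++) {
            cumulation.write(packet, 0, packet.length); // channelRead appends
            if (cumulation.size() < declaredLength) {
                continue; // decoder: "not enough bytes yet" -> nothing freed
            }
        }
        return cumulation.size(); // bytes retained, never decoded or released
    }

    public static void main(String[] args) {
        // attacker-controlled header claims Integer.MAX_VALUE bytes;
        // 100 reads of 8192 bytes are all retained
        System.out.println(simulate(100, 8192, Integer.MAX_VALUE)); // 819200
    }
}
```

So the per-read cap bounds each allocation but not the total: the cumulation buffer grows without limit as long as the declared length is never reached, which is exactly why the decoder must reject oversized lengths up front.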
