
Some requests (stream based) never ends and block the queue #371

Open
Verhov opened this issue Dec 20, 2020 · 4 comments

Verhov commented Dec 20, 2020

Summary:

While crawling millions of domains I found that some stream-based requests can permanently block the queue.
The timeout does not fire in this case and RAM keeps leaking.
I found two such domains: https://goldfm.nu/, https://rsradio.online/.
It's really nice radio 😄 but it totally blocks my crawler))

Current behavior

I'm using the timeout option, but it doesn't seem to work correctly; the callback is never fired in this case:

const Crawler = require('crawler');

const _crawler = new Crawler({
    timeout: 9000,
    retries: 1,
    retryTimeout: 1000,
    debug: true,
    callback: (error, res, done) => {
        // ... handle the response ...
        done();
    }
});

_crawler.queue([{ uri: 'https://goldfm.nu' }]);


Issue

This is definitely because the request starts a media stream and node-crawler tries to download all of it; the request stays in a pending state forever.

Side issues

Also, as the stream keeps arriving, RAM usage grows and will eventually lead to an 'out of memory' exception.

Attempts to fix

I also tried setting the Accept header to HTML only, but it has no effect:
headers: { Accept: 'text/html,application/xhtml+xml,application/xml;q=0.9' },

Currently I just skip this URL as a special case, but I suspect it is not unique.
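One workaround I'm considering is sketched below (isHtmlPage is just a hypothetical helper, not a node-crawler API): start the GET with plain Node https, inspect the Content-Type as soon as the headers arrive, and drop the connection before the (possibly endless) body streams in.

const https = require('https');

// Hypothetical pre-check: resolve true only if the response claims to be HTML.
function isHtmlPage(uri) {
    return new Promise((resolve, reject) => {
        const req = https.get(uri, (res) => {
            const type = res.headers['content-type'] || '';
            // Streaming radio endpoints typically answer with audio/mpeg, audio/aac, etc.
            resolve(type.includes('text/html'));
            req.destroy(); // stop before the body is downloaded
        });
        req.on('error', reject);
        req.setTimeout(9000, () => req.destroy(new Error('socket idle for 9 s')));
    });
}

// Usage idea: only queue URLs whose response headers claim to be an HTML page.
// isHtmlPage('https://goldfm.nu').then((ok) => { if (ok) _crawler.queue([{ uri: 'https://goldfm.nu' }]); });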

Expected behavior

The timeout should fire an error when a complete response has not been received within the allotted time.
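To illustrate what I mean, here is a rough sketch with plain Node https (fetchWithDeadline is a hypothetical name, not a node-crawler option): a hard wall-clock deadline plus a byte cap, both of which destroy the request even while data keeps arriving.

const https = require('https');

// Sketch: collect the body, but give up after deadlineMs of total elapsed time
// or maxBytes of received data, whichever comes first.
function fetchWithDeadline(uri, { deadlineMs = 9000, maxBytes = 5 * 1024 * 1024 } = {}) {
    return new Promise((resolve, reject) => {
        const chunks = [];
        let received = 0;

        const req = https.get(uri, (res) => {
            res.on('data', (chunk) => {
                received += chunk.length;
                chunks.push(chunk);
                if (received > maxBytes) {
                    req.destroy(new Error(`response exceeded ${maxBytes} bytes`));
                }
            });
            res.on('end', () => {
                clearTimeout(timer);
                resolve(Buffer.concat(chunks));
            });
        });

        // Unlike an idle timeout, this fires even if chunks keep arriving.
        const timer = setTimeout(
            () => req.destroy(new Error(`no complete response after ${deadlineMs} ms`)),
            deadlineMs
        );

        req.on('error', (err) => {
            clearTimeout(timer);
            reject(err);
        });
    });
}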

Related issues

This issue is definitely related to the underlying request package.

Question

Do you have any ideas on how to resolve this case?)

@slienceisgolden

I have the same issue. The spider needs not only a timeout, but also a limit on the download volume.

@mike442144
Collaborator

Refer to my comment here: request/request#3341
Feel free to discuss if you have any more questions; I hope it helps.
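For reference, the general idea can also be sketched directly against the request library that node-crawler wraps; this is only an illustration of a byte cap plus abort, not the exact code from the linked comment.

const request = require('request'); // the (now deprecated) library node-crawler 1.x wraps

const MAX_BYTES = 2 * 1024 * 1024; // assumed budget, tune as needed
let received = 0;

// request's own timeout never fires here because the stream keeps delivering data,
// so we enforce our own limit on the number of bytes received.
const req = request({ uri: 'https://goldfm.nu', timeout: 9000 });

req.on('data', (chunk) => {
    received += chunk.length;
    if (received > MAX_BYTES) {
        req.abort(); // stop the endless media stream instead of buffering it
    }
});

req.on('end', () => console.log('finished, bytes:', received));
req.on('error', (err) => console.error('request failed:', err.message));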


Verhov commented Jan 3, 2021

Thanks @mike442144, but in this (crawling) context we can't blacklist a domain before we run into it.
And it's not great to wait a few days until the server decides to disconnect; both sides carry a continuous payload during that time.

I still don't know how to identify this type of connection in advance and terminate it. I tried sending an OPTIONS request first, but it didn't help to detect the type of the subsequent GET request.

The most elegant solution, in my opinion, would be a working timeout, and the 'response size limit' option that @slienceisgolden mentioned would be great (it would also cover other pitfalls: huge documents, files, other streams, etc.).

I'm not currently working on this, but it's still relevant.

@mike442144
Collaborator

@Verhov Limiting the response body size is a good idea and should work well in your case. The maximum body size should also be configurable in the options for flexibility. Looking forward to your merge request :)
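A hypothetical shape for such an option (maxResponseSize is illustrative only, not an existing node-crawler option) might look like this:

const Crawler = require('crawler');

const _crawler = new Crawler({
    timeout: 9000,
    // Hypothetical, not implemented: abort the request and call back with an
    // error once the response body exceeds this many bytes.
    maxResponseSize: 5 * 1024 * 1024,
    callback: (error, res, done) => {
        // in such a design, error would indicate whether the size limit was hit
        done();
    }
});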
