
Rate Limit Minute (-rlm) behavior does not distribute requests evenly #1620

Closed
swdbo opened this issue Mar 7, 2024 · 2 comments
Labels
Type: Bug (Inconsistencies or issues which will cause an issue or problem for users or implementors.)
Type: Question (A query or seeking clarification on parts of the spec. Probably doesn't need the attention of all.)

Comments


swdbo commented Mar 7, 2024

httpx version:

v1.6.0

Current Behavior:

When using the -rlm (rate limit per minute) argument, the tool sends the specified number of requests all at once instead of distributing them evenly across the minute. This behavior is counterintuitive, as it leads to a burst of traffic followed by a period of inactivity, rather than spreading the requests out to avoid overwhelming target servers.

Expected Behavior:

The expected behavior for the -rlm argument is to distribute the specified number of requests evenly across the minute. For instance, if -rlm 10 is specified, one would expect a request to be sent every 6 seconds, thereby evenly pacing the load on the target server(s) and adhering more closely to a "rate limit."

Steps To Reproduce:

  1. Prepare a urls.txt file with multiple target URLs.
  2. Run httpx with verbose logging and the -rlm argument set to 10, like so: cat urls.txt | httpx -v -rlm 10
  3. Observe the output and timing of requests; all 10 requests are made at the same time, not spaced out at 1 request every 6 seconds as expected.

Anything else:

This unexpected behavior could lead to potential flooding of target websites, which is especially concerning in scenarios where careful rate limiting is necessary to comply with target server policies or to avoid unintentional Denial-of-Service conditions. An adjustment to ensure requests are distributed evenly throughout the specified time frame would greatly enhance the utility and reliability of the -rlm feature.

@swdbo swdbo added the Type: Bug label Mar 7, 2024
@Mzack9999
Member

@swdbo we opted for a rate limit with burst-at-maximum-speed capability, as httpx is mostly meant to interact with many different servers. Hence the typical usage is interacting with each target ideally only once per port, at maximum speed. The rate limit is mostly meant to impose a generic, very large upper bound while using maximum concurrency of connections towards different servers. Did you face any scenario where you experienced idle periods of inactivity or overwhelmed servers? I'm just trying to understand the context better, so that we could change the internals to address common use cases. Thanks!

@Mzack9999 Mzack9999 added the Type: Question label Jun 11, 2024
@Mzack9999
Member

Closing due to inactivity. Feel free to reopen.

@Mzack9999 Mzack9999 closed this as not planned Jun 11, 2024
Projects: none yet
Development: no branches or pull requests
2 participants