Broken pipes on large data writes #490
Comments
Is this because of a mismatch between sender and receiver?
@howardjohn, any chance it's something like hitting a limit on the number of open files or file watches? I think each pipe is going to consume two file descriptors (one referring to each end of the pipe).
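A minimal sketch of the point above (not from this repo; just an illustration): each pipe costs two file descriptors, so the process fd limit (`RLIMIT_NOFILE`) caps how many pipes can be open at once before `EMFILE`.

```python
import os
import resource

# Each os.pipe() call returns two file descriptors: a read end and a write end.
r, w = os.pipe()
fds_per_pipe = 2  # one descriptor per end of the pipe

soft, hard = resource.getrlimit(resource.RLIMIT_NOFILE)
# Roughly soft // 2 pipes fit under the soft fd limit before EMFILE
# (fewer in practice, since sockets and files share the same limit).
max_pipes = soft // fds_per_pipe
print(f"fd soft limit: {soft}, so at most ~{max_pipes} pipes")

os.close(r)
os.close(w)
```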
I am seeing this even with a single connection that sends a large amount of data, so it may be some limit, but I don't think it would be the number of open files.
Could be related to pipe buffer overflow: https://unix.stackexchange.com/questions/11946/how-big-is-the-pipe-buffer
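A quick way to see the pipe-buffer limit the link above discusses (a hedged sketch; the exact capacity is platform-dependent, typically 64 KiB on modern Linux): fill a non-blocking pipe until the kernel refuses more data.

```python
import os

r, w = os.pipe()
os.set_blocking(w, False)  # make the write end non-blocking

written = 0
try:
    while True:
        # Writes of <= PIPE_BUF (4096 on Linux) either complete fully
        # or raise BlockingIOError when the buffer is full.
        written += os.write(w, b"x" * 4096)
except BlockingIOError:
    pass  # the pipe buffer is full

print(f"pipe buffer capacity: ~{written} bytes")
os.close(r)
os.close(w)
```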
I'm pretty sure this has to do with the TCP connection. I can get a similar message if I run a comparable fortio load myself.
However, I can't seem to replicate the issue with fortio (in new or old builds).
Similar error here with flaky perf tests: https://prow.istio.io/view/gs/istio-prow/pr-logs/pull/istio_ztunnel/670/test_ztunnel/1695125088690507776
I am seeing the same error. I have installed 1.20 on an EKS cluster. I do not see this on version 1.18.3. I was also having problems with 1.19.x versions.
Fixed by #1067.
We keep seeing the data stream closed when running:

fortio load -uniform -c 1 -qps 100 -payload-size=40000 -t 2s -abort-on=-1 http://localhost:808
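The underlying failure mode can be reproduced in isolation (a minimal sketch, not the ztunnel code path): when one end of a pipe or connection goes away mid-transfer, a large write on the other end fails with EPIPE, which is exactly the "Broken pipe" message.

```python
import errno
import os

# Python already ignores SIGPIPE at startup, so a write to a dead peer
# surfaces as an OSError with errno EPIPE rather than killing the process.
r, w = os.pipe()
os.close(r)  # simulate the peer closing its end mid-transfer

try:
    os.write(w, b"x" * 40000)  # a large payload, like -payload-size=40000
except OSError as e:
    assert e.errno == errno.EPIPE
    print("write failed:", os.strerror(e.errno))  # "Broken pipe" on Linux
finally:
    os.close(w)
```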