
Spark starts crashing when some of the eventhub partitions go down. #664

Open
pablo-statsig opened this issue Dec 17, 2022 · 0 comments

@pablo-statsig

Three times in the past few months we have seen one or more of the partitions in our Event Hub go down due to an internal Azure problem. The remaining partitions keep working, and data continues to be written to them. However, during this time Spark starts to crash with a "Futures timed out" error. Is there a way to have Spark keep running even when it cannot connect to some of the partitions? Is there a manual way to exclude certain partitions from being read during these outages?
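For reference, a minimal sketch of the connector configuration where the relevant timeout knobs live, assuming the `EventHubsConf` API from this repo (`setReceiverTimeout` / `setOperationTimeout`); the connection string, Event Hub name, and timeout values below are placeholders, and lengthening the timeouts is only a guess at a stopgap, not a confirmed fix for the partition-outage crash:

```scala
import java.time.Duration
import org.apache.spark.eventhubs.{ConnectionStringBuilder, EventHubsConf}

// Placeholder connection string -- substitute your namespace and key.
val connectionString = ConnectionStringBuilder("Endpoint=sb://<namespace>.servicebus.windows.net/;SharedAccessKeyName=<key-name>;SharedAccessKey=<key>")
  .setEventHubName("<eventhub-name>")
  .build

// Lengthen the receive/operation timeouts so that a slow or briefly
// unavailable partition is less likely to surface as "Futures timed out".
// The 120s values here are illustrative, not recommended defaults.
val ehConf = EventHubsConf(connectionString)
  .setConsumerGroup("$Default")
  .setReceiverTimeout(Duration.ofSeconds(120))
  .setOperationTimeout(Duration.ofSeconds(120))

val df = spark.readStream
  .format("eventhubs")
  .options(ehConf.toMap)
  .load()
```

Whether the connector exposes any way to skip individual partitions entirely is exactly what this issue is asking; the sketch above only shows where timeout behavior is configured.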
