
Support deadline/timeout options for execute method #1335

Open
cyhii opened this issue Jun 28, 2023 · 9 comments
@cyhii

cyhii commented Jun 28, 2023

It seems there is no way to specify a deadline/timeout when we call the execute method. Sometimes all connections in the pool are occupied by pretty slow queries, so many other query commands pile up in the wait queue for a long time. They are eventually executed (after the slow queries and everything queued ahead of them), but the result may be ignored because the event-bus delivery has already timed out.

In this case the SQL queue may look like the screenshot below; all queries that have stayed in the wait queue longer than a sensible time should be dropped, a bit like fail-fast.

[screenshot: queued SQL commands]

Deadline/timeout support would resolve this problem. It could work like the withDeadlineAfter method of a gRPC stub, or a single option on the connection pool would also be OK.

Would you consider this?

@tsegismont
Contributor

We recommend creating a separate pool, or creating connections on demand, for long-running queries (e.g. analytics).

Beyond that, the ability to configure a timeout depends on the database. For example, with Pg you can do SET statement_timeout = x.
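As a sketch of the Pg approach, assuming Vert.x 4.x and the reactive Pg client (the connection details and table name are placeholders), a per-connection statement_timeout can be set right after acquiring a connection:

```java
import io.vertx.core.Vertx;
import io.vertx.pgclient.PgConnectOptions;
import io.vertx.pgclient.PgPool;
import io.vertx.sqlclient.PoolOptions;

public class StatementTimeoutSketch {
  public static void main(String[] args) {
    Vertx vertx = Vertx.vertx();
    PgConnectOptions connectOptions = new PgConnectOptions()
        .setHost("localhost")      // placeholder connection details
        .setDatabase("mydb")
        .setUser("user")
        .setPassword("secret");
    PgPool pool = PgPool.pool(vertx, connectOptions, new PoolOptions().setMaxSize(5));

    pool.getConnection().onSuccess(conn ->
        // Ask Postgres to cancel any statement on this connection
        // that runs longer than 5 seconds.
        conn.query("SET statement_timeout = 5000").execute()
            .compose(v -> conn.query("SELECT * FROM big_table").execute())
            .onFailure(err -> System.out.println("Query cancelled: " + err.getMessage()))
            .onComplete(ar -> conn.close()));
  }
}
```

Note that statement_timeout only bounds execution time on the server; it does not bound time spent waiting in the pool's queue, which is the situation described above.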

@cyhii
Author

cyhii commented Jul 3, 2023

DB query timeout may be a separate problem, see #668. Even if we use a separate connection pool, the queries that should time out will still be blocked in the long wait queue.

Another problem is that we cannot assume a query is always fast or always slow; it depends on the state of the DB. A simple query may be very slow when the DB CPU is at 100%, and vice versa. In such a case a deadline/timeout option would not only protect our application from being blocked on the queue, but would also apply back-pressure to protect the DB.

@vietj
Member

vietj commented Jul 3, 2023

You can limit the wait queue size of the pool to limit this effect.
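For reference, a minimal sketch of the bounded-wait-queue suggestion, assuming Vert.x 4.x PoolOptions (the sizes are illustrative, not recommendations):

```java
import io.vertx.sqlclient.PoolOptions;

public class BoundedQueueSketch {
  public static void main(String[] args) {
    // Bound both the pool size and the wait queue so acquisition
    // requests fail fast instead of queueing indefinitely behind
    // slow queries.
    PoolOptions poolOptions = new PoolOptions()
        .setMaxSize(10)            // illustrative: at most 10 connections
        .setMaxWaitQueueSize(50);  // illustrative: reject beyond 50 waiters
    System.out.println(poolOptions.getMaxWaitQueueSize());
  }
}
```

Once the queue is full, further acquisition attempts fail immediately with an error, which gives fail-fast behavior; choosing a good size is workload-dependent.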

@vietj
Member

vietj commented Jul 3, 2023

I also think we could have something built into the Vert.x Future that would help.

@cyhii
Author

cyhii commented Jul 4, 2023

you can limit the wait queue size of the pool to limit this effect

Thanks, this is our current solution, but it may be a little hard to find a proper queue size in such a case.

I think also we could have something built in vertx future too that would help

I think it would be great if Vert.x futures gained more features here, maybe a CancellableFuture or something similar?

@vietj
Member

vietj commented Jul 4, 2023 via email

@cyhii
Author

cyhii commented Jul 5, 2023

This seems to be a more general feature. If so, might it belong in the connection pool interface/implementation in vertx-core, which vertx-sql-client depends on?

@pablosaavedra-rappi

You could work around your issue by explicitly obtaining the connection instead of calling execute directly. That way, the connection timeout setting of the pool will be honored and you'll get an error if it is exceeded. I've also added a pull request that fixes the behavior in the execute method, but it's awaiting review.
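A sketch of this workaround, assuming Vert.x 4.x where PoolOptions exposes a connection (acquisition) timeout; the connect options and query are placeholders:

```java
import java.util.concurrent.TimeUnit;

import io.vertx.core.Vertx;
import io.vertx.pgclient.PgConnectOptions;
import io.vertx.pgclient.PgPool;
import io.vertx.sqlclient.PoolOptions;

public class ConnectionTimeoutSketch {
  public static void main(String[] args) {
    Vertx vertx = Vertx.vertx();
    PoolOptions poolOptions = new PoolOptions()
        .setMaxSize(5)
        .setConnectionTimeout(2)                     // give up acquiring after...
        .setConnectionTimeoutUnit(TimeUnit.SECONDS); // ...2 seconds in the queue
    PgPool pool = PgPool.pool(vertx, new PgConnectOptions(), poolOptions);

    // Acquire explicitly instead of calling pool.query(...).execute(),
    // so the acquisition timeout above is honored.
    pool.getConnection()
        .onSuccess(conn ->
            conn.query("SELECT 1").execute()
                .onComplete(ar -> conn.close()))
        .onFailure(err ->
            System.out.println("No connection within timeout: " + err.getMessage()));
  }
}
```

The onFailure branch fires when no connection becomes available within the configured timeout, which gives the fail-fast behavior requested in this issue.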

@cyhii
Author

cyhii commented Jul 13, 2023

You could work around your issue by explicitly obtaining the connection instead of calling execute directly. That way, the connection timeout setting of the pool will be honored and you'll get an error if it is exceeded. I've also added a pull request that fixes the behavior in the execute method, but it's awaiting review.

It seems to be a good solution!
