Sometimes our redis pub-sub cluster goes down (e.g. for maintenance, or when we upgrade to a new version, since we run it in kubernetes), and we'll receive the following error:
```
Error: Socket closed unexpectedly
```
We correctly log the error by catching it in the error handler, but we never seem to retry / reconnect -- the only way I can get a reconnect to actually happen is to continually restart the process until the reconnection succeeds.
Also, if the process tries to issue a command, we sometimes get an internal error that kills the process via an uncaught Node.js exception, even though I've added a `client.on('error')` handler as described above.
I followed the findings from #2120 and #2302, but those don't really seem to solve our problems.
What I'd like is to be able to specify a reconnect strategy so that we keep retrying (according to the `reconnectStrategy`) if we lose our TLS connection or fail to talk to a node in the cluster. I'd also like commands to be queued while we're offline, instead of an error being thrown and taking down the process.
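Something along these lines is what I mean — a sketch against node-redis v4's `createCluster` options (the URL is a placeholder, not our real config). Returning a number from `reconnectStrategy` means "retry after that many ms", so a capped backoff never gives up, and the `'error'` listener is there so a socket error doesn't become an uncaught exception:

```javascript
// Exponential backoff capped at 5s; returning a number means
// "retry after that many ms", so this strategy never gives up.
function reconnectStrategy(retries) {
  return Math.min(2 ** retries * 100, 5000);
}

function buildCluster() {
  // require() kept inside the function so the strategy above
  // can be exercised without a redis server.
  const { createCluster } = require('redis');
  const cluster = createCluster({
    rootNodes: [{ url: 'rediss://redis-headless:6379' }], // placeholder URL
    defaults: {
      socket: { reconnectStrategy },
    },
  });
  // Without an 'error' listener, a socket error surfaces as an
  // uncaught exception and kills the process.
  cluster.on('error', (err) => console.error('redis cluster error', err));
  return cluster;
}
```

This is only the shape I'd expect to work — the whole point of this issue is that cluster-mode doesn't seem to honor it today.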
Node.js Version
20.11.1
Redis Server Version
7.0.10
Node Redis Version
4.6.13
Platform
linux
Logs
No response
I am also seeing this same behavior. It seems to happen for us whenever we perform something that will require a rolling update to redis (kubernetes) -- for example a version upgrade or a config update.
The application ends up spewing errors and never reconnects, even after the cluster has been healthy again for a long time.
The only solution I could come up with is (like OP) to restart the process — in my case, by failing the health check so k8s restarts the pod. This is certainly a non-ideal solution.
The general steps to repro:
1. Run an app in k8s that uses redis in cluster mode (I had a 3-master/3-replica configuration)
2. Perform a rollout restart of redis (this incrementally restarts each redis pod)
Description
We use cluster-mode with redis for sharded pub-sub (we have 3 masters and 3 replicas in a kubernetes cluster).
We have the following args for the clients:
and then we create the client(s) like this:
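(The exact snippets from our codebase didn't carry over here; below is the general shape only — placeholder URLs and channel names, not our real config.) We connect with `createCluster` and use sharded pub-sub via `sSubscribe`:

```javascript
// General shape only -- URLs are placeholders, not our real topology.
const clusterOptions = {
  rootNodes: [
    { url: 'rediss://redis-0.redis:6379' },
    { url: 'rediss://redis-1.redis:6379' },
    { url: 'rediss://redis-2.redis:6379' },
  ],
};

async function makeSubscriber(channel, onMessage) {
  // require() inside the function so the options above can be
  // inspected without a redis server available.
  const { createCluster } = require('redis');
  const cluster = createCluster(clusterOptions);
  cluster.on('error', (err) => console.error('redis error', err));
  await cluster.connect();
  // Sharded pub-sub (SSUBSCRIBE, Redis 7+)
  await cluster.sSubscribe(channel, onMessage);
  return cluster;
}
```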