Is your feature request related to a problem? Please describe.
Hey there,
the current implementation of `fetchSockets` in `ClusterAdapterWithHeartbeat` only resolves once a response has been received from every cluster node. In some situations this isn't needed, and an optimistic response would suffice (i.e. return the responses of all nodes that are alive).
Nodes that go down (based on the logic inside `cleanupTimer`) are also not removed from the `missingUids` list of any pending requests (`customRequests`). So even when a node is detected as down and therefore can't return a response, an error is still thrown.
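To make the problem concrete, here is a minimal standalone model of the behavior described above (this is illustrative, not the adapter's real code): a request only resolves once every known node has answered, so a node that never responds forces a timeout error even though all live nodes already answered.

```typescript
// Model: a request tracks the uids of nodes it is still waiting on
// (analogous to `missingUids`) and only resolves when that set is empty.
function fetchFromNodes(
  nodeUids: string[],
  respond: (uid: string) => Promise<string> | null, // null = node never answers
  timeoutMs: number
): Promise<string[]> {
  return new Promise((resolve, reject) => {
    const missingUids = new Set(nodeUids);
    const responses: string[] = [];
    const timer = setTimeout(
      () => reject(new Error(`timeout: still waiting for ${[...missingUids].join(", ")}`)),
      timeoutMs
    );
    for (const uid of nodeUids) {
      const p = respond(uid);
      if (p === null) continue; // dead node: its uid stays in missingUids forever
      p.then((res) => {
        responses.push(res);
        missingUids.delete(uid);
        if (missingUids.size === 0) {
          clearTimeout(timer);
          resolve(responses);
        }
      });
    }
  });
}
```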
Describe the solution you'd like
- Add an optional flag to `fetchSockets` that causes the function to resolve with the values of all nodes that answered.
- Remove the ids of nodes that were detected as not alive by the cleanup timer from the `missingUids` of `customRequests`.
Describe alternatives you've considered
An optional flag could also be set on `ClusterAdapterOptions`; however, the options for the existing adapters would then need to be typed as an intersection (e.g. change `createAdapter(pool: Pool, opts: Partial<PostgresAdapterOptions> = {})` to `createAdapter(pool: Pool, opts: Partial<PostgresAdapterOptions & ClusterAdapterOptions> = {})`), otherwise the value cannot be set.
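To illustrate why the intersection is needed, here is a self-contained sketch (the option fields and the `optimisticFetch` flag name are hypothetical stand-ins, and the `Pool` parameter is omitted for brevity): with only `Partial<PostgresAdapterOptions>`, passing a cluster-level option would not typecheck, while the intersection accepts both.

```typescript
// Illustrative stand-ins, not the adapters' actual option lists.
interface PostgresAdapterOptions {
  tableName: string;
}
interface ClusterAdapterOptions {
  optimisticFetch: boolean; // hypothetical name for the proposed flag
}

// With Partial<PostgresAdapterOptions> alone, `optimisticFetch` would be
// rejected by the compiler; the intersection makes it a valid option.
function createAdapter(
  opts: Partial<PostgresAdapterOptions & ClusterAdapterOptions> = {}
) {
  return opts;
}

const adapter = createAdapter({ tableName: "socket_io", optimisticFetch: true });
```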
Additional context
I already opened an issue in the adapter repository but didn't get a response there.
I'd be happy to implement the changes myself if there aren't any concerns.