On May 11, 2024, user8555 changed the title from "memory_limit is not honored when 1k databases are attached and 1k connections are open" to "DuckDB needs more than 11GB memory to process INSERT OR UPDATE statements".
Setup
I have attached 1024 database instances and I'm either using Appenders on them in parallel or running INSERT OR UPDATE on them non-concurrently.
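For concreteness, a minimal sketch of this setup, with assumptions: the file names (db_N.duckdb), the events table, and the row values are hypothetical, INSERT OR REPLACE stands in for the upsert statement the reporter calls "INSERT OR UPDATE", the Appender code path is elided, and the real code presumably opens one connection per attached database:

```cpp
// Sketch only: one DuckDB instance, 1024 attached database files,
// one connection per database. Names and the upsert are assumptions.
#include "duckdb.hpp"

#include <memory>
#include <string>
#include <vector>

int main() {
    duckdb::DuckDB db(nullptr); // in-memory catalog the files attach to
    duckdb::Connection setup(db);

    // Attach 1024 database files and open one connection per database.
    std::vector<std::unique_ptr<duckdb::Connection>> conns;
    for (int i = 0; i < 1024; i++) {
        auto name = "db_" + std::to_string(i);
        setup.Query("ATTACH '" + name + ".duckdb' AS " + name + ";");
        conns.push_back(std::make_unique<duckdb::Connection>(db));
    }

    const bool dry_run = false; // skipping the Query() call keeps memory flat
    for (int i = 0; i < 1024; i++) {
        if (dry_run) {
            continue;
        }
        // Executing this line is what drives the memory growth described below.
        auto name = "db_" + std::to_string(i);
        auto result = conns[i]->Query("INSERT OR REPLACE INTO " + name +
                                      ".main.events VALUES (1, 'payload');");
        if (result->HasError()) {
            return 1;
        }
    }
    return 0;
}
```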
What happens?
If I run in a dry-run mode that comments out the conn.Query() line, the memory consumption of the entire process stays flat.
The moment I disable dry-run mode, memory consumption quickly shoots up to ~100% and the process OOMs most of the time.
Digging deeper, it is the OS file cache that grows all the way to the maximum possible, and DuckDB's reclamation of reclaimable memory is not fast enough to prevent the OOMs.
I've set the memory_limit to 21GB and the total available memory is 55GB.
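The cap was presumably applied along these lines (a sketch; the actual configuration code isn't shown, and `con` refers to a connection as in the sketch above). Note that SET memory_limit caps DuckDB's buffer manager, while the OS file cache is managed by the kernel:

```cpp
con.Query("SET memory_limit = '21GB';"); // later lowered to '11GB'
```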
Zero SELECT queries. Just INSERT OR UPDATE statements being executed across the 1k connections non-concurrently.
Later, when I set the memory_limit to 11GB, the process OOMs with this error:

```
'duckdb::OutOfMemoryException'
  what(): {"exception_type":"Out of Memory","exception_message":"failed to pin block of size 256.0 KiB (11.1 GiB/11.1 GiB used)"}
```
To Reproduce
I'll send a reproduction once I know whether this is expected or unexpected behavior.
OS:
CentOS 9
DuckDB Version:
0.10.1
DuckDB Client:
C++
Full Name:
Ajay Gopalakrishnan
Affiliation:
Meta
What is the latest build you tested with? If possible, we recommend testing with the latest nightly build.
I have tested with a stable release
Did you include all relevant data sets for reproducing the issue?
No - I cannot share the data sets because they are confidential
Did you include all code required to reproduce the issue?
Did you include all relevant configuration (e.g., CPU architecture, Python version, Linux distribution) to reproduce the issue?