ERR_OUT_OF_CPU_MEM: Not enough host memory to execute the query #807
I can only query normally after restarting heavydb. Is there a heavydb parameter to avoid this problem? The full `heavysql` session and `free -g` output are posted below.
Hi,

You can free up the memory used for the CPU cache with the SQL command `ALTER SESSION CLEAR CPU MEMORY;`.

That said, when working with systems that have such limited memory, it's essential to review the memory parameters. By default, 80% of the system memory is allocated as CPU cache, which can lead to heap-memory exhaustion and result in Out-of-Memory (OOM) errors. On systems with very low resources, I recommend limiting the memory available for the CPU cache using the `cpu-buffer-mem-bytes` parameter, setting it to 6GB (6442450944 bytes). This leaves the rest to the operating system, the OS-managed filesystem cache, and the heap memory of the process.

Let me provide some insight into how the data is organized in our tables and how much data needs to be loaded into memory to handle your simple queries. In our tables, each column is subdivided into chunks of N elements, a size referred to as the fragment size. This size is defined at the table level and has a default value of 32,000,000. For instance, if you run a query like `SELECT column1 FROM table1 LIMIT 10`, the first chunk of `column1` is loaded into the CPU cache before processing. If N columns are selected, the chunks of all those columns are loaded into memory. In other words, if you run a query like `SELECT * FROM table LIMIT X`, the first 32 million records are read from disk and loaded into the cache. Then an array capable of holding the X requested rows is allocated in the heap memory of the process; the array is populated and transmitted back to the client for visualization.

The response time depends on several factors, including the number of columns, the fragment size (or the total number of records in the table, if that is less than the fragment size), and the speed of the disks. For example, if you have a very fast disk subsystem with a throughput of 10GB/sec, the disk transfer of the 14 chunks totaling 1.9GB takes 381ms.
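As a sketch, the cache limit can be set in the server configuration file (the path below is an assumption; use the config file your installation actually reads):

```ini
# /var/lib/heavyai/heavy.conf  -- path is an assumption, adjust for your install
# Limit the CPU buffer cache to 6 GB so that the OS, the filesystem cache,
# and the process heap keep the remaining memory on a 16 GB host.
cpu-buffer-mem-bytes = 6442450944
```

Restart heavydb after changing the file; at runtime you can additionally release the cache on demand with `ALTER SESSION CLEAR CPU MEMORY;`.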
The same query on the same table with a fragment size of 2 million takes just 71ms to load the data and uses less memory.
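A sketch of how a smaller fragment size is set, assuming a hypothetical column list (the fragment size is fixed per table at creation time, so an existing table has to be recreated and reloaded):

```sql
-- Hypothetical columns for illustration; fragment_size cannot be changed
-- on an existing table, so create a new one and reload the data.
CREATE TABLE DAILYSESSION_TABLE_SMALLFRAG (
  USERID TEXT ENCODING DICT(32),
  START_TIME TIMESTAMP,
  END_TIME TIMESTAMP
) WITH (fragment_size = 2000000);

COPY DAILYSESSION_TABLE_SMALLFRAG
FROM '/var/lib/heavyai/storage/import/sample_datasets/dailysession_table.csv';
```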
Unlike Oracle and other row-based databases, when only specific columns are required the query runs faster if only those columns are listed in the SELECT clause. So if you ask the server to return only 4 of the 14 available columns, the response time will be quicker and less memory will be consumed.
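For example, using four of the column names from your RAD_DETAIL table, a narrow projection reads only four chunks into the CPU cache instead of fourteen:

```sql
-- Only the chunks of these four columns are loaded into the CPU cache,
-- rather than all 14 columns pulled in by SELECT *.
SELECT USERID, SERVICE_TYPE, START_TIME, END_TIME
FROM RAD_DETAIL
LIMIT 10;
```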
Hope this helps, but I encourage you to test aggregate queries rather than simple projection queries. Candido
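P.S. An example of the kind of aggregate I mean, with `SERVICE_TYPE` as a sample grouping column:

```sql
-- Aggregates reduce on the server and return only a handful of rows,
-- so no large result array has to be allocated in process heap memory.
SELECT SERVICE_TYPE, COUNT(*) AS sessions
FROM RAD_DETAIL
GROUP BY SERVICE_TYPE
ORDER BY sessions DESC;
```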
When memory is insufficient, how can the query still be executed normally? After the error first occurs, every query against any table in the database fails with:

ERR_OUT_OF_CPU_MEM: Not enough host memory to execute the query

Thanks!
```
[heavyai@test storage]$ free -g
              total  used  free  shared  buff/cache  available
Mem:             15     9     0       1           5          3
Swap:             7     7     0

heavysql> \version
HeavyDB Server Version: 7.0.0-20230926-d315676d23
heavysql> \t
CUSTOMER_TABLE
RAD_DETAIL
DAILYSESSION_TABLE
heavysql> COPY DAILYSESSION_TABLE FROM '/var/lib/heavyai/storage/import/sample_datasets/dailysession_table.csv';
Result
Loaded: 251343812 recs, Rejected: 0 recs in 1352.280000 secs
1 rows returned.
Execution time: 1352442 ms, Total time: 1352484 ms
heavysql>
heavysql> \t
CUSTOMER_TABLE
RAD_DETAIL
DAILYSESSION_TABLE
heavysql> select * from RAD_DETAIL limit 10;
USERID|SERVICE_TYPE|ACCOUNT_INFO|FRAMED_PROTOCOL|FRAMED_IP_ADDRESS|FRAMED_IP_NETMASK|IDLE_TIMEOUT|CLASS|SESSION_TIMEOUT|CISCO_AVPAIR|IP_TYPE|ASSIGN_IP_POOL|START_TIME|END_TIME
......
10 rows returned.
Execution time: 707 ms, Total time: 1080 ms
heavysql> select count(*) from DAILYSESSION_TABLE;
EXPR$0
251343812
1 rows returned.
Execution time: 1080 ms, Total time: 1313 ms
heavysql> select * from DAILYSESSION_TABLE limit 3;
ERR_OUT_OF_CPU_MEM: Not enough host memory to execute the query
heavysql> \memory_summary
HeavyDB Server CPU Memory Summary:
            MAX         USE   ALLOCATED        FREE
    12837.02 MB  7375.76 MB  8192.00 MB   816.24 MB
HeavyDB Server GPU Memory Summary:
[GPU]           MAX          USE    ALLOCATED         FREE
heavysql> select * from DAILYSESSION_TABLE limit 3;
ERR_OUT_OF_CPU_MEM: Not enough host memory to execute the query
```