Performance issues #478
Open
tworzenieweb opened this issue Feb 9, 2020 · 1 comment
tworzenieweb commented Feb 9, 2020

I recently switched my redis server to ardb. It was working fine for 2 weeks, but now I'm experiencing a lot of performance issues. The data is currently 35.88 GB and the server is running on default settings.

The HGETALL command takes 8-10 seconds (see the timing sketch below).
All CPU cores are busy the whole time.
Disk writes are very low - maybe 100 kB/s.
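
For reference, a minimal way to reproduce that measurement from the shell (the key name is a placeholder, and --bigkeys assumes ardb supports the SCAN/TYPE/HLEN commands it relies on):

  # time a single HGETALL against one large hash
  time redis-cli -h 127.0.0.1 -p 6379 HGETALL some:large:hash > /dev/null

  # sample the keyspace to find the largest hashes
  redis-cli -h 127.0.0.1 -p 6379 --bigkeys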

The server has 8 CPU cores (Intel(R) Xeon(R) CPU E5-2630 v3 @ 2.40GHz) and 4 GB of RAM.

I think this is the main problem. What spec would you suggest to make this work better?

127.0.0.1:6379> CONFIG GET rocksdb.options
1) "rocksdb.options"
2) "write_buffer_size=512M;max_write_buffer_number=6;min_write_buffer_number_to_merge=3;compression=kSnappyCompression;bloom_locality=1;memtable_prefix_bloom_size_ratio=0.1;block_based_table_factory={block_cache=512M;filter_policy=bloomfilter:10:true};create_if_missing=true;max_open_files=10000;rate_limiter_bytes_per_sec=50M;use_direct_io_for_flush_and_compaction=true;use_adaptive_mutex=true"
rocksdb.block_table_usage:536732576
rocksdb.block_table_pinned_usage:1247136
rocksdb_memtable_total:227538976
rocksdb_memtable_unflushed:227538976
rocksdb_table_readers_total:2640699315
rocksdb.estimate-table-readers-mem:0
rocksdb.cur-size-all-mem-tables:14680800
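
A rough memory budget from the options and counters above (assuming the counters are reported in bytes):

  block cache:    rocksdb.block_table_usage     536732576 B  ≈ 0.5 GB (at its 512M cap)
  table readers:  rocksdb_table_readers_total  2640699315 B  ≈ 2.5 GB (index/filter blocks)
  memtables:      rocksdb_memtable_total        227538976 B  ≈ 0.2 GB

  total ≈ 3.2 GB out of 4 GB of RAM, leaving little for the OS page cache.

On top of that, the configured write-buffer ceiling alone is 512M × 6 buffers = 3 GB per column family, so the options string above is arguably sized for a machine with much more than 4 GB of RAM.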
** Compaction Stats [5] **
Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop
----------------------------------------------------------------------------------------------------------------------------------------------------------
  L0      1/0   192.96 MB   0.5      0.0     0.0      0.0      12.1     12.1       0.0   1.0      0.0     49.9       248        63    3.937       0      0
  L1      9/0   476.73 MB   0.9     21.2    11.9      9.3      21.2     11.9       0.0   1.8     45.5     45.5       477        31   15.399     19M    586
  L2     90/0    4.98 GB   1.0     43.3    12.2     31.1      35.5      4.3       0.0   2.9     19.6     16.0      2268        98   23.144     73M   305K
  L3    419/0   26.25 GB   0.5     64.7     4.3     60.3      62.0      1.7       0.0  14.3     17.0     16.3      3896        54   72.146     87M  6935K
 Sum    519/0   31.88 GB   0.0    129.2    28.4    100.8     130.8     30.0       0.0  10.8     19.2     19.4      6889       246   28.006    180M  7240K
 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0         0         0    0.000       0      0
Uptime(secs): 165859.8 total, 0.8 interval
Flush(GB): cumulative 12.097, interval 0.000
AddFile(GB): cumulative 0.000, interval 0.000
AddFile(Total Files): cumulative 0, interval 0
AddFile(L0 Files): cumulative 0, interval 0
AddFile(Keys): cumulative 0, interval 0
Cumulative compaction: 130.75 GB write, 0.81 MB/s write, 129.18 GB read, 0.80 MB/s read, 6889.4 seconds
Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
[1] 02-09 21:20:00,874 INFO ========================Period Statistics Dump Begin===========================
[1] 02-09 21:20:00,874 INFO coststat_ping_all:calls=10,costs=4,cost_per_call=0,percents=100.0000%
[1] 02-09 21:20:00,874 INFO coststat_ping_range[0-1000]:calls=10,costs=4,cost_per_call=0,percents=100.0000%
[1] 02-09 21:20:00,874 INFO coststat_info_all:calls=34,costs=80854,cost_per_call=2378,percents=100.0000%
[1] 02-09 21:20:00,874 INFO coststat_info_range[0-1000]:calls=4,costs=53,cost_per_call=13,percents=11.7647%
[1] 02-09 21:20:00,874 INFO coststat_info_range[1000-5000]:calls=29,costs=75749,cost_per_call=2612,percents=85.2941%
[1] 02-09 21:20:00,874 INFO coststat_info_range[5000-10000]:calls=1,costs=5052,cost_per_call=5052,percents=2.9412%
[1] 02-09 21:20:00,874 INFO coststat_config_all:calls=20,costs=1005,cost_per_call=50,percents=100.0000%
[1] 02-09 21:20:00,874 INFO coststat_config_range[0-1000]:calls=20,costs=1005,cost_per_call=50,percents=100.0000%
[1] 02-09 21:20:00,874 INFO coststat_client_all:calls=20,costs=66,cost_per_call=3,percents=100.0000%
[1] 02-09 21:20:00,874 INFO coststat_client_range[0-1000]:calls=20,costs=66,cost_per_call=3,percents=100.0000%
[1] 02-09 21:20:00,874 INFO coststat_quit_all:calls=10,costs=2,cost_per_call=0,percents=100.0000%
[1] 02-09 21:20:00,874 INFO coststat_quit_range[0-1000]:calls=10,costs=2,cost_per_call=0,percents=100.0000%
[1] 02-09 21:20:00,874 INFO coststat_select_all:calls=8,costs=10,cost_per_call=1,percents=100.0000%
[1] 02-09 21:20:00,874 INFO coststat_select_range[0-1000]:calls=8,costs=10,cost_per_call=1,percents=100.0000%
[1] 02-09 21:20:00,874 INFO coststat_del_all:calls=84,costs=178999581,cost_per_call=2130947,percents=100.0000%
[1] 02-09 21:20:00,874 INFO coststat_del_range[10000-20000]:calls=29,costs=399837,cost_per_call=13787,percents=34.5238%
[1] 02-09 21:20:00,874 INFO coststat_del_range[20000-50000]:calls=37,costs=1004534,cost_per_call=27149,percents=44.0476%
[1] 02-09 21:20:00,874 INFO coststat_del_range[1000000-]:calls=18,costs=177595210,cost_per_call=9866400,percents=21.4286%
[1] 02-09 21:20:00,874 INFO coststat_exists_all:calls=95,costs=1861783,cost_per_call=19597,percents=100.0000%
[1] 02-09 21:20:00,874 INFO coststat_exists_range[10000-20000]:calls=69,costs=968227,cost_per_call=14032,percents=72.6316%
[1] 02-09 21:20:00,874 INFO coststat_exists_range[20000-50000]:calls=22,costs=662869,cost_per_call=30130,percents=23.1579%
[1] 02-09 21:20:00,874 INFO coststat_exists_range[50000-100000]:calls=4,costs=230687,cost_per_call=57671,percents=4.2105%
[1] 02-09 21:20:00,874 INFO coststat_hget_all:calls=447,costs=14858845,cost_per_call=33241,percents=100.0000%
[1] 02-09 21:20:00,874 INFO coststat_hget_range[1000-5000]:calls=70,costs=255201,cost_per_call=3645,percents=15.6600%
[1] 02-09 21:20:00,874 INFO coststat_hget_range[5000-10000]:calls=14,costs=100405,cost_per_call=7171,percents=3.1320%
[1] 02-09 21:20:00,874 INFO coststat_hget_range[10000-20000]:calls=1,costs=19594,cost_per_call=19594,percents=0.2237%
[1] 02-09 21:20:00,874 INFO coststat_hget_range[20000-50000]:calls=279,costs=9449051,cost_per_call=33867,percents=62.4161%
[1] 02-09 21:20:00,874 INFO coststat_hget_range[50000-100000]:calls=81,costs=4824468,cost_per_call=59561,percents=18.1208%
[1] 02-09 21:20:00,874 INFO coststat_hget_range[100000-200000]:calls=2,costs=210126,cost_per_call=105063,percents=0.4474%
[1] 02-09 21:20:00,874 INFO coststat_hgetall_all:calls=508,costs=5084506324,cost_per_call=10008870,percents=100.0000%
[1] 02-09 21:20:00,874 INFO coststat_hgetall_range[1000000-]:calls=508,costs=5084506324,cost_per_call=10008870,percents=100.0000%
[1] 02-09 21:20:00,874 INFO coststat_hset_all:calls=1149,costs=28248374,cost_per_call=24585,percents=100.0000%
[1] 02-09 21:20:00,874 INFO coststat_hset_range[1000-5000]:calls=7,costs=23875,cost_per_call=3410,percents=0.6092%
[1] 02-09 21:20:00,874 INFO coststat_hset_range[5000-10000]:calls=4,costs=24723,cost_per_call=6180,percents=0.3481%
[1] 02-09 21:20:00,875 INFO coststat_hset_range[10000-20000]:calls=507,costs=7561134,cost_per_call=14913,percents=44.1253%
[1] 02-09 21:20:00,875 INFO coststat_hset_range[20000-50000]:calls=555,costs=16290350,cost_per_call=29351,percents=48.3029%
[1] 02-09 21:20:00,875 INFO coststat_hset_range[50000-100000]:calls=75,costs=4241347,cost_per_call=56551,percents=6.5274%
[1] 02-09 21:20:00,875 INFO coststat_hset_range[100000-200000]:calls=1,costs=106945,cost_per_call=106945,percents=0.0870%
[1] 02-09 21:20:00,875 INFO slave_sync_total_commands_processed:0
[1] 02-09 21:20:00,875 INFO slave_sync_instantaneous_ops_per_sec:0
[1] 02-09 21:20:00,875 INFO total_commands_processed:15377048
[1] 02-09 21:20:00,875 INFO instantaneous_ops_per_sec:3
[1] 02-09 21:20:00,875 INFO total_connections_received:20119
[1] 02-09 21:20:00,875 INFO rejected_connections:0
[1] 02-09 21:20:00,875 INFO ========================Period Statistics Dump End===========================
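
Assuming the coststat figures are in microseconds, the dump above works out to 5084506324 µs / 508 calls ≈ 10008870 µs ≈ 10.0 s per HGETALL, which matches the 8-10 second latency reported at the top of this issue.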
diwu1989 commented Oct 8, 2020

Doesn't HGETALL cause a scan through the entire rocksdb?
What does your read IOPS look like during that stall?
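
One quick way to capture that while an HGETALL is stalling, assuming the sysstat package is installed:

  # extended device statistics refreshed every second;
  # the r/s column is read IOPS and %util shows how saturated the device is
  iostat -x 1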
