Update REDIS* to VALKEY* in object.c and utils/create-cluster/README (#380)

1. Rename the `REDIS_COMPARE_*` macros defined in object.c to `STRING_COMPARE_*` (see the short before/after sketch below).
2. Rename `Redis` to `Valkey` and `redis-cli` to `valkey-cli` in printed output and descriptions in object.c and utils/create-cluster/README.
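
For quick reference, the macro rename amounts to the following before/after (values copied verbatim from the object.c diff below):

```c
/* Before (src/object.c) */
#define REDIS_COMPARE_BINARY (1<<0)
#define REDIS_COMPARE_COLL (1<<1)

/* After this commit */
#define STRING_COMPARE_BINARY (1<<0)
#define STRING_COMPARE_COLL (1<<1)
```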

---------

Signed-off-by: Sher Sun <sher.sun@huawei.com>
Co-authored-by: Sher Sun <sher.sun@huawei.com>
WM0323 and Sher Sun committed Apr 26, 2024
1 parent 19c4c64 commit a5a1377
Showing 2 changed files with 16 additions and 16 deletions.
22 changes: 11 additions & 11 deletions src/object.c
@@ -733,11 +733,11 @@ robj *getDecodedObject(robj *o) {
* use ll2string() to get a string representation of the numbers on the stack
* and compare the strings, it's much faster than calling getDecodedObject().
*
- * Important note: when REDIS_COMPARE_BINARY is used a binary-safe comparison
+ * Important note: when STRING_COMPARE_BINARY is used a binary-safe comparison
* is used. */

- #define REDIS_COMPARE_BINARY (1<<0)
- #define REDIS_COMPARE_COLL (1<<1)
+ #define STRING_COMPARE_BINARY (1<<0)
+ #define STRING_COMPARE_COLL (1<<1)

int compareStringObjectsWithFlags(const robj *a, const robj *b, int flags) {
serverAssertWithInfo(NULL,a,a->type == OBJ_STRING && b->type == OBJ_STRING);
@@ -759,7 +759,7 @@ int compareStringObjectsWithFlags(const robj *a, const robj *b, int flags) {
blen = ll2string(bufb,sizeof(bufb),(long) b->ptr);
bstr = bufb;
}
- if (flags & REDIS_COMPARE_COLL) {
+ if (flags & STRING_COMPARE_COLL) {
return strcoll(astr,bstr);
} else {
int cmp;
@@ -773,12 +773,12 @@ int compareStringObjectsWithFlags(const robj *a, const robj *b, int flags) {

/* Wrapper for compareStringObjectsWithFlags() using binary comparison. */
int compareStringObjects(const robj *a, const robj *b) {
- return compareStringObjectsWithFlags(a,b,REDIS_COMPARE_BINARY);
+ return compareStringObjectsWithFlags(a,b,STRING_COMPARE_BINARY);
}

/* Wrapper for compareStringObjectsWithFlags() using collation. */
int collateStringObjects(const robj *a, const robj *b) {
- return compareStringObjectsWithFlags(a,b,REDIS_COMPARE_COLL);
+ return compareStringObjectsWithFlags(a,b,STRING_COMPARE_COLL);
}

/* Equal string objects return 1 if the two objects are the same from the
@@ -1379,12 +1379,12 @@ sds getMemoryDoctorReport(void) {
"The new Sam and I will be back to our programming as soon as I "
"finished rebooting.\n");
} else {
- s = sdsnew("Sam, I detected a few issues in this Redis instance memory implants:\n\n");
+ s = sdsnew("Sam, I detected a few issues in this Valkey instance memory implants:\n\n");
if (big_peak) {
- s = sdscat(s," * Peak memory: In the past this instance used more than 150% the memory that is currently using. The allocator is normally not able to release memory after a peak, so you can expect to see a big fragmentation ratio, however this is actually harmless and is only due to the memory peak, and if the Redis instance Resident Set Size (RSS) is currently bigger than expected, the memory will be used as soon as you fill the Redis instance with more data. If the memory peak was only occasional and you want to try to reclaim memory, please try the MEMORY PURGE command, otherwise the only other option is to shutdown and restart the instance.\n\n");
+ s = sdscat(s," * Peak memory: In the past this instance used more than 150% the memory that is currently using. The allocator is normally not able to release memory after a peak, so you can expect to see a big fragmentation ratio, however this is actually harmless and is only due to the memory peak, and if the Valkey instance Resident Set Size (RSS) is currently bigger than expected, the memory will be used as soon as you fill the Valkey instance with more data. If the memory peak was only occasional and you want to try to reclaim memory, please try the MEMORY PURGE command, otherwise the only other option is to shutdown and restart the instance.\n\n");
}
if (high_frag) {
- s = sdscatprintf(s," * High total RSS: This instance has a memory fragmentation and RSS overhead greater than 1.4 (this means that the Resident Set Size of the Redis process is much larger than the sum of the logical allocations Redis performed). This problem is usually due either to a large peak memory (check if there is a peak memory entry above in the report) or may result from a workload that causes the allocator to fragment memory a lot. If the problem is a large peak memory, then there is no issue. Otherwise, make sure you are using the Jemalloc allocator and not the default libc malloc. Note: The currently used allocator is \"%s\".\n\n", ZMALLOC_LIB);
+ s = sdscatprintf(s," * High total RSS: This instance has a memory fragmentation and RSS overhead greater than 1.4 (this means that the Resident Set Size of the Valkey process is much larger than the sum of the logical allocations Valkey performed). This problem is usually due either to a large peak memory (check if there is a peak memory entry above in the report) or may result from a workload that causes the allocator to fragment memory a lot. If the problem is a large peak memory, then there is no issue. Otherwise, make sure you are using the Jemalloc allocator and not the default libc malloc. Note: The currently used allocator is \"%s\".\n\n", ZMALLOC_LIB);
}
if (high_alloc_frag) {
s = sdscatprintf(s," * High allocator fragmentation: This instance has an allocator external fragmentation greater than 1.1. This problem is usually due either to a large peak memory (check if there is a peak memory entry above in the report) or may result from a workload that causes the allocator to fragment memory a lot. You can try enabling 'activedefrag' config option.\n\n");
@@ -1393,13 +1393,13 @@ sds getMemoryDoctorReport(void) {
s = sdscatprintf(s," * High allocator RSS overhead: This instance has an RSS memory overhead is greater than 1.1 (this means that the Resident Set Size of the allocator is much larger than the sum what the allocator actually holds). This problem is usually due to a large peak memory (check if there is a peak memory entry above in the report), you can try the MEMORY PURGE command to reclaim it.\n\n");
}
if (high_proc_rss) {
- s = sdscatprintf(s," * High process RSS overhead: This instance has non-allocator RSS memory overhead is greater than 1.1 (this means that the Resident Set Size of the Redis process is much larger than the RSS the allocator holds). This problem may be due to Lua scripts or Modules.\n\n");
+ s = sdscatprintf(s," * High process RSS overhead: This instance has non-allocator RSS memory overhead is greater than 1.1 (this means that the Resident Set Size of the Valkey process is much larger than the RSS the allocator holds). This problem may be due to Lua scripts or Modules.\n\n");
}
if (big_slave_buf) {
s = sdscat(s," * Big replica buffers: The replica output buffers in this instance are greater than 10MB for each replica (on average). This likely means that there is some replica instance that is struggling receiving data, either because it is too slow or because of networking issues. As a result, data piles on the master output buffers. Please try to identify what replica is not receiving data correctly and why. You can use the INFO output in order to check the replicas delays and the CLIENT LIST command to check the output buffers of each replica.\n\n");
}
if (big_client_buf) {
- s = sdscat(s," * Big client buffers: The clients output buffers in this instance are greater than 200K per client (on average). This may result from different causes, like Pub/Sub clients subscribed to channels bot not receiving data fast enough, so that data piles on the Redis instance output buffer, or clients sending commands with large replies or very large sequences of commands in the same pipeline. Please use the CLIENT LIST command in order to investigate the issue if it causes problems in your instance, or to understand better why certain clients are using a big amount of memory.\n\n");
+ s = sdscat(s," * Big client buffers: The clients output buffers in this instance are greater than 200K per client (on average). This may result from different causes, like Pub/Sub clients subscribed to channels bot not receiving data fast enough, so that data piles on the Valkey instance output buffer, or clients sending commands with large replies or very large sequences of commands in the same pipeline. Please use the CLIENT LIST command in order to investigate the issue if it causes problems in your instance, or to understand better why certain clients are using a big amount of memory.\n\n");
}
if (many_scripts) {
s = sdscat(s," * Many scripts: There seem to be many cached scripts in this instance (more than 1000). This may be because scripts are generated and `EVAL`ed, instead of being parameterized (with KEYS and ARGV), `SCRIPT LOAD`ed and `EVALSHA`ed. Unless `SCRIPT FLUSH` is called periodically, the scripts' caches may end up consuming most of your memory.\n\n");
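For context on the hunks above, here is a small standalone sketch (plain C, not Valkey internals; everything except the two flag macros is hypothetical) of the binary-safe versus locale-collation behavior that STRING_COMPARE_BINARY and STRING_COMPARE_COLL select:

```c
/* Standalone illustration, not part of this commit. Mirrors the flag
 * dispatch in compareStringObjectsWithFlags(): strcoll() when the
 * collation flag is set, otherwise a memcmp()-based binary-safe compare.
 * Helper names are hypothetical. */
#include <stdio.h>
#include <string.h>
#include <locale.h>

#define STRING_COMPARE_BINARY (1<<0)
#define STRING_COMPARE_COLL (1<<1)

static int compare_with_flags(const char *a, const char *b, int flags) {
    if (flags & STRING_COMPARE_COLL) {
        return strcoll(a, b); /* locale-aware collation */
    } else {
        size_t alen = strlen(a), blen = strlen(b);
        size_t minlen = alen < blen ? alen : blen;
        int cmp = memcmp(a, b, minlen); /* binary-safe comparison */
        if (cmp == 0) return (int)alen - (int)blen;
        return cmp;
    }
}

int main(void) {
    setlocale(LC_COLLATE, "");
    printf("binary: %d\n", compare_with_flags("foo", "foobar", STRING_COMPARE_BINARY));
    printf("coll:   %d\n", compare_with_flags("foo", "foobar", STRING_COMPARE_COLL));
    return 0;
}
```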
10 changes: 5 additions & 5 deletions utils/create-cluster/README
@@ -1,11 +1,11 @@
- create-cluster is a small script used to easily start a big number of Redis
+ create-cluster is a small script used to easily start a big number of Valkey
instances configured to run in cluster mode. Its main goal is to allow manual
- testing in a condition which is not easy to replicate with the Redis cluster
+ testing in a condition which is not easy to replicate with the Valkey cluster
unit tests, for example when a lot of instances are needed in order to trigger
a given bug.

The tool can also be used just to easily create a number of instances in a
- Redis Cluster in order to experiment a bit with the system.
+ Valkey Cluster in order to experiment a bit with the system.

USAGE
---
@@ -15,8 +15,8 @@ To create a cluster, follow these steps:
1. Edit create-cluster and change the start / end port, depending on the
number of instances you want to create.
2. Use "./create-cluster start" in order to run the instances.
- 3. Use "./create-cluster create" in order to execute redis-cli --cluster create, so that
- an actual Redis cluster will be created. (If you're accessing your setup via a local container, ensure that the CLUSTER_HOST value is changed to your local IP)
+ 3. Use "./create-cluster create" in order to execute valkey-cli --cluster create, so that
+ an actual Valkey cluster will be created. (If you're accessing your setup via a local container, ensure that the CLUSTER_HOST value is changed to your local IP)
4. Now you are ready to play with the cluster. AOF files and logs for each instances are created in the current directory.

In order to stop a cluster:
