A strategy for performance testing #368
@valkey-io/core-team Would like thoughts on the above proposal, and implicitly would appreciate a vote.
re: hardware I guess one precursor question: What hardware/architectures is Valkey planning on targeting?
I wasn't aware that there were performance benchmarking tools, but I love this idea, so I'm adding my vote explicitly here.
Ideally one pair of arm hosts and one pair of x86 hosts. Something like an m7i and an m7g is probably "broadly sufficient". If GCP would like to donate some hardware, we could run it on their infra as well. :)
A few fixed jobs are good to have, but what I've felt the need for when doing certain optimizations is specific runs that indicate the performance improvement for specific scenarios/workloads. For example, I had a PR to avoid looking up the expire dict for keys that don't have a TTL. This is only slow if there are many keys in the expire dict and also many accesses to keys that don't have a TTL. I convinced myself with results on my laptop by running the test several times with similar outcomes, but I doubt the automated Redis benchmark could have caught it. When we test a few fixed workloads, we will always miss other workloads and scenarios.
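The scenario above could be reproduced as a one-off run with valkey-benchmark (which keeps the redis-benchmark interface, including arbitrary commands with the `__rand_int__` placeholder). This is only a sketch; key names, counts, and pipeline depth are illustrative, not part of any agreed spec:

```
# 1. Fill the expire dict with volatile keys.
valkey-benchmark -n 1000000 -r 1000000 -P 16 SET volatile:__rand_int__ x EX 3600

# 2. Load keys without a TTL.
valkey-benchmark -n 1000000 -r 1000000 -P 16 SET plain:__rand_int__ x

# 3. Measure reads of the non-volatile keys, before and after the patch.
valkey-benchmark -n 1000000 -r 1000000 -c 50 GET plain:__rand_int__
```

Comparing step 3's throughput across the two builds would surface exactly the kind of scenario-specific win a fixed workload set misses.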
I totally agree with this. Before designing the test, I would like to raise a few concerns of my own.
I have an idea about performance. We can follow the approach of the TPC (Transaction Processing Performance Council) and design server configurations suitable for the various workloads of a NoSQL database. For example (a system similar to Quora): the data would be generated proportionally and discretely to cover scenarios of different sizes. This work would only involve defining workloads that match actual production (covering both the client and the server).
@artikell Do you mean we should have an advanced "traffic model" where we can define, with probabilities, how many of each kind of command to send and the size of the data? I have heard about such benchmarks (for some commercial products) where statistics are collected from users and used to run benchmark tests with the user's own traffic model. It can be very powerful. Maybe the first thing we need is a way to collect these statistics from a running node. (It would contain only statistics, no actual key names or value content.)
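To make the idea concrete, a traffic model could be as simple as a command mix plus size distributions, from which a load generator draws operations. A minimal sketch in Python; the command weights and value sizes here are invented for illustration, not measurements from any real deployment:

```python
import random
from collections import Counter

# Hypothetical traffic model: weights and value-size ranges are
# illustrative numbers only, not data from a real node.
TRAFFIC_MODEL = {
    "GET":    {"weight": 0.60, "value_size": None},
    "SET":    {"weight": 0.25, "value_size": (64, 1024)},  # min/max bytes
    "DEL":    {"weight": 0.10, "value_size": None},
    "EXPIRE": {"weight": 0.05, "value_size": None},
}

def generate_commands(n, keyspace=100_000, seed=None):
    """Yield n (command, key, value_size) tuples drawn from the model."""
    rng = random.Random(seed)
    names = list(TRAFFIC_MODEL)
    weights = [TRAFFIC_MODEL[c]["weight"] for c in names]
    for _ in range(n):
        cmd = rng.choices(names, weights=weights)[0]
        key = f"key:{rng.randrange(keyspace)}"
        size_range = TRAFFIC_MODEL[cmd]["value_size"]
        size = rng.randint(*size_range) if size_range else 0
        yield cmd, key, size

if __name__ == "__main__":
    mix = Counter(cmd for cmd, _, _ in generate_commands(10_000, seed=42))
    print(mix)  # command mix roughly follows the configured weights
```

A real version would load the model from statistics exported by a running node (counts per command, key/value size histograms) rather than hard-coded weights.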
@zuiderkwast Yes, a traffic model. However, it is difficult to get real-time access to user data (it involves privacy and company assets), and there may also be differences between companies. So I think this model can be updated continuously, and the operational standards can be implemented first. We need to control the scope of the discussion; we can continue it in https://github.com/orgs/valkey-io/discussions/398 — the current issue requires a performance benchmark standard. We cannot expect this method to detect all performance issues. It can do:
It cannot perform dynamic validation, such as expiration and eviction strategies; those need to be designed separately. A small idea: at least have a fixed performance report first.
Opening this issue, as we no longer have a benchmarking framework. Performance is an essential part of Valkey, and we need to make it easier to evaluate whether something is degrading (or improving) performance. The previous framework was not open-source, as it was maintained by filipe from Redis. The specifications are still here: https://github.com/redis/redis-benchmarks-specification, although they were never really reviewed, so I'm not convinced we want to reference them; I think we should start fresh.
My vision is that we implement some performance tests that run on test runners with dedicated hardware (ideally bare-metal, but that might get expensive), which we can trigger when a specific tag gets added to a PR, and which also run on a daily schedule. The daily runs can be used for generating historical graphs.
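The tag-triggered plus daily setup maps naturally onto a GitHub Actions workflow. A hypothetical fragment (label name, runner labels, and entry-point script are all placeholders, not anything we've agreed on):

```yaml
name: performance-tests
on:
  pull_request:
    types: [labeled]
  schedule:
    - cron: "0 3 * * *"   # daily run for historical graphs
jobs:
  benchmark:
    if: github.event_name == 'schedule' || github.event.label.name == 'run-benchmarks'
    runs-on: [self-hosted, benchmark]   # pool of dedicated hardware
    steps:
      - uses: actions/checkout@v4
      - run: ./tests/benchmarks/run.sh  # hypothetical entry point
```

Self-hosted runners keep the results comparable across runs, which GitHub's shared runners can't guarantee.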
I would like to at least see the following sets of tests:
We should run each of them for at least a minute or so, ideally in parallel.
Next steps
Future work
I would also like to extend this to automatically generate perf profiles and flamegraphs of the above results, and have them always available on the website. That gives folks a way to see where time is being spent and to investigate possible optimizations.
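For the flamegraph step, the usual recipe is perf plus Brendan Gregg's FlameGraph scripts. A sketch of what each daily run could execute on the server host; the sample rate, duration, and script paths are assumptions:

```
# Sample the running server for 60s at 99 Hz with call graphs.
perf record -F 99 -g -p "$(pidof valkey-server)" -- sleep 60

# Fold the stacks and render an SVG flamegraph.
perf script \
  | ./FlameGraph/stackcollapse-perf.pl \
  | ./FlameGraph/flamegraph.pl > valkey-flamegraph.svg
```

The resulting SVGs are static files, so publishing them alongside the historical graphs on the website should be cheap.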