I've searched existing issues and found nothing related to my issue.
Describe the feature you want to add
The iteration count on the collection runner could be used for performance testing if it had an option to disable running the iterations in parallel. For instance, I'd like to run a certain request 10 times so that I can measure the average response time. Running the iterations in parallel shows how the server performs under high load, but running them serially would be more useful for evaluating whether a specific change makes things better or worse.
Also, on the Runner page, it would be very helpful if the results table included timing information broken down per request. Currently you can dig this information out by drilling into each iteration and finding the request you're interested in, but that's tedious.
And finally, some way to run multiple iterations of a single request would be very helpful, especially when I'm trying to tune an individual operation. For now, I can just move the request to a folder and run that “collection”, but a more official way would be better.
My use case is testing whether a query optimization is actually making things better or worse. I want to run an individual query 10 times and compare the average time for the whole run before and after each tweak I make.
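In the meantime, the workaround is scripting this outside the app. A minimal sketch in TypeScript (Node 18+, which has `fetch` and `performance` as globals; the endpoint and iteration count are placeholders for whatever request is being tuned) looks like this:

```ts
// Minimal sketch: run one request N times strictly in sequence and
// report the average latency. The endpoint and count are placeholders.
const url = "https://example.com/api/query"; // hypothetical endpoint under test
const iterations = 10;

async function main(): Promise<void> {
  const timings: number[] = [];
  for (let i = 0; i < iterations; i++) {
    const start = performance.now();
    await fetch(url); // awaiting here keeps the runs serial, not parallel
    const elapsed = performance.now() - start;
    timings.push(elapsed);
    console.log(`iteration ${i + 1}: ${elapsed.toFixed(1)} ms`);
  }
  const average = timings.reduce((sum, t) => sum + t, 0) / timings.length;
  console.log(`average over ${iterations} runs: ${average.toFixed(1)} ms`);
}

main().catch(console.error);
```

Having this built into the runner, with per-request timings shown in the results table, would make a script like this unnecessary.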
Mockups or Images of the feature
Please work on it when you find some time. I believe this should be a relatively straightforward implementation.
Showing the timing information would also be really helpful.