benchmark: add calibrate-n script
This script should help identify the best N when creating/updating benchmarks.

Signed-off-by: RafaelGSS <rafael.nunu@hotmail.com>
PR-URL: https://github.com/nodejs/node/pull/59186
Reviewed-By: Vinícius Lourenço Claro Cardoso <contact@viniciusl.com.br>
Reviewed-By: James M Snell <jasnell@gmail.com>
parent af77e4bf2f
commit 698cbd0f8b

2 changed files with 333 additions and 0 deletions

@@ -9,6 +9,7 @@
* [Benchmark analysis requirements](#benchmark-analysis-requirements)
* [Running benchmarks](#running-benchmarks)
* [Running individual benchmarks](#running-individual-benchmarks)
* [Calibrating the number of iterations with calibrate-n.js](#calibrating-the-number-of-iterations-with-calibrate-njs)
* [Running all benchmarks](#running-all-benchmarks)
* [Specifying CPU Cores for Benchmarks with run.js](#specifying-cpu-cores-for-benchmarks-with-runjs)
* [Filtering benchmarks](#filtering-benchmarks)

@@ -142,6 +143,46 @@ buffers/buffer-tostring.js n=10000000 len=1024 arg=true: 3498295.68561504
buffers/buffer-tostring.js n=10000000 len=1024 arg=false: 3783071.1678948295
```

### Calibrating the number of iterations with calibrate-n.js

Before running benchmarks, it's often useful to determine the optimal number of iterations (`n`)
that provides statistically stable results. The `calibrate-n.js` tool helps find this value by
running a benchmark multiple times with increasing `n` values until the coefficient of variation
(CV, the standard deviation divided by the mean) falls below a target threshold.

```console
$ node benchmark/calibrate-n.js benchmark/buffers/buffer-compare.js

--------------------------------------------------------
Benchmark: buffers/buffer-compare.js
--------------------------------------------------------
What we are trying to find: The optimal number of iterations (n)
that produces consistent benchmark results without wasting time.

How it works:
1. Run the benchmark multiple times with a specific n value
2. Group results by configuration
3. If overall CV is above 5% or any configuration has CV above 10%, increase n and try again
4. Stop when we have stable results (overall CV < 5% and all configs CV < 10%) or max increases reached

Configuration:
- Starting n: 10 iterations
- Runs per n value: 30
- Target CV threshold: 5% (lower CV = more stable results)
- Max increases: 6
- Increase factor: 10x
```
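
Conceptually, the calibration loop is small: measure the benchmark's rate over repeated runs,
compute the coefficient of variation, and multiply `n` by the increase factor until the results
settle. The sketch below is not the actual `calibrate-n.js` implementation, only a minimal
illustration of that loop under the defaults listed above; `calibrate`,
`coefficientOfVariation`, and `runBenchmark` are hypothetical names, and the per-configuration
10% check from step 3 is omitted for brevity.

```js
// Minimal sketch of the calibration idea; not the actual calibrate-n.js code.

function coefficientOfVariation(samples) {
  const mean = samples.reduce((a, b) => a + b, 0) / samples.length;
  const variance =
    samples.reduce((acc, x) => acc + (x - mean) ** 2, 0) / samples.length;
  return Math.sqrt(variance) / mean; // CV = stddev / mean
}

// runBenchmark(n) is a hypothetical async function that runs the benchmark
// once with the given n and resolves to the measured rate (ops/sec).
async function calibrate(runBenchmark, {
  startN = 10,        // --start-n
  runs = 30,          // --runs
  cvThreshold = 0.05, // --cv-threshold
  maxIncreases = 6,   // --max-increases
  increase = 10,      // --increase
} = {}) {
  let n = startN;
  for (let attempt = 0; attempt <= maxIncreases; attempt++) {
    const rates = [];
    for (let i = 0; i < runs; i++) {
      rates.push(await runBenchmark(n)); // one run at the current n
    }
    const cv = coefficientOfVariation(rates);
    if (cv < cvThreshold) return { n, cv }; // stable: this n is good enough
    n *= increase; // too noisy: grow n and try again
  }
  return null; // never stabilized within the allowed increases
}
```
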

The tool accepts several options:

* `--runs=N`: Number of runs for each n value (default: 30)
* `--cv-threshold=N`: Target coefficient of variation threshold (default: 0.05)
* `--max-increases=N`: Maximum number of n increases to try (default: 6)
* `--start-n=N`: Initial n value to start with (default: 10)
* `--increase=N`: Factor by which to increase n (default: 10)
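
For example, to calibrate with fewer runs per step and a stricter stability target (the flag
values here are illustrative, and placing the flags before the benchmark path is an assumption
about the script's argument parsing):

```console
$ node benchmark/calibrate-n.js --runs=10 --cv-threshold=0.03 benchmark/buffers/buffer-compare.js
```
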
Once you've determined a stable `n` value, you can use it when running your benchmarks.
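
For instance, if calibration settled on `n=1000000` (an illustrative value), you can pass it to
the benchmark with the same `key=value` syntax used when running individual benchmarks:

```console
$ node benchmark/buffers/buffer-compare.js n=1000000
```
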
### Running all benchmarks
Similar to running individual benchmarks, a group of benchmarks can be executed