feat(third_party): add oatpp, googletest, benchmark
All checks were successful
sm-rpc / build (Debug, aarch64-linux-gnu) (push) Successful in 1m7s
sm-rpc / build (Debug, arm-linux-gnueabihf) (push) Successful in 1m15s
sm-rpc / build (Debug, host.gcc) (push) Successful in 1m4s
sm-rpc / build (Debug, mipsel-linux-gnu) (push) Successful in 1m16s
sm-rpc / build (Release, aarch64-linux-gnu) (push) Successful in 1m34s
sm-rpc / build (Release, arm-linux-gnueabihf) (push) Successful in 1m33s
sm-rpc / build (Release, host.gcc) (push) Successful in 1m23s
sm-rpc / build (Release, mipsel-linux-gnu) (push) Successful in 1m30s
149
third_party/benchmark/docs/AssemblyTests.md
vendored
Normal file
@@ -0,0 +1,149 @@
# Assembly Tests

The Benchmark library provides a number of functions whose primary
purpose is to affect assembly generation, including `DoNotOptimize`
and `ClobberMemory`. In addition there are other functions,
such as `KeepRunning`, for which generating good assembly is paramount.
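
For orientation, here is a minimal sketch of how these functions are typically
used inside a benchmark; the benchmark name `BM_copy_vector` and its body are
illustrative, not taken from this test suite:

```c++
#include <benchmark/benchmark.h>

#include <vector>

static void BM_copy_vector(benchmark::State& state) {
  std::vector<int> src(static_cast<size_t>(state.range(0)), 1);
  for (auto _ : state) {
    // DoNotOptimize keeps the copy from being optimized away entirely...
    std::vector<int> copy = src;
    benchmark::DoNotOptimize(copy.data());
    // ...and ClobberMemory forces pending writes to be treated as visible.
    benchmark::ClobberMemory();
  }
}
BENCHMARK(BM_copy_vector)->Arg(1024);
BENCHMARK_MAIN();
```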

For these functions it's important to have tests that verify the
correctness and quality of the implementation. This requires testing
the code generated by the compiler.

This document describes how the Benchmark library tests compiler output,
as well as how to properly write new tests.


## Anatomy of a Test

Writing a test has two steps:

* Write the code you want to generate assembly for.
* Add `// CHECK` lines to match against the verified assembly.

Example:
```c++
// CHECK-LABEL: test_add:
extern "C" int test_add() {
    extern int ExternInt;
    return ExternInt + 1;

    // CHECK: movl ExternInt(%rip), %eax
    // CHECK: addl %eax
    // CHECK: ret
}
```

#### LLVM Filecheck

[LLVM's Filecheck](https://llvm.org/docs/CommandGuide/FileCheck.html)
is used to test the generated assembly against the `// CHECK` lines
specified in the test's source file. Please see the documentation
linked above for information on how to write `CHECK` directives.

#### Tips and Tricks:

* Tests should match the minimal amount of output required to establish
correctness. `CHECK` directives don't have to match on the exact next line
after the previous match, so tests should omit checks for unimportant
bits of assembly. ([`CHECK-NEXT`](https://llvm.org/docs/CommandGuide/FileCheck.html#the-check-next-directive)
can be used to ensure a match occurs exactly after the previous match.)

* The tests are compiled with `-O3 -g0`, so we're only testing the
optimized output.

* The assembly output is further cleaned up using `tools/strip_asm.py`.
This removes comments, assembler directives, and unused labels before
the test is run.

* The generated and stripped assembly file for a test is output under
`<build-directory>/test/<test-name>.s`.

* Filecheck supports using [`CHECK` prefixes](https://llvm.org/docs/CommandGuide/FileCheck.html#cmdoption-check-prefixes)
to specify lines that should only match in certain situations.
The Benchmark tests use `CHECK-CLANG` and `CHECK-GNU` for lines that
are only expected to match Clang's or GCC's output respectively (see the
sketch after this list). Normal `CHECK` lines match against all compilers.
(Note: `CHECK-NOT` and `CHECK-LABEL` are NOT prefixes. They are versions
of non-prefixed `CHECK` lines.)

* Use `extern "C"` to disable name mangling for specific functions. This
makes them easier to name in the `CHECK` lines.
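
As an illustration of the prefix mechanism, here is a hedged sketch; the
function `test_add_one` and the exact instruction sequences matched are
hypothetical, not taken from the vendored tests:

```c++
// CHECK-LABEL: test_add_one:
extern "C" int test_add_one(int x) {
    return x + 1;

    // Matched only against Clang's output:
    // CHECK-CLANG: addl $1,
    // Matched only against GCC's output:
    // CHECK-GNU: leal 1(
    // Matched against every compiler's output:
    // CHECK: ret
}
```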

## Problems Writing Portable Tests

Writing tests which check the code generated by a compiler is
inherently non-portable. Different compilers and even different compiler
versions may generate entirely different code. The Benchmark tests
must tolerate this.

LLVM Filecheck provides a number of mechanisms to help write
"more portable" tests, including [matching using regular expressions](https://llvm.org/docs/CommandGuide/FileCheck.html#filecheck-pattern-matching-syntax),
allowing the creation of [named variables](https://llvm.org/docs/CommandGuide/FileCheck.html#filecheck-variables)
for later matching, and [checking non-sequential matches](https://llvm.org/docs/CommandGuide/FileCheck.html#the-check-dag-directive).

#### Capturing Variables

For example, say GCC stores a variable in a register but Clang stores
it in memory. To write a test that tolerates both cases we "capture"
the destination of the store, and then use the captured expression
to write the remainder of the test.

```c++
// CHECK-LABEL: test_div_no_op_into_shr:
extern "C" int test_div_no_op_into_shr(int value) {
    int divisor = 2;
    benchmark::DoNotOptimize(divisor); // hide the value from the optimizer
    return value / divisor;

    // CHECK: movl $2, [[DEST:.*]]
    // CHECK: idivl [[DEST]]
    // CHECK: ret
}
```

#### Using Regular Expressions to Match Differing Output

Often tests require testing assembly lines which may subtly differ
between compilers or compiler versions. A common example of this
is matching stack frame addresses. In this case regular expressions
can be used to match the differing bits of output. For example:

<!-- {% raw %} -->
```c++
int ExternInt;
struct Point { int x, y, z; };

// CHECK-LABEL: test_store_point:
extern "C" void test_store_point() {
    Point p{ExternInt, ExternInt, ExternInt};
    benchmark::DoNotOptimize(p);

    // CHECK: movl ExternInt(%rip), %eax
    // CHECK: movl %eax, -{{[0-9]+}}(%rsp)
    // CHECK: movl %eax, -{{[0-9]+}}(%rsp)
    // CHECK: movl %eax, -{{[0-9]+}}(%rsp)
    // CHECK: ret
}
```
<!-- {% endraw %} -->

## Current Requirements and Limitations

The tests require Filecheck to be installed along the `PATH` of the
build machine. Otherwise the tests will be disabled.

Additionally, as mentioned in the previous section, codegen tests are
inherently non-portable. Currently the tests are limited to:

* x86_64 targets.
* Compiled with GCC or Clang.

Further work could be done, at least on a limited basis, to extend the
tests to other architectures and compilers (using `CHECK` prefixes).

Furthermore, the tests fail for builds which specify additional flags
that modify code generation, including `--coverage` or `-fsanitize=`.
3
third_party/benchmark/docs/_config.yml
vendored
Normal file
@@ -0,0 +1,3 @@
theme: jekyll-theme-minimal
logo: /assets/images/icon_black.png
show_downloads: true
BIN
third_party/benchmark/docs/assets/images/icon.png
vendored
Normal file
Binary file not shown.
BIN
third_party/benchmark/docs/assets/images/icon.xcf
vendored
Normal file
Binary file not shown.
BIN
third_party/benchmark/docs/assets/images/icon_black.png
vendored
Normal file
Binary file not shown.
BIN
third_party/benchmark/docs/assets/images/icon_black.xcf
vendored
Normal file
Binary file not shown.
13
third_party/benchmark/docs/dependencies.md
vendored
Normal file
@@ -0,0 +1,13 @@
# Build tool dependency policy

We follow the [Foundational C++ support policy](https://opensource.google/documentation/policies/cplusplus-support) for our build tools. In
particular, the ["Build Systems" section](https://opensource.google/documentation/policies/cplusplus-support#build-systems).

## CMake

The currently supported version is CMake 3.10, as of 2023-08-10. Most modern
distributions include newer versions, for example:

* Ubuntu 20.04 provides CMake 3.16.3
* Debian 11.4 provides CMake 3.18.4
* Ubuntu 22.04 provides CMake 3.22.1
12
third_party/benchmark/docs/index.md
vendored
Normal file
@@ -0,0 +1,12 @@
# Benchmark

* [Assembly Tests](AssemblyTests.md)
* [Dependencies](dependencies.md)
* [Perf Counters](perf_counters.md)
* [Platform Specific Build Instructions](platform_specific_build_instructions.md)
* [Python Bindings](python_bindings.md)
* [Random Interleaving](random_interleaving.md)
* [Reducing Variance](reducing_variance.md)
* [Releasing](releasing.md)
* [Tools](tools.md)
* [User Guide](user_guide.md)
35
third_party/benchmark/docs/perf_counters.md
vendored
Normal file
@@ -0,0 +1,35 @@
<a name="perf-counters" />

# User-Requested Performance Counters

When running benchmarks, the user may choose to request collection of
performance counters. This may be useful in investigation scenarios: narrowing
down the cause of a regression, or verifying that the underlying cause of a
performance improvement matches expectations.

This feature is available if:

* The benchmark is run on an architecture featuring a Performance Monitoring
Unit (PMU),
* The benchmark is compiled with support for collecting counters. Currently,
this requires [libpfm](http://perfmon2.sourceforge.net/), which is built as a
dependency via Bazel.

The feature does not require modifying benchmark code. Counter collection is
handled at the boundaries where timer collection is also handled.

To opt in:
* If using a Bazel build, add `--define pfm=1` to your build flags.
* If using CMake:
  * Install `libpfm4-dev`, e.g. `apt-get install libpfm4-dev`.
  * Enable the CMake flag `BENCHMARK_ENABLE_LIBPFM` in `CMakeLists.txt`.

To use, pass a comma-separated list of counter names through the
`--benchmark_perf_counters` flag. The names are decoded through libpfm, meaning
they are platform specific, but some (e.g. `CYCLES` or `INSTRUCTIONS`) are
mapped by libpfm to platform specifics; see the libpfm
[documentation](http://perfmon2.sourceforge.net/docs.html) for more details.

The counter values are reported back through the [User Counters](../README.md#custom-counters)
mechanism, meaning they are available in all the formats (e.g. JSON) supported
by User Counters.
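
Since no source changes are required, any ordinary benchmark works; a hedged
sketch (the benchmark below and the binary name `./mybenchmark` are
illustrative, not part of this document):

```c++
#include <benchmark/benchmark.h>

#include <string>

// An ordinary benchmark: nothing perf-counter-specific is needed in the code.
static void BM_string_append(benchmark::State& state) {
  for (auto _ : state) {
    std::string s;
    s.append("hello, world");
    benchmark::DoNotOptimize(s);
  }
}
BENCHMARK(BM_string_append);
BENCHMARK_MAIN();
```

Running it as, say, `./mybenchmark --benchmark_perf_counters=CYCLES,INSTRUCTIONS`
would then report the requested counters next to the usual columns, via the
User Counters mechanism described above.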
48
third_party/benchmark/docs/platform_specific_build_instructions.md
vendored
Normal file
@@ -0,0 +1,48 @@
# Platform Specific Build Instructions

## Building with GCC

When the library is built using GCC it is necessary to link with the pthread
library due to how GCC implements `std::thread`. Failing to link to pthread will
lead to runtime exceptions (unless you're using libc++), not linker errors. See
[issue #67](https://github.com/google/benchmark/issues/67) for more details. You
can link to pthread by adding `-pthread` to your linker command. Note that you can
also use `-lpthread`, but there are potential issues with the ordering of command
line parameters if you use that.

On QNX, the pthread library is part of libc and usually included automatically
(see
[`pthread_create()`](https://www.qnx.com/developers/docs/7.1/index.html#com.qnx.doc.neutrino.lib_ref/topic/p/pthread_create.html)).
There's no separate pthread library to link.

## Building with Visual Studio 2015 or 2017

The `shlwapi` library (`-lshlwapi`) is required to support a call to `CPUInfo` which reads the registry. Either add `shlwapi.lib` under `[ Configuration Properties > Linker > Input ]`, or use the following:

```
// Alternatively, can add libraries using linker options.
#ifdef _WIN32
#pragma comment ( lib, "Shlwapi.lib" )
#ifdef _DEBUG
#pragma comment ( lib, "benchmarkd.lib" )
#else
#pragma comment ( lib, "benchmark.lib" )
#endif
#endif
```

You can also use the graphical version of CMake:
* Open `CMake GUI`.
* Under `Where to build the binaries`, use the same path as the source plus `build`.
* Under `CMAKE_INSTALL_PREFIX`, use the same path as the source plus `install`.
* Click `Configure`, `Generate`, `Open Project`.
* If the build fails, try deleting the entire directory and starting again, or unticking options to build less.

## Building with Intel 2015 Update 1 or Intel System Studio Update 4

See the instructions for building with Visual Studio. Once built, right-click on the solution and change the build to Intel.

## Building on Solaris

If you're running benchmarks on Solaris, you'll want the kstat library linked in
too (`-lkstat`).
34
third_party/benchmark/docs/python_bindings.md
vendored
Normal file
@@ -0,0 +1,34 @@
# Building and installing Python bindings

Python bindings are available as wheels on [PyPI](https://pypi.org/project/google-benchmark/) for importing and
using Google Benchmark directly in Python.
Currently, pre-built wheels exist for macOS (both ARM64 and Intel x86), Linux x86-64 and 64-bit Windows.
Supported Python versions are Python 3.8 - 3.12.

To install Google Benchmark's Python bindings, run:

```bash
python -m pip install --upgrade pip  # for manylinux2014 support
python -m pip install google-benchmark
```

In order to keep your system Python interpreter clean, it is advisable to run these commands in a virtual
environment. See the [official Python documentation](https://docs.python.org/3/library/venv.html)
on how to create virtual environments.

To build a wheel directly from source, you can follow these steps:
```bash
git clone https://github.com/google/benchmark.git
cd benchmark
# create a virtual environment and activate it
python3 -m venv venv --system-site-packages
source venv/bin/activate  # .\venv\Scripts\Activate.ps1 on Windows

# upgrade Python's system-wide packages
python -m pip install --upgrade pip build
# builds the wheel and stores it in the directory "dist".
python -m build
```

NB: Building wheels from source requires Bazel. For platform-specific instructions on how to install Bazel,
refer to the [Bazel installation docs](https://bazel.build/install).
13
third_party/benchmark/docs/random_interleaving.md
vendored
Normal file
@@ -0,0 +1,13 @@
<a name="interleaving" />

# Random Interleaving

[Random Interleaving](https://github.com/google/benchmark/issues/1051) is a
technique to lower run-to-run variance. It randomly interleaves repetitions of a
microbenchmark with repetitions from other microbenchmarks in the same benchmark
test. Data shows it is able to lower run-to-run variance by
[40%](https://github.com/google/benchmark/issues/1051) on average.

To use it, set `--benchmark_enable_random_interleaving=true`, optionally specify
a non-zero repetition count (`--benchmark_repetitions=9`), and optionally
decrease the per-repetition time (`--benchmark_min_time=0.1`).
98
third_party/benchmark/docs/reducing_variance.md
vendored
Normal file
@@ -0,0 +1,98 @@
# Reducing Variance

<a name="disabling-cpu-frequency-scaling" />

## Disabling CPU Frequency Scaling

If you see this error:

```
***WARNING*** CPU scaling is enabled, the benchmark real time measurements may be noisy and will incur extra overhead.
```

you might want to disable the CPU frequency scaling while running the
benchmark, as well as consider other ways to stabilize the performance of
your system while benchmarking.

Exactly how to do this depends on the Linux distribution,
desktop environment, and installed programs. Specific details are a moving
target, so we will not attempt to exhaustively document them here.

One simple option is to use the `cpupower` program to change the
performance governor to "performance". This tool is maintained along with
the Linux kernel and provided by your distribution.

It must be run as root, like this:

```bash
sudo cpupower frequency-set --governor performance
```

After this you can verify that all CPUs are using the performance governor
by running this command:

```bash
cpupower frequency-info -o proc
```

The benchmarks you subsequently run will have less variance.

<a name="reducing-variance" />

## Reducing Variance in Benchmarks

The Linux CPU frequency governor [discussed
above](user_guide#disabling-cpu-frequency-scaling) is not the only source
of noise in benchmarks. Some, but not all, of the sources of variance
include:

1. On multi-core machines not all CPUs/CPU cores/CPU threads run the same
speed, so running a benchmark one time and then again may give a
different result depending on which CPU it ran on.
2. CPU scaling features that run on the CPU, like Intel's Turbo Boost and
AMD Turbo Core and Precision Boost, can temporarily change the CPU
frequency even when using the "performance" governor on Linux.
3. Context switching between CPUs, or scheduling competition on the CPU the
benchmark is running on.
4. Intel Hyperthreading or AMD SMT causing the same issue as above.
5. Cache effects caused by code running on other CPUs.
6. Non-uniform memory architectures (NUMA).

These can cause variance in benchmark results within a single run
(`--benchmark_repetitions=N`) or across multiple runs of the benchmark
program.
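
As a hedged sketch of making that within-run spread visible, assuming the
library's standard `Repetitions` and `ReportAggregatesOnly` options (the
benchmark itself is illustrative):

```c++
#include <benchmark/benchmark.h>

static void BM_spin(benchmark::State& state) {
  for (auto _ : state) {
    // A small piece of work whose timing we want to sample repeatedly.
    benchmark::DoNotOptimize(state.iterations());
  }
}
// Run 9 repetitions and report mean/median/stddev aggregates, so the
// spread between repetitions shows up directly in the output.
BENCHMARK(BM_spin)->Repetitions(9)->ReportAggregatesOnly(true);
BENCHMARK_MAIN();
```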

Reducing sources of variance is OS and architecture dependent, which is one
reason some companies maintain machines dedicated to performance testing.

Some of the easier and more effective ways of reducing variance on a typical
Linux workstation are:

1. Use the performance governor as [discussed
above](user_guide#disabling-cpu-frequency-scaling).
2. Disable processor boosting by:
   ```sh
   echo 0 | sudo tee /sys/devices/system/cpu/cpufreq/boost
   ```
   See the Linux kernel's
   [boost.txt](https://www.kernel.org/doc/Documentation/cpu-freq/boost.txt)
   for more information.
3. Set the benchmark program's task affinity to a fixed CPU. For example:
   ```sh
   taskset -c 0 ./mybenchmark
   ```
4. Disable Hyperthreading/SMT. This can be done in the BIOS or using the
`/sys` file system (see the LLVM project's [Benchmarking
tips](https://llvm.org/docs/Benchmarking.html)).
5. Close other programs that do non-trivial things based on timers, such as
your web browser, desktop environment, etc.
6. Reduce the working set of your benchmark to fit within the L1 cache, but
do be aware that this may lead you to optimize for an unrealistic
situation; see the sketch after this list.
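
To make the last point concrete, here is a hedged sketch (the benchmark and the
sizes swept are illustrative, not from this document) of varying the working-set
size so you can see where results stop fitting in L1 and behavior changes:

```c++
#include <benchmark/benchmark.h>

#include <numeric>
#include <vector>

static void BM_sum(benchmark::State& state) {
  // Working set of state.range(0) ints; the smallest sizes fit in L1.
  std::vector<int> data(static_cast<size_t>(state.range(0)), 1);
  for (auto _ : state) {
    long sum = std::accumulate(data.begin(), data.end(), 0L);
    benchmark::DoNotOptimize(sum);
  }
}
// Sweep the element count from 256 (1 KiB) up to ~1M elements (4 MiB).
BENCHMARK(BM_sum)->RangeMultiplier(4)->Range(256, 256 << 12);
BENCHMARK_MAIN();
```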

Further resources on this topic:

1. The LLVM project's [Benchmarking
tips](https://llvm.org/docs/Benchmarking.html).
2. The Arch Wiki [CPU frequency
scaling](https://wiki.archlinux.org/title/CPU_frequency_scaling) page.
31
third_party/benchmark/docs/releasing.md
vendored
Normal file
@@ -0,0 +1,31 @@
# How to release

* Make sure you're on main and synced to HEAD
* Ensure the project builds and tests run
    * `parallel -j0 exec ::: test/*_test` can help ensure everything at least
      passes
* Prepare release notes
    * `git log $(git describe --abbrev=0 --tags)..HEAD` gives you the list of
      commits between the last annotated tag and HEAD
    * Pick the most interesting.
* Create one last commit that updates the version saved in `CMakeLists.txt` and `MODULE.bazel`
  to the release version you're creating. (This version will be used if benchmark is installed
  from the archive you'll be creating in the next step.)

  ```
  project (benchmark VERSION 1.8.0 LANGUAGES CXX)
  ```

  ```
  module(name = "com_github_google_benchmark", version="1.8.0")
  ```

* Create a release through GitHub's interface
    * Note this will create a lightweight tag.
    * Update this to an annotated tag:
        * `git pull --tags`
        * `git tag -a -f <tag> <tag>`
        * `git push --force --tags origin`
* Confirm that the "Build and upload Python wheels" action runs to completion
    * Run it manually if it hasn't run.
    * IMPORTANT: When re-running manually, make sure to select the newly created `<tag>` as the workflow version in the "Run workflow" tab on the GitHub Actions page.
343
third_party/benchmark/docs/tools.md
vendored
Normal file
@@ -0,0 +1,343 @@
# Benchmark Tools

## compare.py

The `compare.py` script can be used to compare the results of benchmarks.

### Dependencies
The utility relies on the [scipy](https://www.scipy.org) package which can be installed using pip:
```bash
pip3 install -r requirements.txt
```

### Displaying aggregates only

The switch `-a` / `--display_aggregates_only` can be used to control the
display of the normal iterations vs the aggregates. When passed, it will
be passed through to the benchmark binaries being run, and will be accounted for
in the tool itself; only the aggregates will be displayed, not the normal runs.
It only affects the display; the separate runs will still be used to calculate
the U test.

### Modes of operation

There are three modes of operation:

1. Just compare two benchmarks
The program is invoked like:

```bash
$ compare.py benchmarks <benchmark_baseline> <benchmark_contender> [benchmark options]...
```
Where `<benchmark_baseline>` and `<benchmark_contender>` either specify a benchmark executable file or a JSON output file. The type of the input file is automatically detected. If a benchmark executable is specified then the benchmark is run to obtain the results. Otherwise the results are simply loaded from the output file.

`[benchmark options]` will be passed to the benchmark invocations. They can be anything that binary accepts, be it normal `--benchmark_*` parameters or some custom parameters your binary takes.

Example output:
```
$ ./compare.py benchmarks ./a.out ./a.out
RUNNING: ./a.out --benchmark_out=/tmp/tmprBT5nW
Run on (8 X 4000 MHz CPU s)
2017-11-07 21:16:44
------------------------------------------------------
Benchmark                Time           CPU Iterations
------------------------------------------------------
BM_memcpy/8             36 ns          36 ns   19101577   211.669MB/s
BM_memcpy/64            76 ns          76 ns    9412571   800.199MB/s
BM_memcpy/512           84 ns          84 ns    8249070   5.64771GB/s
BM_memcpy/1024         116 ns         116 ns    6181763   8.19505GB/s
BM_memcpy/8192         643 ns         643 ns    1062855   11.8636GB/s
BM_copy/8              222 ns         222 ns    3137987   34.3772MB/s
BM_copy/64            1608 ns        1608 ns     432758   37.9501MB/s
BM_copy/512          12589 ns       12589 ns      54806   38.7867MB/s
BM_copy/1024         25169 ns       25169 ns      27713   38.8003MB/s
BM_copy/8192        201165 ns      201112 ns       3486   38.8466MB/s
RUNNING: ./a.out --benchmark_out=/tmp/tmpt1wwG_
Run on (8 X 4000 MHz CPU s)
2017-11-07 21:16:53
------------------------------------------------------
Benchmark                Time           CPU Iterations
------------------------------------------------------
BM_memcpy/8             36 ns          36 ns   19397903   211.255MB/s
BM_memcpy/64            73 ns          73 ns    9691174   839.635MB/s
BM_memcpy/512           85 ns          85 ns    8312329   5.60101GB/s
BM_memcpy/1024         118 ns         118 ns    6438774   8.11608GB/s
BM_memcpy/8192         656 ns         656 ns    1068644   11.6277GB/s
BM_copy/8              223 ns         223 ns    3146977   34.2338MB/s
BM_copy/64            1611 ns        1611 ns     435340   37.8751MB/s
BM_copy/512          12622 ns       12622 ns      54818   38.6844MB/s
BM_copy/1024         25257 ns       25239 ns      27779   38.6927MB/s
BM_copy/8192        205013 ns      205010 ns       3479   38.108MB/s
Comparing ./a.out to ./a.out
Benchmark                 Time             CPU      Time Old      Time New       CPU Old       CPU New
------------------------------------------------------------------------------------------------------
BM_memcpy/8            +0.0020         +0.0020            36            36            36            36
BM_memcpy/64           -0.0468         -0.0470            76            73            76            73
BM_memcpy/512          +0.0081         +0.0083            84            85            84            85
BM_memcpy/1024         +0.0098         +0.0097           116           118           116           118
BM_memcpy/8192         +0.0200         +0.0203           643           656           643           656
BM_copy/8              +0.0046         +0.0042           222           223           222           223
BM_copy/64             +0.0020         +0.0020          1608          1611          1608          1611
BM_copy/512            +0.0027         +0.0026         12589         12622         12589         12622
BM_copy/1024           +0.0035         +0.0028         25169         25257         25169         25239
BM_copy/8192           +0.0191         +0.0194        201165        205013        201112        205010
```

For every benchmark from the first run, the tool looks for the benchmark with exactly the same name in the second run, and then compares the results. If the names differ, the benchmark is omitted from the diff.
Note that the values in the `Time` and `CPU` columns are calculated as `(new - old) / |old|`.

2. Compare two different filters of one benchmark
The program is invoked like:

```bash
$ compare.py filters <benchmark> <filter_baseline> <filter_contender> [benchmark options]...
```
Where `<benchmark>` either specifies a benchmark executable file or a JSON output file. The type of the input file is automatically detected. If a benchmark executable is specified then the benchmark is run to obtain the results. Otherwise the results are simply loaded from the output file.

Where `<filter_baseline>` and `<filter_contender>` are the same regex filters that you would pass to the `[--benchmark_filter=<regex>]` parameter of the benchmark binary.

`[benchmark options]` will be passed to the benchmark invocations. They can be anything that binary accepts, be it normal `--benchmark_*` parameters or some custom parameters your binary takes.

Example output:
```
$ ./compare.py filters ./a.out BM_memcpy BM_copy
RUNNING: ./a.out --benchmark_filter=BM_memcpy --benchmark_out=/tmp/tmpBWKk0k
Run on (8 X 4000 MHz CPU s)
2017-11-07 21:37:28
------------------------------------------------------
Benchmark                Time           CPU Iterations
------------------------------------------------------
BM_memcpy/8             36 ns          36 ns   17891491   211.215MB/s
BM_memcpy/64            74 ns          74 ns    9400999   825.646MB/s
BM_memcpy/512           87 ns          87 ns    8027453   5.46126GB/s
BM_memcpy/1024         111 ns         111 ns    6116853   8.5648GB/s
BM_memcpy/8192         657 ns         656 ns    1064679   11.6247GB/s
RUNNING: ./a.out --benchmark_filter=BM_copy --benchmark_out=/tmp/tmpAvWcOM
Run on (8 X 4000 MHz CPU s)
2017-11-07 21:37:33
----------------------------------------------------
Benchmark              Time           CPU Iterations
----------------------------------------------------
BM_copy/8            227 ns         227 ns    3038700   33.6264MB/s
BM_copy/64          1640 ns        1640 ns     426893   37.2154MB/s
BM_copy/512        12804 ns       12801 ns      55417   38.1444MB/s
BM_copy/1024       25409 ns       25407 ns      27516   38.4365MB/s
BM_copy/8192      202986 ns      202990 ns       3454   38.4871MB/s
Comparing BM_memcpy to BM_copy (from ./a.out)
Benchmark                                 Time             CPU      Time Old      Time New       CPU Old       CPU New
--------------------------------------------------------------------------------------------------------------------
[BM_memcpy vs. BM_copy]/8              +5.2829         +5.2812            36           227            36           227
[BM_memcpy vs. BM_copy]/64            +21.1719        +21.1856            74          1640            74          1640
[BM_memcpy vs. BM_copy]/512          +145.6487       +145.6097            87         12804            87         12801
[BM_memcpy vs. BM_copy]/1024         +227.1860       +227.1776           111         25409           111         25407
[BM_memcpy vs. BM_copy]/8192         +308.1664       +308.2898           657        202986           656        202990
```

As you can see, it applies the filter to the benchmarks, both when running the benchmark and before doing the diff. To make the diff work, the matches are replaced with a common string. Thus, you can compare two different benchmark families within one benchmark binary.
Note that the values in the `Time` and `CPU` columns are calculated as `(new - old) / |old|`.

3. Compare filter one from benchmark one to filter two from benchmark two
The program is invoked like:

```bash
$ compare.py benchmarksfiltered <benchmark_baseline> <filter_baseline> <benchmark_contender> <filter_contender> [benchmark options]...
```

Where `<benchmark_baseline>` and `<benchmark_contender>` either specify a benchmark executable file or a JSON output file. The type of the input file is automatically detected. If a benchmark executable is specified then the benchmark is run to obtain the results. Otherwise the results are simply loaded from the output file.

Where `<filter_baseline>` and `<filter_contender>` are the same regex filters that you would pass to the `[--benchmark_filter=<regex>]` parameter of the benchmark binary.

`[benchmark options]` will be passed to the benchmark invocations. They can be anything that binary accepts, be it normal `--benchmark_*` parameters or some custom parameters your binary takes.

Example output:
```
$ ./compare.py benchmarksfiltered ./a.out BM_memcpy ./a.out BM_copy
RUNNING: ./a.out --benchmark_filter=BM_memcpy --benchmark_out=/tmp/tmp_FvbYg
Run on (8 X 4000 MHz CPU s)
2017-11-07 21:38:27
------------------------------------------------------
Benchmark                Time           CPU Iterations
------------------------------------------------------
BM_memcpy/8             37 ns          37 ns   18953482   204.118MB/s
BM_memcpy/64            74 ns          74 ns    9206578   828.245MB/s
BM_memcpy/512           91 ns          91 ns    8086195   5.25476GB/s
BM_memcpy/1024         120 ns         120 ns    5804513   7.95662GB/s
BM_memcpy/8192         664 ns         664 ns    1028363   11.4948GB/s
RUNNING: ./a.out --benchmark_filter=BM_copy --benchmark_out=/tmp/tmpDfL5iE
Run on (8 X 4000 MHz CPU s)
2017-11-07 21:38:32
----------------------------------------------------
Benchmark              Time           CPU Iterations
----------------------------------------------------
BM_copy/8            230 ns         230 ns    2985909   33.1161MB/s
BM_copy/64          1654 ns        1653 ns     419408   36.9137MB/s
BM_copy/512        13122 ns       13120 ns      53403   37.2156MB/s
BM_copy/1024       26679 ns       26666 ns      26575   36.6218MB/s
BM_copy/8192      215068 ns      215053 ns       3221   36.3283MB/s
Comparing BM_memcpy (from ./a.out) to BM_copy (from ./a.out)
Benchmark                                 Time             CPU      Time Old      Time New       CPU Old       CPU New
--------------------------------------------------------------------------------------------------------------------
[BM_memcpy vs. BM_copy]/8              +5.1649         +5.1637            37           230            37           230
[BM_memcpy vs. BM_copy]/64            +21.4352        +21.4374            74          1654            74          1653
[BM_memcpy vs. BM_copy]/512          +143.6022       +143.5865            91         13122            91         13120
[BM_memcpy vs. BM_copy]/1024         +221.5903       +221.4790           120         26679           120         26666
[BM_memcpy vs. BM_copy]/8192         +322.9059       +323.0096           664        215068           664        215053
```
This is a mix of the previous two modes: two (potentially different) benchmark binaries are run, and a different filter is applied to each one.
Note that the values in the `Time` and `CPU` columns are calculated as `(new - old) / |old|`.

### Note: Interpreting the output

Performance measurements are an art, and performance comparisons are doubly so.
Results are often noisy and don't necessarily have large absolute differences to
them, so just by visual inspection, it is not at all apparent if two
measurements are actually showing a performance change or not. It is even more
confusing with multiple benchmark repetitions.

Thankfully, what we can do is use statistical tests on the results to determine
whether the performance has statistically-significantly changed. `compare.py`
uses the [Mann–Whitney U
test](https://en.wikipedia.org/wiki/Mann%E2%80%93Whitney_U_test), with the null
hypothesis being that there's no difference in performance.

**The below output is a summary of a benchmark comparison with statistics
provided for a multi-threaded process.**
```
Benchmark                                                  Time             CPU      Time Old      Time New       CPU Old       CPU New
-----------------------------------------------------------------------------------------------------------------------------
benchmark/threads:1/process_time/real_time_pvalue        0.0000          0.0000      U Test, Repetitions: 27 vs 27
benchmark/threads:1/process_time/real_time_mean         -0.1442         -0.1442            90            77            90            77
benchmark/threads:1/process_time/real_time_median       -0.1444         -0.1444            90            77            90            77
benchmark/threads:1/process_time/real_time_stddev       +0.3974         +0.3933             0             0             0             0
benchmark/threads:1/process_time/real_time_cv           +0.6329         +0.6280             0             0             0             0
OVERALL_GEOMEAN                                          -0.1442         -0.1442             0             0             0             0
```
--------------------------------------------
Here's a breakdown of each row:

**benchmark/threads:1/process_time/real_time_pvalue**: This shows the _p-value_ for
the statistical test comparing the performance of the process running with one
thread. A value of 0.0000 suggests a statistically significant difference in
performance. The comparison was conducted using the U Test (Mann-Whitney
U Test) with 27 repetitions for each case.

**benchmark/threads:1/process_time/real_time_mean**: This shows the relative
difference in mean execution time between two different cases. The negative
value (-0.1442) implies that the new process is faster by about 14.42%. The old
time was 90 units, while the new time is 77 units.

**benchmark/threads:1/process_time/real_time_median**: Similarly, this shows the
relative difference in the median execution time. Again, the new process is
faster by 14.44%.

**benchmark/threads:1/process_time/real_time_stddev**: This is the relative
difference in the standard deviation of the execution time, which is a measure
of how much variation or dispersion there is from the mean. A positive value
(+0.3974) implies there is more variance in the execution time in the new
process.

**benchmark/threads:1/process_time/real_time_cv**: CV stands for Coefficient of
Variation. It is the ratio of the standard deviation to the mean. It provides a
standardized measure of dispersion. An increase (+0.6329) indicates more
relative variability in the new process.

**OVERALL_GEOMEAN**: Geomean stands for geometric mean, a type of average that is
less influenced by outliers. The negative value indicates a general improvement
in the new process. However, given the values are all zero for the old and new
times, this seems to be a mistake or placeholder in the output.

-----------------------------------------

Let's first try to see what the different columns represent in the above
`compare.py` benchmarking output:

1. **Benchmark:** The name of the function being benchmarked, along with the
size of the input (after the slash).

2. **Time:** The average time per operation, across all iterations.

3. **CPU:** The average CPU time per operation, across all iterations.

4. **Iterations:** The number of iterations the benchmark was run to get a
stable estimate.

5. **Time Old and Time New:** These represent the average time it takes for a
function to run in two different scenarios or versions. For example, you
might be comparing how fast a function runs before and after you make some
changes to it.

6. **CPU Old and CPU New:** These show the average amount of CPU time that the
function uses in two different scenarios or versions. This is similar to
Time Old and Time New, but focuses on CPU usage instead of overall time.

In the comparison section, the relative differences in both time and CPU time
are displayed for each input size.

A statistically-significant difference is determined by a **p-value**, which is
a measure of the probability that the observed difference could have occurred
just by random chance. A smaller p-value indicates stronger evidence against the
null hypothesis.

**Therefore:**
1. If the p-value is less than the chosen significance level (alpha), we
reject the null hypothesis and conclude the benchmarks are significantly
different.
2. If the p-value is greater than or equal to alpha, we fail to reject the
null hypothesis and treat the two benchmarks as similar.

The result of said statistical test is additionally communicated through color coding:
```diff
+ Green:
```
The benchmarks are _**statistically different**_. This could mean the
performance has either **significantly improved** or **significantly
deteriorated**. You should look at the actual performance numbers to see which
is the case.
```diff
- Red:
```
The benchmarks are _**statistically similar**_. This means the performance
**hasn't significantly changed**.

In statistical terms, **'green'** means we reject the null hypothesis that
there's no difference in performance, and **'red'** means we fail to reject the
null hypothesis. This might seem counter-intuitive if you're expecting 'green'
to mean 'improved performance' and 'red' to mean 'worsened performance'.
```bash
But remember, in this context:

'Success' means 'successfully finding a difference'.
'Failure' means 'failing to find a difference'.
```

Also, please note that **even if** we determine that there **is** a
statistically-significant difference between the two measurements, it does not
_necessarily_ mean that the actual benchmarks that were measured **are**
different. And vice versa: even if we determine that there is **no**
statistically-significant difference between the two measurements, it does not
necessarily mean that the actual benchmarks that were measured **are not**
different.

### U test

If there is a sufficient repetition count of the benchmarks, the tool can do
a [U Test](https://en.wikipedia.org/wiki/Mann%E2%80%93Whitney_U_test) of the
null hypothesis that it is equally likely that a randomly selected value from
one sample will be less than or greater than a randomly selected value from a
second sample.

If the calculated p-value is lower than the significance level alpha, then the
result is said to be statistically significant and the null hypothesis is
rejected. In other words, this means that the two benchmarks aren't identical.

**WARNING**: requires a **LARGE** (no less than 9) number of repetitions to be
meaningful!
1292
third_party/benchmark/docs/user_guide.md
vendored
Normal file
File diff suppressed because it is too large