* cleanup: support CMake >= 3.10
This aligns the project with the CMake support policies in:
https://opensource.google/documentation/policies/cplusplus-support
I also simplified the management of CMake policies. Most of the overridden
policies (anything <= CMP0067) are enabled by default when you require
CMake >= 3.10, but it is easier to just declare that you will accept
newer policies when they are available using the `...3.22` range notation.
* Address review comments
* inlined links
---------
Co-authored-by: dominic <510002+dmah42@users.noreply.github.com>
* Build libpfm as a dependency to allow collection of perf counters
This commit builds libpfm using rules_foreign_cc and lets the default
build of the benchmark library support perf counter collection without
needing additional work from users.
Tested with a custom target:
```
bazel run \
--override_repository=com_github_google_benchmark=/home/raghu/benchmark \
-c opt :test-bench -- "--benchmark_perf_counters=INSTRUCTIONS,CYCLES"
Using profile: local
<snip>
----------------------------------------------------------------------
Benchmark Time CPU Iterations UserCounters...
----------------------------------------------------------------------
BM_Test 0.279 ns 0.279 ns 1000000000 CYCLES=1.00888 INSTRUCTIONS=2
```
Signed-off-by: Raghu Raja <raghu@enfabrica.net>
* Adding myself to the CONTRIBUTORS file per CLA guidance
Enfabrica has already signed a corporate CLA.
Signed-off-by: Raghu Raja <raghu@enfabrica.net>
* Discuss sources of variance in the user guide
* Mention cpufreq/boost
* Pull variance material into a new document
Add reducing_variance.md as a place to discuss things related to variance
and, in the future, statistical interpretation of benchmark results.
Co-authored-by: Dominic Hamon <dominichamon@users.noreply.github.com>
* Add possibility to ask for libbenchmark version number (#1004)
Add a header which holds the current major, minor, and
patch number of the library. The header is auto generated
by CMake.
* Do not generate unused functions (#1004)
* Add support for version number in bazel (#1004)
* Fix clang format #1004
* Fix more clang format problems (#1004)
* Use git version feature of cmake to determine current lib version
* Rename version_config header to version
* Bake git version into bazel build
* Use same input config header as in cmake for version.h
* Adapt the releasing.md to include versioning in bazel
* Introduce warmup phase to BenchmarkRunner (#1130)
In order to account for caching effects in user
benchmarks, introduce a new command line option
"--benchmark_min_warmup_time",
which allows specifying an amount of time for
which the benchmark should be run before results
are considered meaningful.
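Below is a minimal sketch (my own illustration, not part of the patch) of how a warmup might be requested per benchmark, assuming the `MinWarmUpTime` setter referenced in the follow-up commits mirrors the command line flag:
```
#include <benchmark/benchmark.h>

static void BM_CacheSensitive(benchmark::State& state) {
  for (auto _ : state) {
    // work that benefits from warm caches
    benchmark::DoNotOptimize(state.iterations());
  }
}
// Run the benchmark for at least 1 second before recording results
// (the per-benchmark equivalent of --benchmark_min_warmup_time=1).
BENCHMARK(BM_CacheSensitive)->MinWarmUpTime(1.0);

BENCHMARK_MAIN();
```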
* Adapt review suggestions regarding introduction of warmup phase (#1130)
* Fix BM_CHECK call in MinWarmUpTime (#1130)
* Fix comment on requirements of MinWarmUpTime (#1130)
* Add basic description of warmup phase mechanism to user guide (#1130)
This commit adds a small section on how to install and build Python
bindings wheels to the docs, as well as a link to it from the main readme.
Notes were added that clearly state availability of Python wheels based
on Python version and OS/architecture combinations.
For the guide on building a wheel from source, the best practice of
creating and activating a virtual environment before building was
detailed. Also, a note on the required installation of Bazel was added,
with a link to the official installation docs.
* Filter out benchmarks that start with "DISABLED_"
This could be slightly more elegant, in that the registration and the
benchmark definition names have to change. Ideally, we'd still register
without the DISABLED_ prefix and it would all "just work".
Fixes #1365
* add some documentation
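For illustration, a hedged sketch of what a disabled benchmark looks like with the current approach (both the definition and the registration carry the `DISABLED_` prefix):
```
#include <benchmark/benchmark.h>

// Benchmarks whose names start with DISABLED_ are registered but
// filtered out at run time.
static void DISABLED_BM_Flaky(benchmark::State& state) {
  for (auto _ : state) {
    benchmark::DoNotOptimize(state.iterations());
  }
}
BENCHMARK(DISABLED_BM_Flaky);

BENCHMARK_MAIN();
```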
* Add option to set the default time unit globally
This commit introduces the `--benchmark_time_unit={ns|us|ms|s}` command line argument. The argument only affects benchmarks where the time unit is not set explicitly.
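As a sketch of the interaction (my example, not from the patch): benchmarks that call `Unit()` keep their explicit unit, while the rest pick up the global default chosen by the flag or by `SetDefaultTimeUnit`:
```
#include <benchmark/benchmark.h>

static void BM_UsesDefaultUnit(benchmark::State& state) {
  for (auto _ : state) {
    benchmark::DoNotOptimize(state.iterations());
  }
}
// Reported in whatever unit --benchmark_time_unit selects.
BENCHMARK(BM_UsesDefaultUnit);

static void BM_ExplicitUnit(benchmark::State& state) {
  for (auto _ : state) {
    benchmark::DoNotOptimize(state.iterations());
  }
}
// Explicit unit; unaffected by the flag.
BENCHMARK(BM_ExplicitUnit)->Unit(benchmark::kMillisecond);

BENCHMARK_MAIN();
```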
* Update AUTHORS and CONTRIBUTORS
* Test `SetDefaultTimeUnit`
* clang format
* Use `GetDefaultTimeUnit()` for initializing `TimeUnit` variables
* Review fixes
* Export functions
* Add comment
* Revert "Refine docs on changing cpufreq governor (#1325)"
This reverts commit 9e859f5bf5.
* Refine the User Guide CPU Frequency Scaling section
The text now describes the cpupower command, so users in a hurry
have something to copy/paste that will likely work. It then
suggests that there are probably more convenient options available
that people can look into.
This reverts the prior commit, which introduced a shell script
that doesn't work. It also retains the spirit of the original
fix: no longer recommend setting the frequency governor to
"powersave", which might not be appropriate or available.
Note: I did attempt to write a bash script that set the governor
to "powersave" for the duration of a single command, but I gave
up for many reasons:
1) it got complex, in part because the cpupower command does not
seem to be designed for scripts (e.g. it prints out complex
English phrases).
2) munging /proc/sys files directly feels unstable and less than
universal. libcpupower and cpupower are designed to abstract
those details away, because they can vary.
3) there are better options. E.g. various GUI programs, and
even Gnome's core Settings UI, let you adjust the system's
performance mode without root access.
Fixes #1325, #1327
* Add Setup/Teardown option on Benchmark.
Motivations:
- feature parity with our internal library (which has ~718 callers).
- more flexible than coordinating setup/teardown inside the benchmark routine.
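A short sketch of the resulting API (assuming the raw function pointer callbacks taking `const benchmark::State&` that the follow-up commits settle on):
```
#include <benchmark/benchmark.h>

static void DoSetup(const benchmark::State&) {
  // invoked before the benchmark runs
}

static void DoTeardown(const benchmark::State&) {
  // invoked after the benchmark finishes
}

static void BM_WithExternalSetup(benchmark::State& state) {
  for (auto _ : state) {
    benchmark::DoNotOptimize(state.iterations());
  }
}
BENCHMARK(BM_WithExternalSetup)->Setup(DoSetup)->Teardown(DoTeardown);

BENCHMARK_MAIN();
```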
* change Setup/Teardown callback type to raw function pointers
* add test file to cmake file
* move b.Teardown() up
* add const to param of Setup/Teardown callbacks
* fix comment and add doc to user_guide
* fix typo
* fix doc, fix test and add bindings to python/benchmark.cc
* fix binding again
* remove explicit C cast - that was wrong
* change policy to reference_internal
* try removing the bindings ...
* clean up
* add more tests with repetitions and fixtures
* more comments
* init setup/teardown callbacks to NULL
* s/nullptr/NULL
* removed unused var
* change assertion on fixture_interaction::fixture_setup
* move NULL init to .cc file
* Fix dependency typo and unpin cibuildwheel version in wheel building action
* Move to monolithic build jobs, restrict to x64 architectures
As of this commit, all wheel building jobs complete on GitHub Actions. Since some platform-specific options had to be set along the way to fix different types of build problems, the build job matrix was unrolled.
Still left TODO:
* Wheel testing after build (running the Python bindings test)
* Emulating bazel on other architectures to build aarch64/i686/ppc64le
* Enabling Win32 (this fails due to linker errors).
* Add binding test commands for all wheels, set macOSX deployment target to 10.9
* Add instructions for updating Python __version__ variable before release creation
* Allow template arguments to be specified directly on the BENCHMARK macro
Use cases:
- more convenient (than having to use a separate BENCHMARK_TEMPLATE)
- feature parity with our internal library.
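A hedged sketch of the new spelling next to the existing BENCHMARK_TEMPLATE form:
```
#include <benchmark/benchmark.h>
#include <vector>

template <class T>
static void BM_VectorPushBack(benchmark::State& state) {
  for (auto _ : state) {
    std::vector<T> v;
    v.push_back(T{});
    benchmark::DoNotOptimize(v.data());
  }
}

// Previously, template arguments required the dedicated macro:
BENCHMARK_TEMPLATE(BM_VectorPushBack, int);
// With this change they can be given directly on BENCHMARK:
BENCHMARK(BM_VectorPushBack<double>);

BENCHMARK_MAIN();
```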
* fix tests
* updated docs
* Introduce Coefficient of variation aggregate
I believe it is much more useful and easier to understand,
because it is already normalized by the mean,
so it is not affected by the duration of the benchmark,
unlike the standard deviation.
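For reference (my wording, not part of the patch): the coefficient of variation is just the standard deviation normalized by the mean, reported as a percentage,

$$\mathrm{CV} = \frac{\sigma}{\mu} \times 100\%$$

which is why it stays comparable across benchmarks of different durations.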
Example of real-world output:
```
raw.pixls.us-unique/GoPro/HERO6 Black$ ~/rawspeed/build-old/src/utilities/rsbench/rsbench GOPR9172.GPR --benchmark_repetitions=27 --benchmark_display_aggregates_only=true --benchmark_counters_tabular=true
2021-09-03T18:05:56+03:00
Running /home/lebedevri/rawspeed/build-old/src/utilities/rsbench/rsbench
Run on (32 X 3596.16 MHz CPU s)
CPU Caches:
L1 Data 32 KiB (x16)
L1 Instruction 32 KiB (x16)
L2 Unified 512 KiB (x16)
L3 Unified 32768 KiB (x2)
Load Average: 7.00, 2.99, 1.85
------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
Benchmark Time CPU Iterations CPUTime,s CPUTime/WallTime Pixels Pixels/CPUTime Pixels/WallTime Raws/CPUTime Raws/WallTime WallTime,s
------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
GOPR9172.GPR/threads:32/process_time/real_time_mean 11.1 ms 353 ms 27 0.353122 31.9473 12M 33.9879M 1085.84M 2.83232 90.4864 0.0110535
GOPR9172.GPR/threads:32/process_time/real_time_median 11.0 ms 352 ms 27 0.351696 31.9599 12M 34.1203M 1090.11M 2.84336 90.8425 0.0110081
GOPR9172.GPR/threads:32/process_time/real_time_stddev 0.159 ms 4.60 ms 27 4.59539m 0.0462064 0 426.371k 14.9631M 0.0355309 1.24692 158.944u
GOPR9172.GPR/threads:32/process_time/real_time_cv 1.44 % 1.30 % 27 0.0130136 1.44633m 0 0.0125448 0.0137802 0.0125448 0.0137802 0.0143795
```
Fixes https://github.com/google/benchmark/issues/1146
* Be consistent, it's CV, not 'rel std dev'
* Statistics: add support for percentage unit in addition to time
I think the `stddev` statistic is useful, but confusing.
What does it mean if `stddev` of `1ms` is reported?
Is that good or bad? If the `median` is `1s`,
then that means that the measurements are pretty noise-less.
And what about `stddev` of `100ms` is reported?
If the `median` is `1s` - awful, if the `median` is `10s` - good.
And hurray, there is just the statistic that we need:
https://en.wikipedia.org/wiki/Coefficient_of_variation
But, naturally, that produces a value in percent,
while the statistics are currently hardcoded to produce time.
So this refactors things a bit, and allows a percentage unit for statistics.
I'm not sure whether or not `benchmark` would be okay
with adding this `RSD` statistic by default,
but regardless, that is a separate patch.
Refs. https://github.com/google/benchmark/issues/1146
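As an illustration of the percentage unit (a sketch against the custom-statistics hook, assuming the new `StatisticUnit` argument), a user-defined statistic could opt into it like this:
```
#include <benchmark/benchmark.h>
#include <algorithm>
#include <vector>

static void BM_Spin(benchmark::State& state) {
  for (auto _ : state) {
    for (int i = 0; i < state.range(0); ++i) benchmark::DoNotOptimize(i);
  }
}

// Report the spread of repetition times as a percentage instead of a time.
BENCHMARK(BM_Spin)
    ->Arg(512)
    ->Repetitions(10)
    ->ComputeStatistics(
        "max_over_min",
        [](const std::vector<double>& v) -> double {
          return *std::max_element(v.begin(), v.end()) /
                 *std::min_element(v.begin(), v.end());
        },
        benchmark::StatisticUnit::kPercentage);

BENCHMARK_MAIN();
```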
* Address review notes
Refactoring in 201b981a moved most of the documentation from `README.md` to `docs/user_guide.md`. Some links from `README.md` to other `docs/*.md` files ended up unchanged in `docs/user_guide.md`. Those links were now broken as they did not link from outside the `docs` directory anymore, but from inside it. Removing the leading `docs/` for these links fixes this.
Inspired by the original implementation by Hai Huang @haih-g
from https://github.com/google/benchmark/pull/1105.
The original implementation had design deficiencies that
weren't really addressable without redesign, so it was reverted.
In essence, the original implementation consisted of two separable parts:
* reducing the amount of time each repetition is run for, and symmetrically increasing the repetition count
* running the repetitions in random order
While it worked fine for the usual case, it broke down when the user specified repetitions
(it would completely ignore that request), or specified a per-repetition min time (while it would
still adjust the repetition count, it would not adjust the per-repetition time,
leading to much greater run times).
Here, as I originally suggested in the review, I'm separating the features,
and only dealing with a single one: running repetitions in random order.
Now that the runs/repetitions are no longer in-order, the tooling may wish to sort the output,
and indeed `compare.py` has been updated to do that: #1168.
* cmake: fix handling the case where `git describe` fails
* cmake: fix version recorded in releases
If downloaded as a tarball release, there will be no info from git
to determine the release, so it ends up v0.0.0. If that's the case,
we'll now use the release specified in the project() command,
which needs to be updated for each new release.
* cmake: add `--tags` to `git describe`
That way, lightweight tags will also be taken into account, which should
never hurt, but it'll help in cases where, for some mysterious reason or
other, annotated tags don't make it into a clone.
* update releasing.md
* Implementation of random interleaving. See
http://github.com/google/benchmark/issues/1051 for the feature request.
Committer: Hai Huang (http://github.com/haih-g)
* Fix benchmark_random_interleaving_gtest.cc for fr-1051
* Fix macos build for fr-1051
* Fix macos and windows build for fr-1051.
* Fix benchmark_random_interleaving_test.cc for macos and windows in fr-1051
* Fix int type benchmark_random_interleaving_gtest for macos in fr-1051
* Address dominichamon's comments 03/29 for fr-1051
* Address dominichamon's comment on default min_time / repetitions for fr-1051.
Also change sentinel of random_interleaving_repetitions to -1. Hopefully it
fixes the failures on Windows.
* Fix windows test failures for fr-1051
* Add license blurb for fr-1051.
* Switch to std::shuffle() for fr-1105.
* Change to 1e-9 in fr-1105
* Fix broken build caused by bad merge for fr-1105.
* Fix build breakage for fr-1051.
* Print out reports as they come in if random interleaving is disabled (fr-1051)
* size_t, int64_t --> int in benchmark_runner for fr-1051.
* Address comments from dominichamon for fr-1051
* benchmark_indices --> size_t to make CI pass: fr-1051
* Fix min_time not initialized issue for fr-1051.
* min_time --> MinTime in fr-1051.
* Add doc for random interleaving for fr-1051
Co-authored-by: Dominic Hamon <dominichamon@users.noreply.github.com>
* Support optional, user-directed collection of performance counters
The patch allows an engineer to drill into the root causes
of a regression, for example. Currently, only single-threaded runs
are supported. The feature is a build-time opt-in, and then a runtime
opt-in.
The engineer may run the benchmark executable, passing a list of
performance counter names (using libpfm's naming scheme) at the
command line. The counter values will then be collected and reported
back as UserCounters.
This is different from #240 in that it is a benchmark user opt-in, and
the counter collection is transparent to the benchmark.
Currently, this is only supported on platforms where libpfm is
supported.
libpfm: http://perfmon2.sourceforge.net/
* 'Use' values param in Snapshot when BENCHMARK_OS_WINDOWS
This is to avoid an unused-parameter warning being treated as an error
* Added missing include for <vector> in perf_counters.cc
* Moved doc to docs
* Added license blurbs
* add requirements.txt for python tools
* adds documentation for requirements.txt
Adds installation instructions for Python dependencies using pip and requirements.txt