# benchmark
[![Build Status](https://travis-ci.org/google/benchmark.svg?branch=master)](https://travis-ci.org/google/benchmark)
[![Build status](https://ci.appveyor.com/api/projects/status/u0qsyp7t1tk7cpxs/branch/master?svg=true)](https://ci.appveyor.com/project/google/benchmark/branch/master)
[![Coverage Status](https://coveralls.io/repos/google/benchmark/badge.svg)](https://coveralls.io/r/google/benchmark)
[![slackin](https://slackin-iqtfqnpzxd.now.sh/badge.svg)](https://slackin-iqtfqnpzxd.now.sh/)

A library to support the benchmarking of functions, similar to unit-tests.

[Discussion group](https://groups.google.com/d/forum/benchmark-discuss)

IRC channel: [freenode](https://freenode.net) #googlebenchmark

[Additional Tooling Documentation](docs/tools.md)

[Assembly Testing Documentation](docs/AssemblyTests.md)

## Building

The basic steps for configuring and building the library look like this:

```bash
$ git clone https://github.com/google/benchmark.git
# Benchmark requires Google Test as a dependency. Add the source tree as a subdirectory.
$ git clone https://github.com/google/googletest.git benchmark/googletest
$ mkdir build && cd build
$ cmake -G <generator> [options] ../benchmark
# Assuming a makefile generator was used
$ make
```

Note that Google Benchmark requires Google Test to build and run the tests. This
dependency can be provided two ways:

* Check out the Google Test sources into `benchmark/googletest` as above.
* Otherwise, if `-DBENCHMARK_DOWNLOAD_DEPENDENCIES=ON` is specified during
  configuration, the library will automatically download and build any required
  dependencies.

If you do not wish to build and run the tests, add `-DBENCHMARK_ENABLE_GTEST_TESTS=OFF`
to `CMAKE_ARGS`.

## Installation Guide

For Ubuntu and Debian based systems.

First make sure you have git and cmake installed (if not, please install them):

```
sudo apt-get install git cmake
```

Now, let's clone the repository and build it:

```
git clone https://github.com/google/benchmark.git
cd benchmark
# If you want to build the tests and don't use BENCHMARK_DOWNLOAD_DEPENDENCIES, then
# git clone https://github.com/google/googletest.git
mkdir build
cd build
cmake .. -DCMAKE_BUILD_TYPE=RELEASE
make
```

If you need to install the library globally:

```
sudo make install
```

## Stable and Experimental Library Versions

The main branch contains the latest stable version of the benchmarking library;
its API can be considered largely stable, with source-breaking changes being
made only upon the release of a new major version.

Newer, experimental features are implemented and tested on the
[`v2` branch](https://github.com/google/benchmark/tree/v2). Users who wish
to use, test, and provide feedback on the new features are encouraged to try
this branch. However, this branch provides no stability guarantees and reserves
the right to change and break the API at any time.

## Further knowledge

It may help to read the [Google Test documentation](https://github.com/google/googletest/blob/master/googletest/docs/primer.md)
as some of the structural aspects of the APIs are similar.

## Example usage
### Basic usage
Define a function that executes the code to be measured, register it as a
benchmark function using the `BENCHMARK` macro, and ensure an appropriate `main`
function is available:

```c++
#include <benchmark/benchmark.h>

static void BM_StringCreation(benchmark::State& state) {
  for (auto _ : state)
    std::string empty_string;
}
// Register the function as a benchmark
BENCHMARK(BM_StringCreation);

// Define another benchmark
static void BM_StringCopy(benchmark::State& state) {
  std::string x = "hello";
  for (auto _ : state)
    std::string copy(x);
}
BENCHMARK(BM_StringCopy);

BENCHMARK_MAIN();
```

Don't forget to inform your linker to add the benchmark library, e.g. through
the `-lbenchmark` compilation flag. Alternatively, you may leave out the
`BENCHMARK_MAIN();` at the end of the source file and link against
`-lbenchmark_main` to get the same default behavior.

The benchmark library will measure and report the timing for code within the
`for(...)` loop.

#### Platform-specific libraries
When the library is built using GCC it is necessary to link with the pthread
library due to how GCC implements `std::thread`. Failing to link to pthread will
lead to runtime exceptions (unless you're using libc++), not linker errors. See
[issue #67](https://github.com/google/benchmark/issues/67) for more details. You
can link to pthread by adding `-pthread` to your linker command. Note that you can
also use `-lpthread`, but there are potential issues with the ordering of
command-line parameters if you use that.

If you're running benchmarks on Windows, the shlwapi library (`-lshlwapi`) is
also required.

If you're running benchmarks on Solaris, you'll want the kstat library linked in
too (`-lkstat`).

### Passing arguments
Sometimes a family of benchmarks can be implemented with just one routine that
takes an extra argument to specify which one of the family of benchmarks to
run. For example, the following code defines a family of benchmarks for
measuring the speed of `memcpy()` calls of different lengths:

```c++
static void BM_memcpy(benchmark::State& state) {
  char* src = new char[state.range(0)];
  char* dst = new char[state.range(0)];
  memset(src, 'x', state.range(0));
  for (auto _ : state)
    memcpy(dst, src, state.range(0));
  state.SetBytesProcessed(int64_t(state.iterations()) *
                          int64_t(state.range(0)));
  delete[] src;
  delete[] dst;
}
BENCHMARK(BM_memcpy)->Arg(8)->Arg(64)->Arg(512)->Arg(1<<10)->Arg(8<<10);
```

The preceding code is quite repetitive, and can be replaced with the following
short-hand. The following invocation will pick a few appropriate arguments in
the specified range and will generate a benchmark for each such argument.

```c++
BENCHMARK(BM_memcpy)->Range(8, 8<<10);
```

By default the arguments in the range are generated in multiples of eight and
the command above selects [ 8, 64, 512, 4k, 8k ]. In the following code the
range multiplier is changed to multiples of two.

```c++
BENCHMARK(BM_memcpy)->RangeMultiplier(2)->Range(8, 8<<10);
```

Now the arguments generated are [ 8, 16, 32, 64, 128, 256, 512, 1024, 2k, 4k, 8k ].

You might have a benchmark that depends on two or more inputs. For example, the
following code defines a family of benchmarks for measuring the speed of set
insertion.

```c++
static void BM_SetInsert(benchmark::State& state) {
  std::set<int> data;
  for (auto _ : state) {
    state.PauseTiming();
    data = ConstructRandomSet(state.range(0));
    state.ResumeTiming();
    for (int j = 0; j < state.range(1); ++j)
      data.insert(RandomNumber());
  }
}
BENCHMARK(BM_SetInsert)
    ->Args({1<<10, 128})
    ->Args({2<<10, 128})
    ->Args({4<<10, 128})
    ->Args({8<<10, 128})
    ->Args({1<<10, 512})
    ->Args({2<<10, 512})
    ->Args({4<<10, 512})
    ->Args({8<<10, 512});
```

The preceding code is quite repetitive, and can be replaced with the following
short-hand. The following macro will pick a few appropriate arguments in the
product of the two specified ranges and will generate a benchmark for each such
pair.

```c++
BENCHMARK(BM_SetInsert)->Ranges({{1<<10, 8<<10}, {128, 512}});
```

For more complex patterns of inputs, passing a custom function to `Apply` allows
programmatic specification of an arbitrary set of arguments on which to run the
benchmark. The following example enumerates a dense range on one parameter,
and a sparse range on the second.

```c++
static void CustomArguments(benchmark::internal::Benchmark* b) {
  for (int i = 0; i <= 10; ++i)
    for (int j = 32; j <= 1024*1024; j *= 8)
      b->Args({i, j});
}
BENCHMARK(BM_SetInsert)->Apply(CustomArguments);
```

### Calculate asymptotic complexity (Big O)
Asymptotic complexity might be calculated for a family of benchmarks. The
following code will calculate the coefficient for the high-order term in the
running time and the normalized root-mean square error of string comparison.

```c++
static void BM_StringCompare(benchmark::State& state) {
  std::string s1(state.range(0), '-');
  std::string s2(state.range(0), '-');
  for (auto _ : state) {
    benchmark::DoNotOptimize(s1.compare(s2));
  }
  state.SetComplexityN(state.range(0));
}
BENCHMARK(BM_StringCompare)
    ->RangeMultiplier(2)->Range(1<<10, 1<<18)->Complexity(benchmark::oN);
```

As shown in the following invocation, asymptotic complexity might also be
calculated automatically.

```c++
BENCHMARK(BM_StringCompare)
    ->RangeMultiplier(2)->Range(1<<10, 1<<18)->Complexity();
```

The following code will specify asymptotic complexity with a lambda function,
which might be used to customize the high-order term calculation.

```c++
BENCHMARK(BM_StringCompare)->RangeMultiplier(2)
    ->Range(1<<10, 1<<18)->Complexity([](int64_t n)->double{return n; });
```

### Templated benchmarks
Templated benchmarks work the same way: this example produces and consumes
messages of size `sizeof(v)` `state.range(0)` times. It also outputs throughput
in the absence of multiprogramming.

```c++
template <class Q> void BM_Sequential(benchmark::State& state) {
  Q q;
  typename Q::value_type v;
  for (auto _ : state) {
    for (int i = state.range(0); i--; )
      q.push(v);
    for (int e = state.range(0); e--; )
      q.Wait(&v);
  }
  // actually messages, not bytes:
  state.SetBytesProcessed(
      static_cast<int64_t>(state.iterations())*state.range(0));
}
BENCHMARK_TEMPLATE(BM_Sequential, WaitQueue<int>)->Range(1<<0, 1<<10);
```

Three macros are provided for adding benchmark templates.

```c++
#ifdef BENCHMARK_HAS_CXX11
#define BENCHMARK_TEMPLATE(func, ...) // Takes any number of parameters.
#else // C++ < C++11
#define BENCHMARK_TEMPLATE(func, arg1)
#endif
#define BENCHMARK_TEMPLATE1(func, arg1)
#define BENCHMARK_TEMPLATE2(func, arg1, arg2)
```
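
For illustration, here is a minimal sketch of how the one- and two-argument
forms can be used. The `BM_Construct` and `BM_VectorPushBack` functions below
are hypothetical examples, not part of the library:

```c++
#include <benchmark/benchmark.h>

#include <memory>
#include <vector>

// Hypothetical single-parameter template benchmark.
template <class T>
static void BM_Construct(benchmark::State& state) {
  for (auto _ : state) {
    T value{};
    benchmark::DoNotOptimize(value);
  }
}

// Hypothetical two-parameter template benchmark.
template <class T, class Alloc>
static void BM_VectorPushBack(benchmark::State& state) {
  for (auto _ : state) {
    std::vector<T, Alloc> v;
    v.push_back(T{});
    benchmark::DoNotOptimize(v.data());
  }
}

BENCHMARK_TEMPLATE1(BM_Construct, int);                            // one template argument
BENCHMARK_TEMPLATE2(BM_VectorPushBack, int, std::allocator<int>);  // two template arguments
```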

### A Faster KeepRunning loop

In C++11 mode, a range-based for loop should be used in preference to
the `KeepRunning` loop for running the benchmarks. For example:

```c++
static void BM_Fast(benchmark::State &state) {
  for (auto _ : state) {
    FastOperation();
  }
}
BENCHMARK(BM_Fast);
```

The reason the range-based for loop is faster than using `KeepRunning` is
that `KeepRunning` requires a memory load and store of the iteration count
every iteration, whereas the range-based for variant is able to keep the
iteration count in a register.

For example, an empty inner loop using the range-based for method looks like:

```asm
# Loop Init
  mov rbx, qword ptr [r14 + 104]
  call benchmark::State::StartKeepRunning()
  test rbx, rbx
  je .LoopEnd
.LoopHeader: # =>This Inner Loop Header: Depth=1
  add rbx, -1
  jne .LoopHeader
.LoopEnd:
```

Compared to an empty `KeepRunning` loop, which looks like:

```asm
.LoopHeader: # in Loop: Header=BB0_3 Depth=1
  cmp byte ptr [rbx], 1
  jne .LoopInit
.LoopBody: # =>This Inner Loop Header: Depth=1
  mov rax, qword ptr [rbx + 8]
  lea rcx, [rax + 1]
  mov qword ptr [rbx + 8], rcx
  cmp rax, qword ptr [rbx + 104]
  jb .LoopHeader
  jmp .LoopEnd
.LoopInit:
  mov rdi, rbx
  call benchmark::State::StartKeepRunning()
  jmp .LoopBody
.LoopEnd:
```

Unless C++03 compatibility is required, the range-based for variant of writing
the benchmark loop should be preferred.

## Passing arbitrary arguments to a benchmark
In C++11 it is possible to define a benchmark that takes an arbitrary number
of extra arguments. The `BENCHMARK_CAPTURE(func, test_case_name, ...args)`
macro creates a benchmark that invokes `func` with the `benchmark::State` as
the first argument followed by the specified `args...`.
The `test_case_name` is appended to the name of the benchmark and
should describe the values passed.

```c++
template <class ...ExtraArgs>
void BM_takes_args(benchmark::State& state, ExtraArgs&&... extra_args) {
  [...]
}
// Registers a benchmark named "BM_takes_args/int_string_test" that passes
// the specified values to `extra_args`.
BENCHMARK_CAPTURE(BM_takes_args, int_string_test, 42, std::string("abc"));
```

Note that elements of `...args` may refer to global variables. Users should
avoid modifying global state inside of a benchmark.

## Using RegisterBenchmark(name, fn, args...)

The `RegisterBenchmark(name, func, args...)` function provides an alternative
way to create and register benchmarks.
`RegisterBenchmark(name, func, args...)` creates, registers, and returns a
pointer to a new benchmark with the specified `name` that invokes
`func(st, args...)` where `st` is a `benchmark::State` object.

Unlike the `BENCHMARK` registration macros, which can only be used at the global
scope, `RegisterBenchmark` can be called anywhere. This allows for
benchmark tests to be registered programmatically.

Additionally `RegisterBenchmark` allows any callable object to be registered
as a benchmark, including capturing lambdas and function objects.

For example:
```c++
auto BM_test = [](benchmark::State& st, auto Inputs) { /* ... */ };

int main(int argc, char** argv) {
  for (auto& test_input : { /* ... */ })
    benchmark::RegisterBenchmark(test_input.name(), BM_test, test_input);
  benchmark::Initialize(&argc, argv);
  benchmark::RunSpecifiedBenchmarks();
}
```

### Multithreaded benchmarks
In a multithreaded test (benchmark invoked by multiple threads simultaneously),
it is guaranteed that none of the threads will start until all have reached
the start of the benchmark loop, and all will have finished before any thread
exits the benchmark loop. (This behavior is also provided by the `KeepRunning()`
API.) As such, any global setup or teardown can be wrapped in a check against
the thread index:

```c++
static void BM_MultiThreaded(benchmark::State& state) {
  if (state.thread_index == 0) {
    // Setup code here.
  }
  for (auto _ : state) {
    // Run the test as normal.
  }
  if (state.thread_index == 0) {
    // Teardown code here.
  }
}
BENCHMARK(BM_MultiThreaded)->Threads(2);
```

If the benchmarked code itself uses threads and you want to compare it to
single-threaded code, you may want to use real-time ("wallclock") measurements
for latency comparisons:

```c++
BENCHMARK(BM_test)->Range(8, 8<<10)->UseRealTime();
```

Without `UseRealTime`, CPU time is used by default.

## CPU timers

By default, the CPU timer only measures the time spent by the main thread.
If the benchmark itself uses threads internally, this measurement may not
be what you are looking for. Instead, there is a way to measure the total
CPU usage of the process, by all the threads.

```c++
void callee(int i);

static void MyMain(int size) {
#pragma omp parallel for
  for(int i = 0; i < size; i++)
    callee(i);
}

static void BM_OpenMP(benchmark::State& state) {
  for (auto _ : state)
    MyMain(state.range(0));
}

// Measure the time spent by the main thread, use it to decide for how long to
// run the benchmark loop. Depending on the internal implementation, this may
// measure anywhere from near-zero (the overhead spent before/after work
// handoff to worker thread[s]) to the whole single-thread time.
BENCHMARK(BM_OpenMP)->Range(8, 8<<10);

// Measure the user-visible time, the wall clock (literally, the time that
// has passed on the clock on the wall), use it to decide for how long to
// run the benchmark loop. This will always be meaningful, and will match the
// time spent by the main thread in the single-threaded case, in general
// decreasing with the number of internal threads doing the work.
BENCHMARK(BM_OpenMP)->Range(8, 8<<10)->UseRealTime();

// Measure the total CPU consumption, use it to decide for how long to
// run the benchmark loop. This will always measure to no less than the
// time spent by the main thread in the single-threaded case.
BENCHMARK(BM_OpenMP)->Range(8, 8<<10)->MeasureProcessCPUTime();

// A mixture of the last two. Measure the total CPU consumption, but use the
// wall clock to decide for how long to run the benchmark loop.
BENCHMARK(BM_OpenMP)->Range(8, 8<<10)->MeasureProcessCPUTime()->UseRealTime();
```

## Controlling timers

Normally, the entire duration of the work loop (`for (auto _ : state) {}`)
is measured. But sometimes, it is necessary to do some work inside of
that loop, every iteration, without counting that time towards the benchmark time.
That is possible, although it is not recommended, since it has high overhead.

```c++
static void BM_SetInsert_With_Timer_Control(benchmark::State& state) {
  std::set<int> data;
  for (auto _ : state) {
    state.PauseTiming(); // Stop timers. They will not count until they are resumed.
    data = ConstructRandomSet(state.range(0)); // Do something that should not be measured
    state.ResumeTiming(); // And resume timers. They are now counting again.
    // The rest will be measured.
    for (int j = 0; j < state.range(1); ++j)
      data.insert(RandomNumber());
  }
}
BENCHMARK(BM_SetInsert_With_Timer_Control)->Ranges({{1<<10, 8<<10}, {128, 512}});
```

## Manual timing
For benchmarking something for which neither CPU time nor real-time are
correct or accurate enough, completely manual timing is supported using
the `UseManualTime` function.

When `UseManualTime` is used, the benchmarked code must call
`SetIterationTime` once per iteration of the benchmark loop to
report the manually measured time.

An example use case for this is benchmarking GPU execution (e.g. OpenCL
or CUDA kernels, OpenGL or Vulkan or Direct3D draw calls), which cannot
be accurately measured using CPU time or real-time. Instead, they can be
measured accurately using a dedicated API, and these measurement results
can be reported back with `SetIterationTime`.

```c++
static void BM_ManualTiming(benchmark::State& state) {
  int microseconds = state.range(0);
  std::chrono::duration<double, std::micro> sleep_duration {
    static_cast<double>(microseconds)
  };

  for (auto _ : state) {
    auto start = std::chrono::high_resolution_clock::now();
    // Simulate some useful workload with a sleep
    std::this_thread::sleep_for(sleep_duration);
    auto end = std::chrono::high_resolution_clock::now();

    auto elapsed_seconds =
      std::chrono::duration_cast<std::chrono::duration<double>>(
        end - start);

    state.SetIterationTime(elapsed_seconds.count());
  }
}
BENCHMARK(BM_ManualTiming)->Range(1, 1<<17)->UseManualTime();
```

### Preventing optimisation
To prevent a value or expression from being optimized away by the compiler,
the `benchmark::DoNotOptimize(...)` and `benchmark::ClobberMemory()`
functions can be used.

```c++
static void BM_test(benchmark::State& state) {
  for (auto _ : state) {
    int x = 0;
    for (int i=0; i < 64; ++i) {
      benchmark::DoNotOptimize(x += i);
    }
  }
}
```

`DoNotOptimize(<expr>)` forces the *result* of `<expr>` to be stored in either
memory or a register. For GNU based compilers it acts as a read/write barrier
for global memory. More specifically it forces the compiler to flush pending
writes to memory and reload any other values as necessary.

Note that `DoNotOptimize(<expr>)` does not prevent optimizations on `<expr>`
in any way. `<expr>` may even be removed entirely when the result is already
known. For example:

```c++
/* Example 1: `<expr>` is removed entirely. */
int foo(int x) { return x + 42; }
while (...) DoNotOptimize(foo(0)); // Optimized to DoNotOptimize(42);

/* Example 2: Result of '<expr>' is only reused */
int bar(int) __attribute__((const));
while (...) DoNotOptimize(bar(0)); // Optimized to:
// int __result__ = bar(0);
// while (...) DoNotOptimize(__result__);
```

The second tool for preventing optimizations is `ClobberMemory()`. In essence
`ClobberMemory()` forces the compiler to perform all pending writes to global
memory. Memory managed by block scope objects must be "escaped" using
`DoNotOptimize(...)` before it can be clobbered. In the example below,
`ClobberMemory()` prevents the call to `v.push_back(42)` from being optimized
away.

```c++
static void BM_vector_push_back(benchmark::State& state) {
  for (auto _ : state) {
    std::vector<int> v;
    v.reserve(1);
    benchmark::DoNotOptimize(v.data()); // Allow v.data() to be clobbered.
    v.push_back(42);
    benchmark::ClobberMemory(); // Force 42 to be written to memory.
  }
}
```

Note that `ClobberMemory()` is only available for GNU or MSVC based compilers.

### Set time unit manually
If a benchmark runs for a few milliseconds it may be hard to visually compare
the measured times, since the output data is given in nanoseconds by default.
In order to manually set the time unit, you can specify it as follows:

```c++
BENCHMARK(BM_test)->Unit(benchmark::kMillisecond);
```

### Reporting the mean, median and standard deviation by repeated benchmarks
By default each benchmark is run once and that single result is reported.
However benchmarks are often noisy and a single result may not be representative
of the overall behavior. For this reason it's possible to repeatedly rerun the
benchmark.

The number of runs of each benchmark is specified globally by the
`--benchmark_repetitions` flag or on a per-benchmark basis by calling
`Repetitions` on the registered benchmark object. When a benchmark is run more
than once, the mean, median and standard deviation of the runs will be reported.

Additionally the `--benchmark_report_aggregates_only={true|false}`,
`--benchmark_display_aggregates_only={true|false}` flags or the
`ReportAggregatesOnly(bool)`, `DisplayAggregatesOnly(bool)` functions can be
used to change how repeated tests are reported. By default the result of each
repeated run is reported. When the `report aggregates only` option is `true`,
only the aggregates (i.e. mean, median and standard deviation, and possibly
complexity measurements if they were requested) of the runs are reported, to
both reporters - standard output (console), and the file.
However when only the `display aggregates only` option is `true`,
only the aggregates are displayed in the standard output, while the file
output still contains everything.
Calling `ReportAggregatesOnly(bool)` / `DisplayAggregatesOnly(bool)` on a
registered benchmark object overrides the value of the appropriate flag for that
benchmark.
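
For instance, a minimal sketch of requesting repetitions and aggregate-only
reporting at registration time (`BM_test` here stands in for any benchmark
function registered elsewhere):

```c++
// Run BM_test 10 times and report only the aggregates (mean, median and
// standard deviation) of those runs, regardless of the command-line flags.
BENCHMARK(BM_test)->Repetitions(10)->ReportAggregatesOnly(true);
```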

## User-defined statistics for repeated benchmarks

While having mean, median and standard deviation is nice, this may not be
enough for everyone. For example you may want to know what the largest
observation is, e.g. because you have some real-time constraints. This is easy.
The following code will specify a custom statistic to be calculated, defined
by a lambda function.

```c++
void BM_spin_empty(benchmark::State& state) {
  for (auto _ : state) {
    for (int x = 0; x < state.range(0); ++x) {
      benchmark::DoNotOptimize(x);
    }
  }
}

BENCHMARK(BM_spin_empty)
  ->ComputeStatistics("max", [](const std::vector<double>& v) -> double {
    return *(std::max_element(std::begin(v), std::end(v)));
  })
  ->Arg(512);
```

## Fixtures
Fixture tests are created by
first defining a type that derives from `::benchmark::Fixture` and then
creating/registering the tests using the following macros:

* `BENCHMARK_F(ClassName, Method)`
* `BENCHMARK_DEFINE_F(ClassName, Method)`
* `BENCHMARK_REGISTER_F(ClassName, Method)`

For example:

```c++
class MyFixture : public benchmark::Fixture {
 public:
  void SetUp(const ::benchmark::State& state) {
  }

  void TearDown(const ::benchmark::State& state) {
  }
};

BENCHMARK_F(MyFixture, FooTest)(benchmark::State& st) {
  for (auto _ : st) {
    ...
  }
}

BENCHMARK_DEFINE_F(MyFixture, BarTest)(benchmark::State& st) {
  for (auto _ : st) {
    ...
  }
}
/* BarTest is NOT registered */
BENCHMARK_REGISTER_F(MyFixture, BarTest)->Threads(2);
/* BarTest is now registered */
```

### Templated fixtures
You can also create templated fixtures by using the following macros:

* `BENCHMARK_TEMPLATE_F(ClassName, Method, ...)`
* `BENCHMARK_TEMPLATE_DEFINE_F(ClassName, Method, ...)`

For example:
```c++
template<typename T>
class MyFixture : public benchmark::Fixture {};

BENCHMARK_TEMPLATE_F(MyFixture, IntTest, int)(benchmark::State& st) {
  for (auto _ : st) {
    ...
  }
}

BENCHMARK_TEMPLATE_DEFINE_F(MyFixture, DoubleTest, double)(benchmark::State& st) {
  for (auto _ : st) {
    ...
  }
}

BENCHMARK_REGISTER_F(MyFixture, DoubleTest)->Threads(2);
```

## User-defined counters

You can add your own counters with user-defined names. The example below
will add columns "Foo", "Bar" and "Baz" to its output:

```c++
static void UserCountersExample1(benchmark::State& state) {
  double numFoos = 0, numBars = 0, numBazs = 0;
  for (auto _ : state) {
    // ... count Foo,Bar,Baz events
  }
  state.counters["Foo"] = numFoos;
  state.counters["Bar"] = numBars;
  state.counters["Baz"] = numBazs;
}
```

The `state.counters` object is a `std::map` with `std::string` keys
and `Counter` values. The latter is a `double`-like class, via an implicit
conversion to `double&`. Thus you can use all of the standard arithmetic
assignment operators (`=,+=,-=,*=,/=`) to change the value of each counter.

In multithreaded benchmarks, each counter is set on the calling thread only.
When the benchmark finishes, the counters from each thread will be summed;
the resulting sum is the value which will be shown for the benchmark.

The `Counter` constructor accepts three parameters: the value as a `double`;
a bit flag which allows you to show counters as rates, and/or as per-thread
iteration, and/or as per-thread averages, and/or iteration invariants;
and a flag specifying the 'unit' - i.e. is 1k a 1000 (default,
`benchmark::Counter::OneK::kIs1000`), or 1024
(`benchmark::Counter::OneK::kIs1024`)?

```c++
// sets a simple counter
state.counters["Foo"] = numFoos;

// Set the counter as a rate. It will be presented divided
// by the duration of the benchmark.
state.counters["FooRate"] = Counter(numFoos, benchmark::Counter::kIsRate);

// Set the counter as a thread-average quantity. It will
// be presented divided by the number of threads.
state.counters["FooAvg"] = Counter(numFoos, benchmark::Counter::kAvgThreads);

// There's also a combined flag:
state.counters["FooAvgRate"] = Counter(numFoos, benchmark::Counter::kAvgThreadsRate);

// This says that we process with the rate of state.range(0) bytes every iteration:
state.counters["BytesProcessed"] = Counter(state.range(0), benchmark::Counter::kIsIterationInvariantRate, benchmark::Counter::OneK::kIs1024);
```

When you're compiling in C++11 mode or later you can use `insert()` with
`std::initializer_list`:

```c++
// With C++11, this can be done:
state.counters.insert({{"Foo", numFoos}, {"Bar", numBars}, {"Baz", numBazs}});
// ... instead of:
state.counters["Foo"] = numFoos;
state.counters["Bar"] = numBars;
state.counters["Baz"] = numBazs;
```

### Counter reporting

When using the console reporter, by default, user counters are printed at
the end after the table, the same way as ``bytes_processed`` and
``items_processed``. This is best for cases in which there are few counters,
or where there are only a couple of lines per benchmark. Here's an example of
the default output:

```
------------------------------------------------------------------------------
Benchmark                        Time           CPU Iterations UserCounters...
------------------------------------------------------------------------------
BM_UserCounter/threads:8      2248 ns      10277 ns      68808 Bar=16 Bat=40 Baz=24 Foo=8
BM_UserCounter/threads:1      9797 ns       9788 ns      71523 Bar=2 Bat=5 Baz=3 Foo=1024m
BM_UserCounter/threads:2      4924 ns       9842 ns      71036 Bar=4 Bat=10 Baz=6 Foo=2
BM_UserCounter/threads:4      2589 ns      10284 ns      68012 Bar=8 Bat=20 Baz=12 Foo=4
BM_UserCounter/threads:8      2212 ns      10287 ns      68040 Bar=16 Bat=40 Baz=24 Foo=8
BM_UserCounter/threads:16     1782 ns      10278 ns      68144 Bar=32 Bat=80 Baz=48 Foo=16
BM_UserCounter/threads:32     1291 ns      10296 ns      68256 Bar=64 Bat=160 Baz=96 Foo=32
BM_UserCounter/threads:4      2615 ns      10307 ns      68040 Bar=8 Bat=20 Baz=12 Foo=4
BM_Factorial                    26 ns         26 ns   26608979 40320
BM_Factorial/real_time          26 ns         26 ns   26587936 40320
BM_CalculatePiRange/1           16 ns         16 ns   45704255 0
BM_CalculatePiRange/8           73 ns         73 ns    9520927 3.28374
BM_CalculatePiRange/64         609 ns        609 ns    1140647 3.15746
BM_CalculatePiRange/512       4900 ns       4901 ns     142696 3.14355
```

If this doesn't suit you, you can print each counter as a table column by
passing the flag `--benchmark_counters_tabular=true` to the benchmark
application. This is best for cases in which there are a lot of counters, or
a lot of lines per individual benchmark. Note that this will trigger a
reprinting of the table header any time the counter set changes between
individual benchmarks. Here's an example of corresponding output when
`--benchmark_counters_tabular=true` is passed:

```
---------------------------------------------------------------------------------------
Benchmark                        Time           CPU Iterations    Bar   Bat   Baz   Foo
---------------------------------------------------------------------------------------
BM_UserCounter/threads:8      2198 ns       9953 ns      70688     16    40    24     8
BM_UserCounter/threads:1      9504 ns       9504 ns      73787      2     5     3     1
BM_UserCounter/threads:2      4775 ns       9550 ns      72606      4    10     6     2
BM_UserCounter/threads:4      2508 ns       9951 ns      70332      8    20    12     4
BM_UserCounter/threads:8      2055 ns       9933 ns      70344     16    40    24     8
BM_UserCounter/threads:16     1610 ns       9946 ns      70720     32    80    48    16
BM_UserCounter/threads:32     1192 ns       9948 ns      70496     64   160    96    32
BM_UserCounter/threads:4      2506 ns       9949 ns      70332      8    20    12     4
--------------------------------------------------------------
Benchmark                        Time           CPU Iterations
--------------------------------------------------------------
BM_Factorial                    26 ns         26 ns   26392245 40320
BM_Factorial/real_time          26 ns         26 ns   26494107 40320
BM_CalculatePiRange/1           15 ns         15 ns   45571597 0
BM_CalculatePiRange/8           74 ns         74 ns    9450212 3.28374
BM_CalculatePiRange/64         595 ns        595 ns    1173901 3.15746
BM_CalculatePiRange/512       4752 ns       4752 ns     147380 3.14355
BM_CalculatePiRange/4k       37970 ns      37972 ns      18453 3.14184
BM_CalculatePiRange/32k     303733 ns     303744 ns       2305 3.14162
BM_CalculatePiRange/256k   2434095 ns    2434186 ns        288 3.1416
BM_CalculatePiRange/1024k  9721140 ns    9721413 ns         71 3.14159
BM_CalculatePi/threads:8      2255 ns       9943 ns      70936
```

Note above the additional header printed when the benchmark changes from
``BM_UserCounter`` to ``BM_Factorial``. This is because ``BM_Factorial`` does
not have the same counter set as ``BM_UserCounter``.

## Exiting Benchmarks in Error

When errors caused by external influences, such as file I/O and network
communication, occur within a benchmark, the
`State::SkipWithError(const char* msg)` function can be used to skip that run
of the benchmark and report the error. Note that only future iterations of the
`KeepRunning()` loop are skipped. For the range-based for version of the benchmark
loop, users must explicitly exit the loop, otherwise all iterations will be performed.
Users may explicitly return to exit the benchmark immediately.

The `SkipWithError(...)` function may be used at any point within the benchmark,
including before and after the benchmark loop.

For example:

```c++
static void BM_test(benchmark::State& state) {
  auto resource = GetResource();
  if (!resource.good()) {
    state.SkipWithError("Resource is not good!");
    // KeepRunning() loop will not be entered.
  }
  while (state.KeepRunning()) {
    auto data = resource.read_data();
    if (!resource.good()) {
      state.SkipWithError("Failed to read data!");
      break; // Needed to skip the rest of the iteration.
    }
    do_stuff(data);
  }
}

static void BM_test_ranged_fo(benchmark::State & state) {
  state.SkipWithError("test will not be entered");
  for (auto _ : state) {
    state.SkipWithError("Failed!");
    break; // REQUIRED to prevent all further iterations.
  }
}
```

## Running a subset of the benchmarks

The `--benchmark_filter=<regex>` option can be used to only run the benchmarks
which match the specified `<regex>`. For example:

```bash
$ ./run_benchmarks.x --benchmark_filter=BM_memcpy/32
Run on (1 X 2300 MHz CPU )
2016-06-25 19:34:24
Benchmark              Time           CPU Iterations
----------------------------------------------------
BM_memcpy/32          11 ns         11 ns   79545455
BM_memcpy/32k       2181 ns       2185 ns     324074
BM_memcpy/32          12 ns         12 ns   54687500
BM_memcpy/32k       1834 ns       1837 ns     357143
```

## Runtime and reporting considerations

When the benchmark binary is executed, each benchmark function is run serially.
The number of iterations to run is determined dynamically by running the
benchmark a few times, measuring the time taken, and ensuring that the
ultimate result will be statistically stable. As such, faster benchmark
functions will be run for more iterations than slower benchmark functions, and
the number of iterations is thus reported.

In all cases, the number of iterations for which the benchmark is run is
governed by the amount of time the benchmark takes. Concretely, the number of
iterations is at least one, not more than 1e9, until CPU time is greater than
the minimum time, or the wallclock time is 5x the minimum time. The minimum time
is set per benchmark by calling `MinTime` on the registered benchmark object.
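
As a minimal sketch (reusing the `BM_test` benchmark from the earlier examples
as a stand-in), the minimum time can be raised for a particularly noisy
benchmark:

```c++
// Iterate BM_test until at least 2 seconds have been spent in the benchmark
// loop, instead of the default minimum time.
BENCHMARK(BM_test)->MinTime(2.0);
```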

Average timings are then reported over the iterations run. If multiple
repetitions are requested using the `--benchmark_repetitions` command-line
option, or at registration time, the benchmark function will be run several
times and statistical results across these repetitions will also be reported.

As well as the per-benchmark entries, a preamble in the report will include
information about the machine on which the benchmarks are run.

### Output Formats
The library supports multiple output formats. Use the
`--benchmark_format=<console|json|csv>` flag to set the format type. `console`
is the default format.

The Console format is intended to be a human readable format. By default
the format generates color output. Context is output on stderr and the
tabular data on stdout. Example tabular output looks like:
```
Benchmark                               Time(ns)    CPU(ns) Iterations
----------------------------------------------------------------------
BM_SetInsert/1024/1                        28928      29349      23853  133.097kB/s   33.2742k items/s
BM_SetInsert/1024/8                        32065      32913      21375  949.487kB/s   237.372k items/s
BM_SetInsert/1024/10                       33157      33648      21431  1.13369MB/s   290.225k items/s
```

The JSON format outputs human readable json split into two top level attributes.
The `context` attribute contains information about the run in general, including
information about the CPU and the date.
The `benchmarks` attribute contains a list of every benchmark run. Example json
output looks like:
```json
{
  "context": {
    "date": "2015/03/17-18:40:25",
    "num_cpus": 40,
    "mhz_per_cpu": 2801,
    "cpu_scaling_enabled": false,
    "build_type": "debug"
  },
  "benchmarks": [
    {
      "name": "BM_SetInsert/1024/1",
      "iterations": 94877,
      "real_time": 29275,
      "cpu_time": 29836,
      "bytes_per_second": 134066,
      "items_per_second": 33516
    },
    {
      "name": "BM_SetInsert/1024/8",
      "iterations": 21609,
      "real_time": 32317,
      "cpu_time": 32429,
      "bytes_per_second": 986770,
      "items_per_second": 246693
    },
    {
      "name": "BM_SetInsert/1024/10",
      "iterations": 21393,
      "real_time": 32724,
      "cpu_time": 33355,
      "bytes_per_second": 1199226,
      "items_per_second": 299807
    }
  ]
}
```

The CSV format outputs comma-separated values. The `context` is output on stderr
and the CSV itself on stdout. Example CSV output looks like:
```
name,iterations,real_time,cpu_time,bytes_per_second,items_per_second,label
"BM_SetInsert/1024/1",65465,17890.7,8407.45,475768,118942,
"BM_SetInsert/1024/8",116606,18810.1,9766.64,3.27646e+06,819115,
"BM_SetInsert/1024/10",106365,17238.4,8421.53,4.74973e+06,1.18743e+06,
```

### Output Files
The library supports writing the output of the benchmark to a file specified
by `--benchmark_out=<filename>`. The format of the output can be specified
using `--benchmark_out_format={json|console|csv}`. Specifying
`--benchmark_out` does not suppress the console output.

## Result comparison

It is possible to compare the benchmarking results. See
[Additional Tooling Documentation](docs/tools.md)

## Debug vs Release
By default, benchmark builds as a debug library. You will see a warning in the
output when this is the case. To build it as a release library instead, use:

```
cmake -DCMAKE_BUILD_TYPE=Release
```

To enable link-time optimisation, use:

```
cmake -DCMAKE_BUILD_TYPE=Release -DBENCHMARK_ENABLE_LTO=true
```

If you are using gcc, you might need to set the `GCC_AR` and `GCC_RANLIB` cmake
cache variables if autodetection fails.

If you are using clang, you may need to set the `LLVMAR_EXECUTABLE`,
`LLVMNM_EXECUTABLE` and `LLVMRANLIB_EXECUTABLE` cmake cache variables.

## Compiler Support

Google Benchmark uses C++11 when building the library. As such we require
a modern C++ toolchain, both compiler and standard library.

The following minimum versions are strongly recommended to build the library:

* GCC 4.8
* Clang 3.4
* Visual Studio 2013
* Intel 2015 Update 1

Anything older *may* work.

Note: Using the library and its headers in C++03 is supported. C++11 is only
required to build the library.

## Disable CPU frequency scaling
If you see this error:
```
***WARNING*** CPU scaling is enabled, the benchmark real time measurements may be noisy and will incur extra overhead.
```
you might want to disable the CPU frequency scaling while running the benchmark:
```bash
sudo cpupower frequency-set --governor performance
./mybench
sudo cpupower frequency-set --governor powersave
```