This also fixes #1135. Because StrSplit was returning a vector containing a
single empty string, PerfCounters::Create treated it as a legitimate request
to set up a counter with that (empty) name. An empty vector, by contrast, is
understood by PerfCounters as "just return NoCounters()".
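A minimal sketch of the intended splitting behavior, assuming a StrSplit helper of this shape (the real helper lives in string_util and may differ in detail):
```cpp
#include <string>
#include <vector>

// Illustrative only: an empty input yields an empty vector, so downstream
// code such as PerfCounters::Create sees "no counters requested" rather
// than a request for a counter with an empty name.
std::vector<std::string> StrSplit(const std::string& str, char delim) {
  if (str.empty()) return {};
  std::vector<std::string> parts;
  std::size_t start = 0, pos = 0;
  while ((pos = str.find(delim, start)) != std::string::npos) {
    parts.push_back(str.substr(start, pos - start));
    start = pos + 1;
  }
  parts.push_back(str.substr(start));
  return parts;
}
```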
* Add API to benchmark allowing for custom context to be added
Fixes #525
* add docs
* Add context flag output to JSON reporter
* Plumb everything into the global context.
* Add googletests for custom context
* update docs with duplicate key behaviour
* Add `benchmark_context` flag that allows per-run custom context.
Add support for key-value flags in general.
Added test for key-value flags.
Added `benchmark_context` flag.
Output content of `benchmark_context` to base reporter.
Solves the first part of #525.
* Docs and better help
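A brief sketch of how the programmatic side fits together, assuming the AddCustomContext() entry point introduced by these commits (the key/value pair and benchmark body are illustrative):
```cpp
#include <benchmark/benchmark.h>

static void BM_Noop(benchmark::State& state) {
  for (auto _ : state) {
  }
}
BENCHMARK(BM_Noop);

int main(int argc, char** argv) {
  benchmark::Initialize(&argc, argv);
  // Programmatic key/value context, emitted once in the reporter's context
  // block (e.g. the JSON "context" object).
  benchmark::AddCustomContext("compiler_flags", "-O2 -march=native");
  benchmark::RunSpecifiedBenchmarks();
  return 0;
}
```
Per-run key/value context can also be supplied on the command line via the `benchmark_context` flag described above.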
* Support optional, user-directed collection of performance counters
This supports, for example, an engineer wishing to drill into the root
causes of a regression. Currently, only single-threaded runs are
supported. The feature is a build-time opt-in, and then a runtime
opt-in.
The engineer may run the benchmark executable, passing a list of
performance counter names (using libpfm's naming scheme) at the
command line. The counter values will then be collected and reported
back as UserCounters.
This is different from #240 in that it is a benchmark user opt-in, and
the counter collection is transparent to the benchmark.
Currently, this is only available on platforms where libpfm is
supported.
libpfm: http://perfmon2.sourceforge.net/
* 'Use' values param in Snapshot when BENCHMARK_OS_WINDOWS
This is to avoid an unused-parameter warning being treated as an error
* Added missing include for <vector> in perf_counters.cc
* Moved doc to docs
* Added license blurbs
Use the benchmark's reported iteration count when estimating
iterations for the next repetition, rather than the requested
iteration count. When the benchmark uses KeepRunningBatch the actual
iteration count can be larger than the one the runner requested.
Prior to this fix the runner was underestimating the next iteration
count, sometimes significantly so. Consider the case of a benchmark
using a batch size of 1024. Prior to this change, the benchmark
runner would attempt iteration counts 1, 10, 100 and 1000, yet the
benchmark itself would do the same amount of work each time: a single
batch of 1024 iterations. The discrepancy could also contribute to
estimation errors once the benchmark time reached 10% of the target.
For example, if the very first batch of 1024 iterations reached 10% of
benchmark_min_time, the runner would attempt to scale that to 100%
from a basis of one iteration rather than 1024.
This bug was particularly noticeable in benchmarks with large batch
sizes, especially when the benchmark also had slow set up or tear down
phases.
With this fix in place it is possible to use KeepRunningBatch to
achieve a kind of "minimum iteration count" feature by using a larger
fixed batch size. For example, a benchmark may build a map of 500K
elements and test a "find" operation. There is no point in running
"find" just 1, 10, 100, etc., times. The benchmark can now pick a
batch size of something like 10K, and the runner will arrive at the
final max iteration count in noticeably fewer repetitions.
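A minimal sketch of the "minimum iteration count" pattern described above (the 500K-element map and 10K batch size are illustrative values taken from the example):
```cpp
#include <benchmark/benchmark.h>
#include <map>

static void BM_MapFind(benchmark::State& state) {
  const int kBatchSize = 10000;
  std::map<int, int> m;
  for (int i = 0; i < 500000; ++i) m[i] = i;  // slow setup, done once per run
  // Each pass through the measured region performs a whole batch of lookups,
  // so the iteration count grows in units of kBatchSize.
  while (state.KeepRunningBatch(kBatchSize)) {
    for (int i = 0; i < kBatchSize; ++i) {
      benchmark::DoNotOptimize(m.find(i % 500000));
    }
  }
}
BENCHMARK(BM_MapFind);
```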
* Implement custom benchmark name
The benchmark's name can be changed using the Name() function
which internally uses SetName().
* Update AUTHORS and CONTRIBUTORS
* Describe new feature in README
* Move new name function up
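A short usage sketch of the new Name() setter (the benchmark body and the chosen display name are illustrative):
```cpp
#include <benchmark/benchmark.h>
#include <vector>

static void BM_VectorPushBack(benchmark::State& state) {
  for (auto _ : state) {
    std::vector<int> v;
    v.push_back(42);
    benchmark::DoNotOptimize(v.data());
  }
}
// Without Name() this would be reported as "BM_VectorPushBack".
BENCHMARK(BM_VectorPushBack)->Name("push_back/vector<int>");
```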
Fixes #1106
The existing behavior results in the `0` value being added twice. Since
`lo` is always added to `dst`, we never want to explicitly add `0` if
`lo` is equal to `0`.
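A hypothetical sketch of the guard being described; the helper name and surrounding code are illustrative, not the actual AddRange implementation:
```cpp
#include <cstdint>
#include <vector>

// `lo` is always appended, so zero is appended explicitly only when it lies
// strictly inside the range, i.e. when lo is negative. When lo == 0 the
// explicit append is skipped, so 0 is no longer added twice.
void AddEndpoints(std::vector<std::int64_t>* dst, std::int64_t lo,
                  std::int64_t hi) {
  dst->push_back(lo);
  // ... values derived from the range multiplier would be appended here ...
  if (lo < 0 && hi >= 0) dst->push_back(0);  // skipped when lo itself is 0
  dst->push_back(hi);
}
```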
* Add CartesianProduct with associated test
* Use CartesianProduct in Ranges to avoid code duplication
* Add new cartesian_product_test to CMakeLists.txt
* Update AUTHORS & CONTRIBUTORS
* Rename CartesianProduct to ArgsProduct
* Rename test & fixture accordingly
* Add example for ArgsProduct to README
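A usage sketch of ArgsProduct (the benchmark body and argument values are illustrative):
```cpp
#include <benchmark/benchmark.h>
#include <vector>

static void BM_Fill(benchmark::State& state) {
  for (auto _ : state) {
    std::vector<int> v(state.range(0), static_cast<int>(state.range(1)));
    benchmark::DoNotOptimize(v.data());
  }
}
// One run per element of the cartesian product {8, 64, 512} x {0, 1}:
// BM_Fill/8/0, BM_Fill/8/1, ..., BM_Fill/512/1 (6 combinations).
BENCHMARK(BM_Fill)->ArgsProduct({{8, 64, 512}, {0, 1}});
```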
As noted in #995, this causes issues when the command line flag already
starts with "benchmark_", which they all do.
Not caught by tests as the test flags didn't start with "benchmark".
Fixes #995
* JSONReporter: don't report on scaling if we didn't get it (#1005)
* JSONReporter: fix due to review (std::pair<bool, bool> -> enum)
* JSONReporter: scaling: fix the algo (due to review discussion)
* benchmark.h: revert to old-fashioned enums (C++03 compatibility); reporter_output_test: let's skip scaling
* timestamp: use rfc3339-formatted timestamps in output
Replace localized timestamps with machine-readable IETF RFC 3339 format
timestamps. This is an attempt to make the output timestamps easily
machine-readable. ISO8601 specifies standards for time interchange
formats. IETF RFC 3339: https://tools.ietf.org/html/rfc3339 defines a
subset of these for use in the internet. The general form for these
timestamps is:
YYYY-MM-DDTHH:mm:SS[+-]hh:mm
This replaces the localized time formats that are currently being used
in the benchmark output to prioritize interchangeability and
machine-readability.
This might break existing programs that rely on the previous date-time
format, and it may make times less human-readable. RFC 3339 was designed
to balance human readability against simplicity for machine parsing,
though it is primarily intended as an interchange representation.
* timers: remove utc string formatting
We only ever need local time printing. Remove the UTC printing
and consolidate the logic slightly.
* timers: manually create rfc3339 string
The C++ standard library does not output the time offset in RFC 3339
format: it omits the ':' between hours and minutes. VS does not
appear to support timezone information by default. To avoid adding too
much complexity to benchmark around timezone handling (e.g. pulling in a
full date library like https://github.com/HowardHinnant/date), we fall
back to outputting GMT time with a -00:00 offset in those cases.
* timers: use reentrant form for localtime_r & gmtime_r
For non-Windows platforms, use the reentrant forms of the time
conversion functions.
* timers: cleanup
Use strtol instead of brittle character shuffling.
* timers: only call strftime twice.
Also size buffers to the known maximum necessary size and name constants
more appropriately. (A sketch of the resulting formatting approach
follows these timer notes.)
* timers: fix unused variable warning
In a previous commit[1], diagnostic pragmas were used to avoid this
warning. However, the incorrect warning flag was indicated, leaving the
warning in place. -Wdeprecated is for deprecated features, while
-Wdeprecated-declarations is for deprecated functions, variables, and
types[2].
[1] c408461983
[2] https://gcc.gnu.org/onlinedocs/gcc/Warning-Options.html
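A minimal sketch of the timestamp formatting described in the timer notes above, assuming a helper named LocalDateTimeString (the real implementation lives in timers.cc and may differ in detail):
```cpp
#include <cstdio>
#include <cstdlib>
#include <ctime>
#include <string>

// Illustrative only: strftime's "%z" yields "+hhmm"; RFC 3339 wants "+hh:mm",
// so the numeric offset is parsed with strtol and re-printed with a colon.
std::string LocalDateTimeString() {
  std::time_t now = std::time(nullptr);
  std::tm tm_buf;
  char datetime[32];
  char tz[16];
#if defined(_WIN32)
  // Fall back to GMT with a -00:00 offset where timezone data is unavailable.
  gmtime_s(&tm_buf, &now);
  std::strftime(datetime, sizeof(datetime), "%Y-%m-%dT%H:%M:%S", &tm_buf);
  std::snprintf(tz, sizeof(tz), "-00:00");
#else
  localtime_r(&now, &tm_buf);  // reentrant form
  std::strftime(datetime, sizeof(datetime), "%Y-%m-%dT%H:%M:%S", &tm_buf);
  char offset[8];
  std::strftime(offset, sizeof(offset), "%z", &tm_buf);  // e.g. "+0530"
  long hhmm = std::strtol(offset, nullptr, 10);           // e.g. 530
  std::snprintf(tz, sizeof(tz), "%+03ld:%02ld", hhmm / 100, labs(hhmm) % 100);
#endif
  return std::string(datetime) + tz;
}
```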
* Fix type conversion warnings.
Fixes #949
Tested locally (Linux/clang), but the warnings are from MSVC, so results may differ.
* Drop the ULP so the double test passes
* Add State::error_occurred()
* Relax CHECK condition in benchmark_runner.cc
If the benchmark state contains an error, do not expect that any iterations have been run.
This allows using SkipWithError() and returning early from the benchmark function.
* README.md: document new possible usage of SkipWithError()
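A minimal sketch of the now-supported pattern: SkipWithError() followed by an early return before any iteration runs (the file path is a placeholder):
```cpp
#include <benchmark/benchmark.h>
#include <cstdio>

static void BM_ParseInput(benchmark::State& state) {
  std::FILE* f = std::fopen("/nonexistent/input.dat", "rb");  // placeholder
  if (f == nullptr) {
    state.SkipWithError("could not open input file");
    return;  // no iterations run; the relaxed CHECK accepts this
  }
  for (auto _ : state) {
    // ... measured parsing work would go here ...
  }
  std::fclose(f);
}
BENCHMARK(BM_ParseInput);
```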
* add Jordan Williams to both CONTRIBUTORS and AUTHORS
* alias benchmark libraries
Provide aliased CMake targets for the benchmark and benchmark_main targets.
The alias targets are namespaced under benchmark::, which is the namespace when they are exported.
I chose not to use either the PROJECT_NAME or the namespace variable but to hard-code the namespace.
This is because the benchmark and benchmark_main targets are hard-coded by name themselves.
Hard-coding the namespace is also much cleaner and easier to read.
* link to aliased benchmark targets
It is safer to link against namespaced targets because of how CMake interprets the double colon.
Typos will be caught by CMake at configuration time instead of at compile/link time.
* document the provided alias targets
* add "Usage with CMake" section in documentation
This section covers linking against the alias/import CMake targets and including them using either find_package or add_subdirectory.
* format the "Usage with CMake" README section
Added a newline after the "Usage with CMake" section header.
Dropped the header level of the section by one to make it a direct subsection of the "Usage" section.
Wrapped lines to be no longer than 80 characters in length.
* CTest must use proper paths to executables
With the following syntax:
```
add_test(NAME <name> COMMAND <command> [<arg>...])
```
if `<command>` specifies an executable target it will automatically
be replaced by the location of the executable created at build time.
This is important if a `<Configuration>_POSTFIX` like `_d` is used.
* Fix typo in ctest invocation
Instead of `-c`, the uppercase `-C` must be used to select a config.
Better yet, use the long option.
Initialize option flags from environment variable values if they are defined, e.g. `BENCHMARK_OUT=<filename>` for `--benchmark_out=<filename>`. The command-line flag value always prevails.
Fixes https://github.com/google/benchmark/issues/881.
* Guard ASSERT_THROWS checks with BENCHMARK_HAS_NO_EXCEPTIONS
This allows the test to be run with exceptions turned off
* Add myself to CONTRIBUTORS
I don't need to be added to AUTHORS, as I am a Google employee
While the current counters can, e.g., answer the question
"how many items are processed per second", there is no way to get
them to report "how many seconds it takes to process a single item".
The solution is to add yet another modifier, `kInvert`,
which is *always* applied last and simply inverts the value.
Fixes #781, #830, #848.
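A usage sketch of the new modifier (the counter names and per-item work are illustrative):
```cpp
#include <benchmark/benchmark.h>

static void BM_ProcessItem(benchmark::State& state) {
  double x = 1.0;
  for (auto _ : state) {
    x *= 1.000001;                 // stand-in for per-item work
    benchmark::DoNotOptimize(x);
  }
  // Items per second, as before...
  state.counters["items_per_sec"] = benchmark::Counter(
      static_cast<double>(state.iterations()), benchmark::Counter::kIsRate);
  // ...and the inverse: seconds per item.
  state.counters["sec_per_item"] = benchmark::Counter(
      static_cast<double>(state.iterations()),
      benchmark::Counter::kIsRate | benchmark::Counter::kInvert);
}
BENCHMARK(BM_ProcessItem);
```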
The CSVReporter is deprecated, but we still need to reference it in
a few places. To avoid breaking the build when warnings are errors,
we need to disable the warning when we do so.
* Update AUTHORS and CONTRIBUTORS
* Fix WSL self-test failures
Some of the benchmark self-tests expect and check for a particular
output format from the benchmark library. The numerical values must
not be infinity or not-a-number, or the test will report an error.
Some of the values are computed bytes-per-second or items-per-second
values, so these require the measured CPU time for the test to be
non-zero. But the loop being measured was empty, so the measured CPU
time for the loop was extremely small. On systems like Windows
Subsystem for Linux (WSL), the timer doesn't have enough resolution to
measure this, so the measured CPU time was zero.
This fix just makes sure that these tests have something within the
timing loop, so that the benchmark library will not decide that the
loop takes zero CPU time. This makes these tests more robust, and in
particular makes them pass on WSL.
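A sketch of the kind of change involved, assuming a previously empty timing loop (the benchmark name is illustrative):
```cpp
#include <benchmark/benchmark.h>

static void BM_SpinSomething(benchmark::State& state) {
  int x = 0;
  for (auto _ : state) {
    // Give the loop a tiny amount of real work so low-resolution timers
    // (e.g. under WSL) never observe zero CPU time.
    benchmark::DoNotOptimize(x += 1);
  }
  state.SetItemsProcessed(state.iterations());
}
BENCHMARK(BM_SpinSomething);
```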
This is a shameless rip-off of https://github.com/google/benchmark/pull/646
I did promise to look into why that proposed PR was producing
so much worse assembly, and so I finally did.
The reason is - that diff changes `size_t` (unsigned) to `int64_t` (signed).
There is this nice little `assert`:
7a1c370283/include/benchmark/benchmark.h (L744)
It ensures that we didn't magically decide to advance our iterator
when we should have finished benchmarking.
When `cached_` was unsigned, the `assert` was `cached_ UGT 0`.
But we only ever get to that `assert` if `cached_ NE 0`,
and naturally if `cached_` is not `0`, then it is bigger than `0`,
so the `assert` is tautological, and gets folded away.
But now that `cached_` became signed, the assert became `cached_ SGT 0`.
And we still only know that `cached_ NE 0`, so the assert can't be
optimized out, or at least it currently isn't.
Regardless of whether or not that is a bug in itself,
that particular diff would have regressed the normal 64-bit systems,
by halving the maximal iteration space (since we go from unsigned counter
to signed one, of the same bit-width), which seems like a bug.
And just so it happens, fixing *this* bug, fixes the other bug.
This produces fully (bit-by-bit) identical state_assembly_test.s
The filecheck change is actually needed regardless of this patch,
else this test does not pass for me even without this diff.
https://github.com/google/benchmark/pull/801 is stuck with some cryptic cmake failure due to
some linking issue between googletest and threading libraries.
I suspect that is mostly happening because of the, uhm,
intentionally extremely twisted-in-the-brains approach that is being used to
actually build the library as part of the build,
except without actually building it as part of the build.
If we do actually build it as part of the build,
then all the transitive dependencies should magically be in order,
and maybe everything will just work.
This new version of cmake magic was written by me in
0e22f085c5/cmake/Modules/GoogleTest.cmake.in and
0e22f085c5/cmake/Modules/GoogleTest.cmake, based on the official googletest docs and LOTS of experimentation.
* escape special chars in csv and json output.
- escape \b, \f, \n, \r, \t, \\ and " in strings before dumping
them to JSON or CSV.
- also faithfully reproduce the sign of NaN in JSON.
This fixes GitHub issue #745.
* functionalize.
* split string escape functions between csv and json
* Update src/csv_reporter.cc
Co-Authored-By: tesch1 <tesch1@gmail.com>
* Update src/json_reporter.cc
Co-Authored-By: tesch1 <tesch1@gmail.com>
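A hypothetical sketch of the kind of escaping applied before strings reach the JSON output (the helper name is illustrative; the actual code is split between the CSV and JSON reporters):
```cpp
#include <string>

static std::string EscapeForJson(const std::string& in) {
  std::string out;
  out.reserve(in.size());
  for (char c : in) {
    switch (c) {
      case '\b': out += "\\b";  break;
      case '\f': out += "\\f";  break;
      case '\n': out += "\\n";  break;
      case '\r': out += "\\r";  break;
      case '\t': out += "\\t";  break;
      case '\\': out += "\\\\"; break;
      case '"':  out += "\\\""; break;
      default:   out += c;      break;
    }
  }
  return out;
}
```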
* Add FIXME in multiple_ranges_test.cc
* Improve handling of large bounds in AddRange.
Due to breaking the loop too early, AddRange
would miss a final multiplier of 'mult' that
was within the numeric range of T.
* Enable negative values for Range argument
Fixes #762. (A usage sketch follows the build-fix notes below.)
* Try to fix build of benchmark_gtest
* Try some more to fix build
* Attempt to fix format macros
* Attempt to resolve format errors for mingw32
* Review feedback
Put unit tests in benchmark::internal namespace
Fix error reporting in multiple_ranges_test.cc
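A short usage sketch of negative Range bounds (the benchmark body is illustrative):
```cpp
#include <benchmark/benchmark.h>
#include <cstdint>

static void BM_AbsoluteValue(benchmark::State& state) {
  const int64_t v = state.range(0);
  for (auto _ : state) {
    benchmark::DoNotOptimize(v < 0 ? -v : v);
  }
}
// Generates arguments spanning the negative and positive parts of the range,
// e.g. -64, ..., -1, 0, 1, ..., 64 (subject to the range multiplier).
BENCHMARK(BM_AbsoluteValue)->Range(-64, 64);
```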
* [JSON] add threads and repetitions to the json output, for better ide…
[Tests] explicitly check for thread == 1
[Tests] specifically mark all repetition checks
[JSON] add repetition_index reporting, but only for non-aggregates (i…
* [Formatting] Be very, very explicit about pointer alignment so clang-format can not put pointers/references on the wrong side of arguments.
[Benchmark::Run] Make sure to use explanatory sentinel variable rather than a magic number.
* Do not pass redundant information
Created BenchmarkName class which holds the full benchmark
name and allows specifying and retrieving different components
of the name (e.g. ARGS, THREADS, etc.).
Fixes #730.
Some benchmarks are particularly sensitive and run in less than
a nanosecond. In order for the console reporter to provide meaningful
output for such benchmarks it needs to be able to display the times
using more resolution than a single nanosecond.
This patch changes the console reporter to print at least three
significant digits for all results.
Unlike the initial attempt, this patch does not align the decimal point.
* Adding Host Name and test
* Addressing Review Comments
* Adding Test for JSON Reporter
* Adding HOST_NAME_MAX for MacOS systems
* Adding Explanation for MacOS HOST_NAME_MAX Addition
* Addressing Peer Review Comments
* Adding codecvt in windows header guard
* Changing name to SystemInfo and adding empty message in case host name fetch fails
* Adding Comment on Struct SystemInfo
Unit-tests fail to build due to the following errors:
/home/cfx/Dev/google-benchmark/benchmark.git/test/string_util_gtest.cc:12:5: required from here
/home/cfx/Applications/googletest-1.8.1/include/gtest/gtest.h:1444:11: error: comparison between signed and unsigned integer expressions [-Werror=sign-compare]
if (lhs == rhs) {
~~~~^~~~~~
Fixes #741
std::tmpnam is deprecated and its use is discouraged. For our purposes
in the tests, we really just need a file name which is unlikely to
exist.
This patch converts the tests to using a dummy random file name
generator, which should hopefully avoid name conflicts.
It is incorrect to say that an aggregate is computed over a
run's iterations, because those iterations already got averaged.
Similarly, if there are N repetitions with 1 iteration each,
an aggregate will be computed over N measurements, not 1.
Thus it is best to simply use the count of separate reports.
Fixes #586.
As discussed with @dominichamon and @dbabokin, sugar is nice.
Well, maybe not for the health, but it's sweet.
Alright, enough puns.
Special care needs to be taken not to break the CSV reporter. UGH.
We end up shedding some code over this.
We no longer pretty-print them specially; they are printed just like the rest of the custom counters.
Fixes #627.
This is related to @BaaMeow's work in https://github.com/google/benchmark/pull/616 but is not based on it.
Two new fields are tracked, and dumped into JSON:
* If the run is an aggregate, the aggregate's name is stored.
It can be RMS, BigO, mean, median, stddev, or any custom stat name.
* The run name without the aggregate name is additionally stored.
I.e. not the name of the benchmark function, but the actual run
name, just without the 'aggregate name' suffix.
This way one can group/filter all the runs,
and filter by the particular aggregate type.
I *might* need this for further tooling improvement.
Or maybe not.
But this is certainly worthwhile for custom tooling.
* Counter(): add 'one thousand' param.
Needed for https://github.com/google/benchmark/pull/654
Custom user counters are quite custom. It is not guaranteed
that the user *always* expects these to have 1k == 1000.
If the counter represents bytes/memory/etc, 1k should be 1024.
Some bikeshedding points:
1. Is this sufficient, or do we really want to go full on
into custom types with names?
I think just the '1000' is sufficient for now.
2. Should there be helper benchmark::Counter::Counter{1000,1024}()
static 'constructor' functions, since these two will, by far,
be the most used?
3. In the future, we should be somehow encoding this info into JSON.
* Counter(): use std::pair<> to represent 'one thousand'
* Counter(): just use a new enum with two values 1000 vs 1024.
Simpler is better. If someone comes up with a real reason
to need something more advanced, it can be added later on.
* Counter: just store the 1000 or 1024 in the One_K values directly
* Counter: s/One_K/OneK/
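A usage sketch of the resulting parameter (the benchmark body and counter name are illustrative):
```cpp
#include <benchmark/benchmark.h>
#include <algorithm>
#include <vector>

static void BM_MemoryCopy(benchmark::State& state) {
  std::vector<char> src(state.range(0)), dst(state.range(0));
  for (auto _ : state) {
    std::copy(src.begin(), src.end(), dst.begin());
    benchmark::DoNotOptimize(dst.data());
  }
  // A bytes-per-second rate counter where "1k" means 1024 rather than 1000.
  state.counters["bytes_per_second"] = benchmark::Counter(
      static_cast<double>(state.iterations()) * state.range(0),
      benchmark::Counter::kIsRate, benchmark::Counter::kIs1024);
}
BENCHMARK(BM_MemoryCopy)->Arg(1 << 20);
```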
This is *only* exposed in the JSON, not in CSV, which is deprecated.
This is *only* supposed to track these two states.
An additional field could later track which aggregate this is,
specifically (statistic name, rms, bigo, ...)
The motivation is that we already have ReportAggregatesOnly,
but it affects the entire report, both the display
and the reporters (JSON files), which isn't ideal.
It would be very useful to have a 'display aggregates only' option,
both in the library's console reporter and in the Python tooling.
This will be especially needed for 'store separate iterations'.
Found while working on reproducible builds for openSUSE.
To reproduce there:
osc checkout openSUSE:Factory/benchmark && cd $_
osc build -j1 --vm-type=kvm