* Rewrite complexity_test to use (hardcoded) manual time
This test is fundamentally flaky, because it tries to read tea leaves,
and inherently misbehaves in CI environments,
since there are unmitigated sources of noise.
That being said, the computed Big-O also depends on the `--benchmark_min_time=` setting.
Fixes https://github.com/google/benchmark/issues/272
* Correctly compute Big-O for manual timings. Fixes #1758.
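A minimal sketch of the combination this fixes, i.e. a complexity benchmark that reports manual timings (the benchmark name and the O(N) body are illustrative, not from the patch):

```cpp
#include <benchmark/benchmark.h>
#include <chrono>
#include <cstddef>
#include <vector>

// Hypothetical benchmark: times the work manually and lets the library
// fit a Big-O curve against the manually reported per-iteration times.
static void BM_ManualComplexity(benchmark::State& state) {
  const auto n = state.range(0);
  std::vector<int> v(static_cast<std::size_t>(n), 1);
  for (auto _ : state) {
    auto start = std::chrono::steady_clock::now();
    long sum = 0;
    for (int x : v) sum += x;  // O(N) work
    benchmark::DoNotOptimize(sum);
    auto end = std::chrono::steady_clock::now();
    state.SetIterationTime(std::chrono::duration<double>(end - start).count());
  }
  state.SetComplexityN(n);
}
BENCHMARK(BM_ManualComplexity)
    ->RangeMultiplier(2)
    ->Range(1 << 10, 1 << 18)
    ->UseManualTime()
    ->Complexity(benchmark::oN);
BENCHMARK_MAIN();
```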
* complexity_test: do more stuff in empty loop
* Make all the empty loops take a bit longer
Looks like on Windows some of these tests still fail;
I guess the clock precision is too low.
* CMake: `get_git_version()`: just use `--dirty` flag of `git describe`
* CMake: move version normalization out of `get_git_version()`
Mainly, I want `get_git_version()` to return the true version,
not something sanitized.
* JSON reporter: store library version and schema version in `context`
* Tools: discard inputs with unexpected `json_schema_version`
* Extract version string into `GetBenchmarkVersion()`
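A rough usage sketch, assuming the accessor is spelled `GetBenchmarkVersion()` and returns a printable string:

```cpp
#include <benchmark/benchmark.h>
#include <iostream>

int main() {
  // Assumed accessor added by this change; the exact return type may differ.
  std::cout << "benchmark library version: "
            << benchmark::GetBenchmarkVersion() << "\n";
  return 0;
}
```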
---------
Co-authored-by: dominic <510002+dmah42@users.noreply.github.com>
* Make json and csv output consistent.
Currently, the --benchmark_format=csv option does not output the correct value for the cv statistics. Also, the json output should not contain a time unit for the cv statistics.
* fix formatting
* undo json change
---------
Co-authored-by: dominic <510002+dmah42@users.noreply.github.com>
* Add possibility to ask for libbenchmark version number (#1004)
Add a header which holds the current major, minor, and
patch number of the library. The header is auto generated
by CMake.
* Do not generate unused functions (#1004)
* Add support for version number in bazel (#1004)
* Fix clang-format (#1004)
* Fix more clang format problems (#1004)
* Use git version feature of cmake to determine current lib version
* Rename version_config header to version
* Bake git version into bazel build
* Use same input config header as in cmake for version.h
* Adapt the releasing.md to include versioning in bazel
* add multiple OSes to bazel workflow
* correct indent
* only set copts when they're supported by the OS
* os check should work
* pull out cxx03_test for per-platform stuff
* attempt to fix windows test output
* Statistics: add support for percentage unit in addition to time
I think the `stddev` statistic is useful, but confusing.
What does it mean if a `stddev` of `1ms` is reported?
Is that good or bad? If the `median` is `1s`,
then that means that the measurements are pretty noise-less.
And what if a `stddev` of `100ms` is reported?
If the `median` is `1s` - awful; if the `median` is `10s` - good.
And hurray, there is just the statistic that we need:
https://en.wikipedia.org/wiki/Coefficient_of_variation
But, naturally, that produces a value in percent,
while the statistics are currently hardcoded to produce time.
So this refactors things a bit, and allows a percentage unit for statistics.
I'm not sure whether `benchmark` would be okay
with adding this `RSD` statistic by default,
but regardless, that is a separate patch.
Refs. https://github.com/google/benchmark/issues/1146
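As a sketch of what the percentage unit enables (the `CV` function and benchmark below are illustrative; the exact `ComputeStatistics`/`StatisticUnit` spelling is assumed from the library's statistics API):

```cpp
#include <benchmark/benchmark.h>
#include <cmath>
#include <numeric>
#include <vector>

// Coefficient of variation: stddev divided by mean, reported as a percentage.
static double CV(const std::vector<double>& v) {
  const double mean = std::accumulate(v.begin(), v.end(), 0.0) / v.size();
  double sq = 0.0;
  for (double x : v) sq += (x - mean) * (x - mean);
  const double stddev = std::sqrt(sq / v.size());
  return stddev / mean;  // unitless ratio; printed as a percentage
}

static void BM_Something(benchmark::State& state) {
  double x = 1.0;
  for (auto _ : state) {
    x *= 1.000001;
    benchmark::DoNotOptimize(x);
  }
}
BENCHMARK(BM_Something)
    ->Repetitions(10)
    ->ComputeStatistics("cv", CV, benchmark::StatisticUnit::kPercentage);
```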
* Address review notes
Much like it makes sense to enumerate all the families,
it makes sense to enumerate stuff within families.
Alternatively, we could have a global instance index,
but I'm not sure why that would be better.
This will be useful when the benchmarks are run not in order,
for the tools to sort the results properly.
It may be useful for those wishing to further post-process JSON results,
but it is mainly geared towards better support for run interleaving,
where results from the same family may not be close-by in the JSON.
While we won't be able to do much about that for the outputs,
the tools can and perhaps should reorder the results so that,
at least in their output, they are in proper order, not run order.
Note that this only counts the families that were filtered-in,
so if e.g. there were three families, and we filtered-out
the second one, the two families (which were first and third)
will have family indexes 0 and 1.
* Implement custom benchmark name
The benchmark's name can be changed using the Name() function
which internally uses SetName().
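For example (the benchmark below is illustrative):

```cpp
#include <benchmark/benchmark.h>
#include <cstddef>
#include <cstring>
#include <vector>

static void BM_memcpy(benchmark::State& state) {
  std::vector<char> src(static_cast<std::size_t>(state.range(0)), 'x');
  std::vector<char> dst(src.size());
  for (auto _ : state) {
    std::memcpy(dst.data(), src.data(), src.size());
    benchmark::DoNotOptimize(dst.data());
  }
}
// Without Name() this would be reported as "BM_memcpy/8" etc.;
// with it, the reported name becomes "memcpy/8".
BENCHMARK(BM_memcpy)->Name("memcpy")->Range(8, 8 << 10);
```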
* Update AUTHORS and CONTRIBUTORS
* Describe new feature in README
* Move new name function up
Fixes #1106
* JSONReporter: don't report on scaling if we didn't get it (#1005)
* JSONReporter: fix due to review (std::pair<bool, bool> -> enum)
* JSONReporter: scaling: fix the algo (due to review discussion)
* benchmark.h: revert to old-fashioned enums (C++03 compatibility); reporter_output_test: let's skip scaling
* timestamp: use rfc3339-formatted timestamps in output
Replace localized timestamps with machine-readable IETF RFC 3339 format
timestamps. This is an attempt to make the output timestamps easily
machine-readable. ISO8601 specifies standards for time interchange
formats. IETF RFC 3339: https://tools.ietf.org/html/rfc3339 defines a
subset of these for use in the internet. The general form for these
timestamps is:
YYYY-MM-DDTHH:mm:SS[+-]hh:mm
This replaces the localized time formats that are currently being used
in the benchmark output to prioritize interchangeability and
machine-readability.
This might break existing programs that rely on the particular date-time
format. It might also make times less human readable. RFC3339 was
intended to balance human readability and simplicity for machine
readability, but it is primarily intended as an internal representation.
* timers: remove utc string formatting
We only ever need local time printing. Remove the UTC printing
and consolidate the logic slightly.
* timers: manually create rfc3339 string
The C++ standard library does not output the time offset in RFC3339
format, it is missing the : between hours and minutes. VS does not
appear to support timezone information by default. To avoid adding too
much complexity to benchmark around timezone handling e.g. a full
date library like https://github.com/HowardHinnant/date, we fall back
to outputting GMT time with a -00:00 offset for those cases.
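A rough sketch of that approach, not the library's exact code: format the date/time and the `+hhmm` offset separately, then splice in the missing `:`.

```cpp
#include <ctime>
#include <string>

// Sketch only: build an RFC 3339-style local timestamp such as
// "2018-07-09T12:34:56+02:00". The real change additionally falls back to
// GMT with a "-00:00" offset where the offset cannot be obtained (e.g. VS).
std::string LocalDateTimeString() {
  std::time_t now = std::time(nullptr);
  std::tm local_tm;
  localtime_r(&now, &local_tm);  // reentrant form, non-Windows

  char datetime[32];  // "YYYY-MM-DDTHH:MM:SS"
  char offset[8];     // "+hhmm"
  std::strftime(datetime, sizeof(datetime), "%Y-%m-%dT%H:%M:%S", &local_tm);
  std::strftime(offset, sizeof(offset), "%z", &local_tm);

  // %z yields "+hhmm"; RFC 3339 wants "+hh:mm", so splice in the colon.
  std::string tz(offset);
  if (tz.size() == 5) tz.insert(3, ":");
  return std::string(datetime) + tz;
}
```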
* timers: use reentrant forms localtime_r & gmtime_r
For non-windows, use the reentrant form for the time conversion
functions.
* timers: cleanup
Use strtol instead of brittle manual character shuffling.
* timers: only call strftime twice.
Also size buffers to known maximum necessary size and name constants
more appropriately.
* timers: fix unused variable warning
* Update AUTHORS and CONTRIBUTORS
* Fix WSL self-test failures
Some of the benchmark self-tests expect and check for a particular
output format from the benchmark library. The numerical values must
not be infinity or not-a-number, or the test will report an error.
Some of the values are computed bytes-per-second or items-per-second
values, so these require the measured CPU time for the test to be
non-zero. But the loop that is being measured was empty, so the
measured CPU time for the loop was extremely small. On systems like
Windows Subsystem for Linux (WSL) the timer doesn't have enough
resolution to measure this, so the measured CPU time was zero.
This fix just makes sure that these tests have something within the
timing loop, so that the benchmark library will not decide that the
loop takes zero CPU time. This makes these tests more robust, and in
particular makes them pass on WSL.
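The shape of the fix, as a hedged sketch (benchmark name illustrative): give the timed loop a tiny amount of un-optimizable work so the measured CPU time cannot be zero.

```cpp
#include <benchmark/benchmark.h>
#include <cstdint>

static void BM_NotQuiteEmpty(benchmark::State& state) {
  int64_t items = 0;
  for (auto _ : state) {
    // Previously the body was empty; on coarse timers (e.g. under WSL)
    // the measured CPU time could be exactly zero, which makes derived
    // items-per-second / bytes-per-second values infinite or NaN.
    benchmark::DoNotOptimize(++items);
  }
  state.SetItemsProcessed(items);
}
BENCHMARK(BM_NotQuiteEmpty);
```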
* escape special chars in csv and json output.
- escape \b, \f, \n, \r, \t, \\, and " from strings before dumping
them to json or csv.
- also faithfully reproduce the sign of nan in json.
this fixes github issue #745.
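A minimal sketch of the JSON-side escaping; the helper name is hypothetical and not the function added by this change:

```cpp
#include <string>

// Escape control characters, backslashes and quotes so the string can be
// embedded in a JSON document. Sketch only; the real change also handles
// CSV quoting and the sign of NaN separately.
std::string EscapeForJSON(const std::string& in) {
  std::string out;
  for (char c : in) {
    switch (c) {
      case '\b': out += "\\b"; break;
      case '\f': out += "\\f"; break;
      case '\n': out += "\\n"; break;
      case '\r': out += "\\r"; break;
      case '\t': out += "\\t"; break;
      case '\\': out += "\\\\"; break;
      case '"':  out += "\\\""; break;
      default:   out += c; break;
    }
  }
  return out;
}
```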
* functionalize.
* split string escape functions between csv and json
* Update src/csv_reporter.cc
Co-Authored-By: tesch1 <tesch1@gmail.com>
* Update src/json_reporter.cc
Co-Authored-By: tesch1 <tesch1@gmail.com>
* [JSON] add threads and repetitions to the json output, for better ide…
[Tests] explicitly check for thread == 1
[Tests] specifically mark all repetition checks
[JSON] add repetition_index reporting, but only for non-aggregates (i…
* [Formatting] Be very, very explicit about pointer alignment so clang-format can not put pointers/references on the wrong side of arguments.
[Benchmark::Run] Make sure to use explanatory sentinel variable rather than a magic number.
* Do not pass redundant information
Some benchmarks are particularly sensitive and they run in less than
a nanosecond. In order for the console reporter to provide meaningful
output for such benchmarks it needs to be able to display the times
using more resolution than a single nanosecond.
This patch changes the console reporter to print at least three
significant digits for all results.
Unlike the initial attempt, this patch does not align the decimal point.
* Adding Host Name and test
* Addressing Review Comments
* Adding Test for JSON Reporter
* Adding HOST_NAME_MAX for MacOS systems
* Adding Explanation for MacOS HOST_NAME_MAX Addition
* Addressing Peer Review Comments
* Adding codecvt in windows header guard
* Changing name to SystemInfo and adding an empty message in case the host name fetch fails
* Adding Comment on Struct SystemInfo
It is incorrect to say that an aggregate is computed over
a run's iterations, because those iterations already got averaged.
Similarly, if there are N repetitions with 1 iteration each,
an aggregate will be computed over N measurements, not 1.
Thus it is best to simply use the count of separate reports.
Fixes #586.
As discussed with @dominichamon and @dbabokin, sugar is nice.
Well, maybe not for the health, but it's sweet.
Alright, enough puns.
Special care needs to be taken not to break the csv reporter. UGH.
We end up shedding some code over this.
We no longer specially pretty-print them, they are printed just like the rest of custom counters.
Fixes #627.
This is related to @BaaMeow's work in https://github.com/google/benchmark/pull/616 but is not based on it.
Two new fields are tracked, and dumped into JSON:
* If the run is an aggregate, the aggregate's name is stored.
It can be RMS, BigO, mean, median, stddev, or any custom stat name.
* The aggregate-name-less run name is additionally stored.
I.e. not the name of the benchmark function, but the actual
run name, just without the 'aggregate name' suffix.
This way one can group/filter all the runs,
and filter by the particular aggregate type.
I *might* need this for further tooling improvement.
Or maybe not.
But this is certainly worthwhile for custom tooling.
This is *only* exposed in the JSON. Not in CSV, which is deprecated.
This is *only* supposed to track these two states.
An additional field could later track which aggregate this is,
specifically (statistic name, rms, bigo, ...)
The motivation is that we already have ReportAggregatesOnly,
but it affects all the output, both the display
and the reporters (json files), which isn't ideal.
It would be very useful to have a 'display aggregates only' option,
both in the library's console reporter and in the python tooling.
This will be especially needed for the 'store separate iterations' feature.
Found while working on reproducible builds for openSUSE.
To reproduce there:
osc checkout openSUSE:Factory/benchmark && cd $_
osc build -j1 --vm-type=kvm
High system load can skew benchmark results. By including system load averages
in the library's output, we help users identify a potential issue in the
quality of their measurements, and thus assist them in producing better (more
reproducible) results.
I got the idea for this from Brendan Gregg's checklist for benchmark accuracy
(http://www.brendangregg.com/blog/2018-06-30/benchmarking-checklist.html).
* format all documents according to contributor guidelines and specifications
use clang-format on/off to stop formatting when it makes excessively poor decisions
* format all tests as well, and mark blocks which change too much
* Print the executable name as part of the context.
A common use case of the library is to run two different
versions of a benchmark to compare them. In my experience
this often means compiling a benchmark twice, renaming
one of the executables, and then running the executables
back-to-back. In this case the name of the executable
is important contextual information. Unfortunately the
benchmark does not report this information.
This patch adds the executable name to the context reported
by the benchmark.
* attempt to fix tests on Windows
* attempt to fix tests on Windows
* Improve CPU Cache info reporting -- Add Windows support.
This patch does a couple of things regarding CPU cache reporting.
First, it adds an implementation on Windows. Second, it fixes
the JSONReporter to correctly (and actually) output the CPU
configuration information.
And finally, third, it detects and reports the number of
physical CPUs that share the same cache.
Recently the library added a new ranged-for variant of the KeepRunning
loop that is much faster. For this reason it should be preferred in all
new code.
Because a library, its documentation, and its tests should all embody
the best practices of using the library, this patch changes all but a
few usages of KeepRunning() into for (auto _ : state).
The remaining usages in the tests and documentation persist only
to document and test behavior that is different between the two formulations.
Also note that because the range-for loop requires C++11, the KeepRunning
variant has not been deprecated at this time.
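The two formulations side by side (benchmark names illustrative):

```cpp
#include <benchmark/benchmark.h>
#include <string>

// Older style: still supported, but with slower per-iteration bookkeeping.
static void BM_OldStyle(benchmark::State& state) {
  while (state.KeepRunning()) {
    std::string s("hello");
    benchmark::DoNotOptimize(s);
  }
}
BENCHMARK(BM_OldStyle);

// Preferred style (C++11): the ranged-for loop over State.
static void BM_NewStyle(benchmark::State& state) {
  for (auto _ : state) {
    std::string s("hello");
    benchmark::DoNotOptimize(s);
  }
}
BENCHMARK(BM_NewStyle);
```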
* Drop Stat1, refactor statistics to be user-providable, add median.
My main goal was to add a median statistic. Since Stat1
calculated the stats incrementally, and did not store
the values themselves, that was not possible. Thus,
I have replaced Stat1 with a simple std::vector<double>
containing all the values.
Then, I refactored the current mean/stdev to be a
function that is provided with the values vector and
returns the statistic. While there, it seemed to make
sense to deduplicate the code by storing all the
statistics functions in a map, and then simply iterating
over it. And the interface to add new statistics is
intentionally exposed, so they may be added easily.
The notable change is that Iterations are no longer
displayed as 0 for stdev. It could be changed, but
I'm not sure how to nicely fit that into the API.
Similarly, this dance about sometimes (for some fields,
for some statistics) dividing by run.iterations, and
then multiplying the calculated statistic back is also
dropped; if you do the math, I fail to see why
it was needed there in the first place.
Since that was the only use of stat.h, it is removed.
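With the statistics now user-providable, adding another one is just a matter of passing a function over the vector of per-repetition results; a sketch (the `max` statistic is illustrative, not part of this change):

```cpp
#include <benchmark/benchmark.h>
#include <algorithm>
#include <vector>

// A user-provided statistic: the maximum over all repetition results.
static double Max(const std::vector<double>& v) {
  return *std::max_element(v.begin(), v.end());
}

static void BM_Work(benchmark::State& state) {
  double x = 1.0;
  for (auto _ : state) {
    x *= 1.000001;
    benchmark::DoNotOptimize(x);
  }
}
// mean/median/stddev are computed by default; "max" is added on top.
BENCHMARK(BM_Work)->Repetitions(8)->ComputeStatistics("max", Max);
```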
* complexity.h: attempt to fix MSVC build
* Update README.md
* Store statistics to compute in a vector, ensures ordering.
* Add a bit more tests for repetitions.
* Partially address review notes.
* Fix gcc build: drop extra ';'
clang, why didn't you warn me?
* Address review comments.
* double() -> 0.0
* early return
* Json reporter: passthrough fp, don't cast it to int; adjust tooling
Json output format is generally meant for further processing
using some automated tools. Thus, it makes sense not to
intentionally limit the precision of the values contained
in the report.
As can be seen, FormatKV() for doubles used the %.2f format,
which was meant to preserve at least some of the precision.
However, before that function is ever called, the doubles
were already cast to integers via RoundDouble()...
This is also the case for the console reporter, where it makes
sense because the screen space is limited; the CSV reporter,
however, does output some decimal digits.
Thus I can only conclude that the loss of precision
was not really considered, so I have decided to adjust the
code of the json reporter to output the full fp precision.
There can be several reasons why that is the right thing
to do: the bigger the time_unit used, the greater the
precision loss, so I'd say any sort of further processing
(like e.g. tools/compare_bench.py does) is best done
on the values with the most precision.
Also, that cast skewed the data away from zero, which
I think may or may not result in false positives/negatives
in the output of tools/compare_bench.py.
* Json reporter: FormatKV(double): address review note
* tools/gbench/report.py: skip benchmarks with different time units
While it may be useful to teach it to operate on
measurements with different time units, which is now
possible since floats are stored rather than integers,
for now at least doing such sanity-checking
is better than providing misinformation.
* Added user counters, and moved the use of bytes_processed and items_processed to the user counter logic.
Each counter is a string-value pair. The counters were
made available through the State class. Two helper virtual
methods were added to the Fixture class to allow convenient
initialization and termination of the counters: InitState()
and TerminateState(). The reporting of the counters is buggy
and is still a work in progress, to be completed in the next commits.
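The counter interface as it eventually settled, i.e. map-like access through State (counter names illustrative):

```cpp
#include <benchmark/benchmark.h>
#include <cstdint>

static void BM_Counters(benchmark::State& state) {
  int64_t items = 0;
  for (auto _ : state) {
    benchmark::DoNotOptimize(++items);
  }
  // Plain counter: reported as-is.
  state.counters["Items"] = static_cast<double>(items);
  // Rate counter: divided by the measured time when reported.
  state.counters["ItemRate"] =
      benchmark::Counter(static_cast<double>(items), benchmark::Counter::kIsRate);
}
BENCHMARK(BM_Counters);
```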
* fix bad removal of BenchmarkCounters code during the merge
* add myself to AUTHORS/CONTRIBUTORS
* fix printing to std::cout in csv_reporter
* bytes_per_second and items_per_second are now in the UserCounters class
* add user counters to json reporter
* moving bytes_per_second and items_per_second to their old state
* console reporter dealing ok with user counters.
* update unit tests for user counters
* CSVReporter now prints user counters too.
* cleanup user counters
* reverted changes to cmake files which should have gone into later commits
* fixture_test: fix gcc 4.6 compilation
* remove ctor with default argument
see https://github.com/google/benchmark/pull/262#discussion_r72298055
* use (auto-defined) BENCHMARK_HAS_CXX11 instead of BENCHMARK_INITLIST.
https://github.com/google/benchmark/pull/262#discussion_r72298310
* leanify counters API
Discussions:
API complexity: https://github.com/google/benchmark/pull/262#discussion_r72298731
remove std::string dependency (WIP): https://github.com/google/benchmark/pull/262#discussion_r72298142
spacing & alignment: https://github.com/google/benchmark/pull/262#discussion_r72298422
* remove std::string dependency on public API - changed counter name storage to char*
* Counter ctor: use overloads instead of default arguments
discussion:
https://github.com/google/benchmark/pull/262#discussion_r72298055
* Use raw pointers to remove dependency on std::vector from public API .
For more info, see discussion at https://github.com/google/benchmark/pull/262#discussion_r72319678 .
* Move counter implementation from benchmark.cc to counter.cc.
See discussion: https://github.com/google/benchmark/pull/262#discussion_r72298980 .
* Remove unused (commented-out) code.
* Moved thread counters to ThreadStats.
* Counters: fixed copy and move constructors.
* Counter: use an inplace buffer for small names.
* benchmark_test: move counters test out of CXX11 preprocessor conditional.
* Counter: fix VS2013 compilation error in char[] initialization.
* Fix typo.
* Expose counters from State.
See discussion: https://github.com/google/benchmark/pull/262#issuecomment-237156951
* Changed counters interface to map-like.
* Fix printing of user counters in ConsoleReporter.
* Applied clang-format to counter.cc and console_reporter.cc.
Command was `clang-format -style=Google -i counter.cc console_reporter.cc`
I also applied to all other files, but the changes were very
far-reaching so I rolled those back.
* Rename Counter::Flags_e to Counter::Flags
* Fix use of reserved names in Counter and BenchmarkCounters.
* Counter: Fix move ctor bug + change order of members.
* Fixture: remove tentative methods InitState() and TerminateState().
* Update fixture_test to the new Fixture interface.
* BenchmarkCounters: fixed a bug in the move ctor. Remove call to CHECK_LT().
CHECK_LT() was making the size_t lookup take ~double the time of a string lookup!
* BenchmarkCounters: add option to not print zero counters (defaults to false).
* Add test to compare counter storage and access with std::map.
* README: clarify cost of counter access modes.
* move counter access test to an own test.
* BenchmarkCounters: add move Insert()
* Counters access test: add accelerated lookup by name.
* Fix old range syntax.
* Fix missing include of cstdio
* Fix Visual Studio warning
* VS2013 and lower: fix use of snprintf()
* VS2013: fix use of char[] as a member of std::pair<>.
* change counter storage to std::map
* Remove skipZeroCounters logic
* Fix VS compilation error.
* Implemented request changes to PR #262.
* PR #262: More requested changes.
* README: cleanup counter text.
* PR #262: remove clang-format changes for preexisting code
* Complexity+Counters: fix counter flags which were being ignored.
* Document all Counter::Flag members
* fixed loss of counter values
* ConsoleReporter: remove tabular printing of user counters.
* ConsoleReporter: header printing should not be contingent on user counter names.
* Minor white space and alignment fixes.
* cxx03_test + counters: reuse the BM_empty() function.
* user counters: add note to README on how counters are gathered across threads
* Test bytes_per_second and items_per_second.
* Test SetLabel.
* Reformat.
* Make State::error_occurred_ private.
* Fix tests with floats.
* Merge private blocks
* refactor
* Move default substitutions into library
* Move default substitutions to the *right* place in the library
* Fix init order issues that caused test failures
* improve diagnostics
* add missing include
* general cleanup
* Address review comments