Commit graph

11 commits

Author SHA1 Message Date
Roman Lebedev 886585a3b7 [RFC] Tools: compare-bench.py: print change% with two decimal digits (#440)
* Tools: compare-bench.py: print change% with two decimal digits

Here is a comparison of before vs. after:
```diff
-Benchmark                      Time           CPU      Time Old      Time New       CPU Old       CPU New
----------------------------------------------------------------------------------------------------------
-BM_SameTimes                  +0.00         +0.00            10            10            10            10
-BM_2xFaster                   -0.50         -0.50            50            25            50            25
-BM_2xSlower                   +1.00         +1.00            50           100            50           100
-BM_1PercentFaster             -0.01         -0.01           100            99           100            99
-BM_1PercentSlower             +0.01         +0.01           100           101           100           101
-BM_10PercentFaster            -0.10         -0.10           100            90           100            90
-BM_10PercentSlower            +0.10         +0.10           100           110           100           110
-BM_100xSlower                +99.00        +99.00           100         10000           100         10000
-BM_100xFaster                 -0.99         -0.99         10000           100         10000           100
-BM_10PercentCPUToTime         +0.10         -0.10           100           110           100            90
+Benchmark                        Time             CPU      Time Old      Time New       CPU Old       CPU New
+-------------------------------------------------------------------------------------------------------------
+BM_SameTimes                  +0.0000         +0.0000            10            10            10            10
+BM_2xFaster                   -0.5000         -0.5000            50            25            50            25
+BM_2xSlower                   +1.0000         +1.0000            50           100            50           100
+BM_1PercentFaster             -0.0100         -0.0100           100            99           100            99
+BM_1PercentSlower             +0.0100         +0.0100           100           101           100           101
+BM_10PercentFaster            -0.1000         -0.1000           100            90           100            90
+BM_10PercentSlower            +0.1000         +0.1000           100           110           100           110
+BM_100xSlower                +99.0000        +99.0000           100         10000           100         10000
+BM_100xFaster                 -0.9900         -0.9900         10000           100         10000           100
+BM_10PercentCPUToTime         +0.1000         -0.1000           100           110           100            90
+BM_ThirdFaster                -0.3333         -0.3333           100            67           100            67

```

So the first ("Time") column stays exactly where it was, but gains
two more decimal digits. The decimal point in the second ("CPU")
column shifts right by those two positions, and the rest of the row
is unmodified, merely shifted right by the resulting four positions.

As for the reasoning, I guess it is more or less the same as
with #426. Sometimes, sadly, microbenchmarking is not applicable.
In those cases, the more precise the change report is, the better.

The current formatting prints not so much percentages as
fractions, I'd say. That is most useful for huge changes,
well over 100%. But that is not always the case, especially
outside of microbenchmarks. Then, even though the change
may be real (good or bad), it is small (<0.5% or so),
rounding kicks in, and it is no longer possible to tell.
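
To make the rounding point concrete, here is a small illustration (not the
report.py code itself; format_change and the digit counts are hypothetical)
of how two extra fractional digits keep a sub-percent change visible:
```python
def format_change(old: float, new: float, digits: int) -> str:
    """Format the relative change (new - old) / old with an explicit sign."""
    change = (new - old) / old
    return '{:+.{digits}f}'.format(change, digits=digits)

# A 0.3% regression is rounded away with two digits, but visible with four:
print(format_change(100.0, 100.3, 2))  # +0.00   -- indistinguishable from no change
print(format_change(100.0, 100.3, 4))  # +0.0030 -- the regression is visible
```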

I do acknowledge that this change does not fix that problem. Of
course, confidence intervals and such would be better, and they
would probably fix the problem. But I think this is good as-is
too, because now you see two more fractional digits of the change!

The obvious downside is that the output is now even wider.

* Revisit tests, more closely documenting the current behavior.
2017-08-28 16:12:18 -07:00
Roman Lebedev c7192c8a9a compare_bench.py: fixup benchmark_options. (#435)
Commit 2373382284
reworked the parsing and introduced a regression
in the handling of the optional options that
should be passed through to both of the benchmarks.

Now, unless the *first* optional argument started with
'-', the script would just complain about that argument:
	Unrecognized positional argument arguments: '['q']'
which is wrong. However, if some dummy argument like '-q' was
passed first, it would then happily pass them all through...

This commit fixes the benchmark_options behavior by
restoring the original pass-through behavior for all
the optional positional arguments.
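
A minimal sketch of the restored behavior, assuming an argparse-based parser
as in compare_bench.py (the argument names here are illustrative):
nargs=argparse.REMAINDER forwards every trailing argument verbatim, whether
or not it starts with '-'.
```python
import argparse

parser = argparse.ArgumentParser(description='compare two benchmarks')
parser.add_argument('test_old', help='old benchmark executable or JSON file')
parser.add_argument('test_new', help='new benchmark executable or JSON file')
# REMAINDER swallows all remaining arguments, including flag-like ones,
# so both 'q' and '-q' are passed through to the benchmarks unchanged.
parser.add_argument('benchmark_options', nargs=argparse.REMAINDER,
                    help='options passed to both benchmark executables')

args = parser.parse_args(['old_bench', 'new_bench',
                          'q', '--benchmark_filter=BM_.*'])
print(args.benchmark_options)  # ['q', '--benchmark_filter=BM_.*']
```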
2017-08-18 10:55:27 -07:00
Roman Lebedev d474450b89 Tooling: generate_difference_report(): show old/new for both values (#427)
While the percentages are displayed for both of the columns,
the old/new values are only displayed for the second column,
the CPU time. And that column is not even spelled out.

In cases where b->UseRealTime(); is used, this is at the
very least highly confusing. So why don't we just
display the old/new values for both columns?
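
A rough sketch of the reworked row, with hypothetical dict keys and column
widths (the real implementation is generate_difference_report() in
tools/gbench/report.py): old/new values are now printed for both the Time
and the CPU columns.
```python
def change(old: float, new: float) -> float:
    return (new - old) / old

def difference_row(name, old, new):
    return '{:<25s}{:+8.2f}{:+14.2f}{:14.0f}{:14.0f}{:14.0f}{:14.0f}'.format(
        name,
        change(old['real_time'], new['real_time']),  # Time change
        change(old['cpu_time'], new['cpu_time']),    # CPU change
        old['real_time'], new['real_time'],          # Time Old / Time New
        old['cpu_time'], new['cpu_time'])            # CPU Old  / CPU New

print(difference_row('BM_2xFaster',
                     {'real_time': 50, 'cpu_time': 50},
                     {'real_time': 25, 'cpu_time': 25}))
```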

Fixes #425
2017-07-25 09:09:26 -07:00
Roman Lebedev b9be142d1e Json reporter: don't cast floating-point to int; adjust tooling (#426)
* Json reporter: passthrough fp, don't cast it to int; adjust tooling

Json output format is generally meant for further processing
using some automated tools. Thus, it makes sense not to
intentionally limit the precision of the values contained
in the report.

As can be seen, FormatKV() for doubles used the %.2f format,
which was meant to preserve at least some of the precision.
However, before that function is ever called, the doubles
have already been cast to integers via RoundDouble()...

This is also the case for the console reporter, where it makes
sense because screen space is limited; the CSV reporter,
however, does output some decimal digits.

Thus I can only conclude that the loss of precision
was not really considered, so I have decided to adjust the
code of the JSON reporter to output the full floating-point precision.

There are several reasons why that is the right thing
to do. The bigger the time_unit used, the greater the
precision loss, so I'd say any sort of further processing
(like, e.g., what tools/compare_bench.py does) is best done
on the values with the most precision.
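
To see why, here is a small illustration (not the library code; the numbers
are made up) of how the int cast interacts with the time unit:
```python
measurement_ns = 1234.5  # a hypothetical measurement, in nanoseconds

for unit, scale in [('ns', 1.0), ('us', 1e-3), ('ms', 1e-6)]:
    value = measurement_ns * scale
    print(unit, value, int(value))
# ns 1234.5    1234  -- small relative error
# us 1.2345    1     -- roughly 19% error
# ms 0.0012345 0     -- the value disappears entirely
```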

Also, that cast skewed the data away from zero, which
I think may or may not result in false positives/negatives
in the output of tools/compare_bench.py.

* Json reporter: FormatKV(double): address review note

* tools/gbench/report.py: skip benchmarks with different time units

It may be useful to teach it to operate on
measurements with different time units, which is now
possible since floats are stored rather than integers;
but for now, at least doing such a sanity check
is better than providing misinformation.
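
A minimal sketch of the sanity check, assuming benchmark entries carry a
'time_unit' key as in the JSON output (the helper name is illustrative):
pairs whose units differ are skipped rather than compared as if they were
commensurable.
```python
def comparable(bench_old: dict, bench_new: dict) -> bool:
    """Only benchmarks measured in the same time unit can be compared."""
    return bench_old['time_unit'] == bench_new['time_unit']

old = {'name': 'BM_Foo', 'real_time': 10, 'time_unit': 'ns'}
new = {'name': 'BM_Foo', 'real_time': 10, 'time_unit': 'ms'}
if not comparable(old, new):
    print('Skipping BM_Foo: time units differ (ns vs. ms)')
```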
2017-07-24 16:13:55 -07:00
Felix Duvallet feb69ae710 Ensure all the necessary keys are present before parsing JSON data (#380)
This prevents errors when additional non-timing data are present in
the JSON that is loaded, for example when complexity data has been
computed (see #379).
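
A minimal sketch of such a guard, assuming the timing keys the comparison
tooling relies on (the exact key set is an assumption here): entries without
them, such as complexity results, are skipped instead of raising KeyError.
```python
REQUIRED_KEYS = ('real_time', 'cpu_time', 'time_unit')

def has_timing_data(bench: dict) -> bool:
    """True only if every timing key needed for comparison is present."""
    return all(key in bench for key in REQUIRED_KEYS)

benchmarks = [
    {'name': 'BM_Foo', 'real_time': 10, 'cpu_time': 10, 'time_unit': 'ns'},
    {'name': 'BM_Foo_BigO', 'cpu_coefficient': 3.1},  # complexity entry
]
timed = [b for b in benchmarks if has_timing_data(b)]
print([b['name'] for b in timed])  # ['BM_Foo']
```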
2017-05-02 08:19:35 -07:00
Ray Glover 17298b2dc0 Python 2/3 compatibility (#361)
* [tools] python 2/3 support

* update authors/contributors
2017-03-29 03:39:18 -07:00
Pavel Campr e381139474 fix compare script - output formatting - correctly align numbers >9999 (#322)
* fix compare script - output formatting - correctly align numbers >9999

* fix failing test (report.py); fix compare script output formatting (large numbers alignment)
2016-12-09 05:24:31 -07:00
Eric Fiselier a8aa40c596 Fix obvious typo in string formatting 2016-11-19 05:17:52 -07:00
Eric Fiselier 2373382284 Rewrite compare_bench.py argument parsing.
This patch cleans up a number of issues with how compare_bench.py handled
the command line arguments.

* Use the 'argparse' Python module instead of hand-rolled parsing. This gives
  better usage messages.

* Add diagnostics for certain --benchmark flags that cannot or should not
  be used with compare_bench.py (e.g. --benchmark_out_format=csv); see the
  sketch below.

* Don't override the user-specified --benchmark_out flag if it's provided.

In the future I would like the user to be able to capture both benchmark output
files, but this change is big enough for now.
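
A hedged sketch of the kind of diagnostic described above, assuming
pass-through flags are inspected before being forwarded (the function and
the exact message are hypothetical): the script produces JSON output itself,
so a user-supplied CSV output format cannot work.
```python
import sys

def check_passthrough_flags(flags):
    """Reject --benchmark flags that would break the comparison."""
    for flag in flags:
        if flag.startswith('--benchmark_out_format='):
            if flag != '--benchmark_out_format=json':
                sys.exit('ERROR: compare_bench.py requires JSON output')

check_passthrough_flags(['--benchmark_filter=BM_.*'])      # accepted
# check_passthrough_flags(['--benchmark_out_format=csv'])  # exits with error
```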

This fixes issue #313.
2016-11-18 15:42:02 -07:00
Eric cba945e37d Make PauseTiming() and ResumeTiming() per thread. (#286)
* Change to using per-thread timers

* fix bad assertions

* fix copy paste error on windows

* Fix thread safety annotations

* Make null-log thread safe

* remove remaining globals

* use chrono for walltime since it is thread safe

* consolidate timer functions

* Add missing ctime include

* Rename to be consistent with Google style

* Format patch using clang-format

* cleanup -Wthread-safety configuration

* Don't trust _POSIX_FEATURE macros because OS X lies.

* Fix OS X thread timings

* attempt to fix mingw build

* Attempt to make mingw work again

* Revert old mingw workaround

* improve diagnostics

* Drastically improve OS X measurements

* Use average real time instead of max
2016-09-02 21:34:34 -06:00
Eric 5eac66249c Add a "compare_bench.py" tooling script. (#266)
This patch adds the compare_bench.py utility, which can be used to compare the results of two benchmarks.
The program is invoked like:

$ compare_bench.py <old-benchmark> <new-benchmark> [benchmark options]...
Where <old-benchmark> and <new-benchmark> each specify either a benchmark executable or a JSON output file. The type of the input file is detected automatically. If a benchmark executable is specified, the benchmark is run to obtain the results; otherwise the results are simply loaded from the output file.
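
A plausible sketch of that automatic detection (names are illustrative, not
the actual helpers): treat the argument as a finished JSON report if it
parses as JSON; otherwise run it as a benchmark executable with JSON output.
```python
import json
import os
import subprocess

def load_benchmark_results(path, benchmark_options):
    """Load results from a JSON file, or run the executable to produce them."""
    try:
        with open(path) as f:
            return json.load(f)  # already a JSON output file
    except (ValueError, UnicodeDecodeError):
        pass  # not JSON: assume it is a benchmark executable
    out = subprocess.check_output(
        [os.path.abspath(path), '--benchmark_format=json'] + benchmark_options)
    return json.loads(out)
```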
2016-08-09 12:33:57 -06:00