Commit graph

5 commits

Author SHA1 Message Date
Pavel Campr e381139474 fix compare script - output formatting - correctly align numbers >9999 (#322)
* fix compare script - output formatting - correctly align numbers >9999

* fix failing test (report.py); fix compare script output formatting (large numbers alignment)
2016-12-09 05:24:31 -07:00
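The alignment problem presumably comes from fixed-width numeric fields that shift once a value exceeds four digits. A minimal Python sketch of the general idea only; the function name, column layout, and widths below are illustrative, not the script's actual format strings:

    # Right-align numeric columns with widths generous enough that values
    # larger than 9999 no longer push the following columns out of line.
    def format_row(name, old_time, new_time, change):
        return "{:<30s}{:>12.0f}{:>12.0f}{:>+10.2f}".format(
            name, old_time, new_time, change)

    print(format_row("BM_Example/512", 123456.0, 9876.0, -0.92))
    print(format_row("BM_Example/8", 512.0, 498.0, -0.03))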
Eric Fiselier a8aa40c596 Fix obvious typo in string formatting 2016-11-19 05:17:52 -07:00
Eric Fiselier 2373382284 Rewrite compare_bench.py argument parsing.
This patch cleans up a number of issues with how compare_bench.py handled
its command-line arguments.

* Use the 'argparse' Python module instead of hand-rolled parsing. This gives
  better usage messages.

* Add diagnostics for certain --benchmark flags that cannot or should not
  be used with compare_bench.py (e.g. --benchmark_out_format=csv).

* Don't override the user-specified --benchmark_out flag if it's provided.

In the future I would like the user to be able to capture both benchmark output
files, but this change is big enough for now.

This fixes issue #313.
2016-11-18 15:42:02 -07:00
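A minimal sketch of the parsing approach this commit describes, under assumed structure rather than the script's actual code (parse_args, test1, and test2 are hypothetical names): argparse handles the positional benchmarks and the usage message, unrecognized flags are passed through to the benchmark binaries, an unsupported output format is rejected with a diagnostic, and a user-supplied --benchmark_out is detected so it is not overridden.

    import argparse
    import sys

    def parse_args(argv):
        parser = argparse.ArgumentParser(
            description="Compare the results of two benchmarks")
        parser.add_argument("test1",
                            help="old benchmark executable or JSON output file")
        parser.add_argument("test2",
                            help="new benchmark executable or JSON output file")
        # Flags argparse does not recognize are passed through to the benchmarks.
        args, benchmark_options = parser.parse_known_args(argv)

        for opt in benchmark_options:
            # The comparison code can only consume JSON, so reject other formats.
            if (opt.startswith("--benchmark_out_format=")
                    and opt != "--benchmark_out_format=json"):
                parser.error("only --benchmark_out_format=json is supported")

        # Remember whether the user supplied their own --benchmark_out so the
        # script does not silently replace it with a temporary file.
        user_owns_output = any(
            opt.startswith("--benchmark_out=") for opt in benchmark_options)
        return args, benchmark_options, user_owns_output

    if __name__ == "__main__":
        args, opts, user_owns_output = parse_args(sys.argv[1:])
        print(args.test1, args.test2, opts, user_owns_output)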
Eric cba945e37d Make PauseTiming() and ResumeTiming() per thread. (#286)
* Change to using per-thread timers

* fix bad assertions

* fix copy-paste error on Windows

* Fix thread safety annotations

* Make null-log thread safe

* remove remaining globals

* use chrono for walltime since it is thread safe

* consolidate timer functions

* Add missing ctime include

* Rename to be consistent with Google style

* Format patch using clang-format

* clean up -Wthread-safety configuration

* Don't trust _POSIX_FEATURE macros because OS X lies.

* Fix OS X thread timings

* attempt to fix the MinGW build

* Attempt to make MinGW work again

* Revert old MinGW workaround

* improve diagnostics

* Drastically improve OS X measurements

* Use average real time instead of max
2016-09-02 21:34:34 -06:00
Eric 5eac66249c Add a "compare_bench.py" tooling script. (#266)
This patch adds the compare_bench.py utility, which can be used to compare the results of two benchmark runs.
The program is invoked as:

$ compare_bench.py <old-benchmark> <new-benchmark> [benchmark options]...
Here <old-benchmark> and <new-benchmark> each specify either a benchmark executable or a JSON output file; the type of the input file is detected automatically. If a benchmark executable is specified, the benchmark is run to obtain the results. Otherwise, the results are simply loaded from the output file.
2016-08-09 12:33:57 -06:00
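A rough sketch of the automatic input detection described above, using hypothetical helper names (is_json_file, load_results) rather than the script's actual implementation: input that parses as JSON is treated as a results file, and anything else is assumed to be a benchmark executable and run with JSON output requested.

    import json
    import subprocess

    def is_json_file(path):
        # A JSON results file parses cleanly; a binary executable will not.
        try:
            with open(path, "r") as f:
                json.load(f)
            return True
        except (ValueError, UnicodeDecodeError):
            return False

    def load_results(path, benchmark_options=()):
        """Return benchmark results from either a JSON file or an executable."""
        if is_json_file(path):
            with open(path, "r") as f:
                return json.load(f)
        # Assume a benchmark executable: run it and capture JSON on stdout.
        output = subprocess.check_output(
            [path, "--benchmark_format=json"] + list(benchmark_options))
        return json.loads(output)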