// benchmark/test/reporter_output_test.cc

#undef NDEBUG
#include <utility>
#include "benchmark/benchmark.h"
#include "output_test.h"
// ========================================================================= //
// ---------------------- Testing Prologue Output -------------------------- //
// ========================================================================= //
ADD_CASES(TC_ConsoleOut,
{{"^[-]+$", MR_Next},
{"^Benchmark %s Time %s CPU %s Iterations$", MR_Next},
{"^[-]+$", MR_Next}});
static int AddContextCases() {
AddCases(TC_ConsoleErr,
{
{"%int[-/]%int[-/]%int %int:%int:%int$", MR_Default},
{"Running .*/reporter_output_test(\\.exe)?$", MR_Next},
{"Run on \\(%int X %float MHz CPU s\\)", MR_Next},
});
AddCases(TC_JSONOut, {{"^\\{", MR_Default},
{"\"context\":", MR_Next},
{"\"date\": \"", MR_Next},
{"\"executable\": \".*/reporter_output_test(\\.exe)?\",", MR_Next},
{"\"num_cpus\": %int,$", MR_Next},
{"\"mhz_per_cpu\": %float,$", MR_Next},
{"\"cpu_scaling_enabled\": ", MR_Next},
{"\"caches\": \\[$", MR_Next}});
auto const& Caches = benchmark::CPUInfo::Get().caches;
if (!Caches.empty()) {
AddCases(TC_ConsoleErr, {{"CPU Caches:$", MR_Next}});
}
for (size_t I = 0; I < Caches.size(); ++I) {
std::string num_caches_str =
Caches[I].num_sharing != 0 ? " \\(x%int\\)$" : "$";
AddCases(
TC_ConsoleErr,
{{"L%int (Data|Instruction|Unified) %intK" + num_caches_str, MR_Next}});
AddCases(TC_JSONOut, {{"\\{$", MR_Next},
{"\"type\": \"", MR_Next},
{"\"level\": %int,$", MR_Next},
{"\"size\": %int,$", MR_Next},
{"\"num_sharing\": %int$", MR_Next},
{"}[,]{0,1}$", MR_Next}});
}
AddCases(TC_JSONOut, {{"],$"}});
return 0;
}
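// Assigning the result to a global forces AddContextCases() to run during
// static initialization, i.e. before main() executes the output tests.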
int dummy_register = AddContextCases();
ADD_CASES(TC_CSVOut, {{"%csv_header"}});
// ========================================================================= //
// ------------------------ Testing Basic Output --------------------------- //
// ========================================================================= //
void BM_basic(benchmark::State& state) {
for (auto _ : state) {
}
}
BENCHMARK(BM_basic);
ADD_CASES(TC_ConsoleOut, {{"^BM_basic %console_report$"}});
ADD_CASES(TC_JSONOut, {{"\"name\": \"BM_basic\",$"},
{"\"iterations\": %int,$", MR_Next},
{"\"real_time\": %float,$", MR_Next},
{"\"cpu_time\": %float,$", MR_Next},
{"\"time_unit\": \"ns\"$", MR_Next},
{"}", MR_Next}});
ADD_CASES(TC_CSVOut, {{"^\"BM_basic\",%csv_report$"}});
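// For reference, a console line matched by %console_report has roughly the
// shape below (values are illustrative only):
//   BM_basic        10 ns         10 ns     62500000
// i.e. benchmark name, real time, CPU time and iteration count.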
// ========================================================================= //
// ------------------------ Testing Bytes per Second Output ---------------- //
// ========================================================================= //
void BM_bytes_per_second(benchmark::State& state) {
for (auto _ : state) {
}
state.SetBytesProcessed(1);
}
BENCHMARK(BM_bytes_per_second);
ADD_CASES(TC_ConsoleOut,
{{"^BM_bytes_per_second %console_report +%float[kM]{0,1}B/s$"}});
ADD_CASES(TC_JSONOut, {{"\"name\": \"BM_bytes_per_second\",$"},
{"\"iterations\": %int,$", MR_Next},
{"\"real_time\": %float,$", MR_Next},
{"\"cpu_time\": %float,$", MR_Next},
{"\"time_unit\": \"ns\",$", MR_Next},
{"\"bytes_per_second\": %float$", MR_Next},
{"}", MR_Next}});
ADD_CASES(TC_CSVOut, {{"^\"BM_bytes_per_second\",%csv_bytes_report$"}});
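// The rate comes from SetBytesProcessed(): roughly bytes processed divided
// by elapsed CPU seconds, which the console reporter scales into k/M units,
// hence the optional [kM] suffix in the console pattern above.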
// ========================================================================= //
// ------------------------ Testing Items per Second Output ---------------- //
// ========================================================================= //
void BM_items_per_second(benchmark::State& state) {
for (auto _ : state) {
}
state.SetItemsProcessed(1);
}
BENCHMARK(BM_items_per_second);
ADD_CASES(TC_ConsoleOut,
{{"^BM_items_per_second %console_report +%float[kM]{0,1} items/s$"}});
ADD_CASES(TC_JSONOut, {{"\"name\": \"BM_items_per_second\",$"},
{"\"iterations\": %int,$", MR_Next},
{"\"real_time\": %float,$", MR_Next},
{"\"cpu_time\": %float,$", MR_Next},
{"\"time_unit\": \"ns\",$", MR_Next},
{"\"items_per_second\": %float$", MR_Next},
{"}", MR_Next}});
ADD_CASES(TC_CSVOut, {{"^\"BM_items_per_second\",%csv_items_report$"}});
// ========================================================================= //
// ------------------------ Testing Label Output --------------------------- //
// ========================================================================= //
void BM_label(benchmark::State& state) {
for (auto _ : state) {
}
state.SetLabel("some label");
}
BENCHMARK(BM_label);
ADD_CASES(TC_ConsoleOut, {{"^BM_label %console_report some label$"}});
ADD_CASES(TC_JSONOut, {{"\"name\": \"BM_label\",$"},
{"\"iterations\": %int,$", MR_Next},
{"\"real_time\": %float,$", MR_Next},
{"\"cpu_time\": %float,$", MR_Next},
{"\"time_unit\": \"ns\",$", MR_Next},
{"\"label\": \"some label\"$", MR_Next},
{"}", MR_Next}});
ADD_CASES(TC_CSVOut, {{"^\"BM_label\",%csv_label_report_begin\"some "
"label\"%csv_label_report_end$"}});
// ========================================================================= //
// ------------------------ Testing Error Output --------------------------- //
// ========================================================================= //
void BM_error(benchmark::State& state) {
state.SkipWithError("message");
for (auto _ : state) {
}
}
BENCHMARK(BM_error);
ADD_CASES(TC_ConsoleOut, {{"^BM_error[ ]+ERROR OCCURRED: 'message'$"}});
ADD_CASES(TC_JSONOut, {{"\"name\": \"BM_error\",$"},
{"\"error_occurred\": true,$", MR_Next},
{"\"error_message\": \"message\",$", MR_Next}});
ADD_CASES(TC_CSVOut, {{"^\"BM_error\",,,,,,,,true,\"message\"$"}});
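// SkipWithError() marks the run as failed, so the reporters emit the error
// flag and message instead of timing data (note the empty CSV fields above).
// A typical use, sketched here with an illustrative helper name, is to bail
// out when a precondition fails:
//   if (!SetUpResource()) {
//     state.SkipWithError("could not set up resource");
//     return;
//   }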
// ========================================================================= //
// ---------------------- Testing No Arg Name Output ----------------------- //
// ========================================================================= //
void BM_no_arg_name(benchmark::State& state) {
for (auto _ : state) {
}
}
BENCHMARK(BM_no_arg_name)->Arg(3);
ADD_CASES(TC_ConsoleOut, {{"^BM_no_arg_name/3 %console_report$"}});
ADD_CASES(TC_JSONOut, {{"\"name\": \"BM_no_arg_name/3\",$"}});
ADD_CASES(TC_CSVOut, {{"^\"BM_no_arg_name/3\",%csv_report$"}});
// ========================================================================= //
// ------------------------ Testing Arg Name Output ----------------------- //
// ========================================================================= //
void BM_arg_name(benchmark::State& state) {
for (auto _ : state) {
}
}
BENCHMARK(BM_arg_name)->ArgName("first")->Arg(3);
ADD_CASES(TC_ConsoleOut, {{"^BM_arg_name/first:3 %console_report$"}});
ADD_CASES(TC_JSONOut, {{"\"name\": \"BM_arg_name/first:3\",$"}});
ADD_CASES(TC_CSVOut, {{"^\"BM_arg_name/first:3\",%csv_report$"}});
// ========================================================================= //
// ------------------------ Testing Arg Names Output ----------------------- //
// ========================================================================= //
void BM_arg_names(benchmark::State& state) {
for (auto _ : state) {
}
}
BENCHMARK(BM_arg_names)->Args({2, 5, 4})->ArgNames({"first", "", "third"});
ADD_CASES(TC_ConsoleOut,
{{"^BM_arg_names/first:2/5/third:4 %console_report$"}});
ADD_CASES(TC_JSONOut, {{"\"name\": \"BM_arg_names/first:2/5/third:4\",$"}});
ADD_CASES(TC_CSVOut, {{"^\"BM_arg_names/first:2/5/third:4\",%csv_report$"}});
// ========================================================================= //
// ----------------------- Testing Complexity Output ----------------------- //
// ========================================================================= //
void BM_Complexity_O1(benchmark::State& state) {
for (auto _ : state) {
}
state.SetComplexityN(state.range(0));
}
BENCHMARK(BM_Complexity_O1)->Range(1, 1 << 18)->Complexity(benchmark::o1);
SET_SUBSTITUTIONS({{"%bigOStr", "[ ]* %float \\([0-9]+\\)"},
{"%RMS", "[ ]*[0-9]+ %"}});
ADD_CASES(TC_ConsoleOut, {{"^BM_Complexity_O1_BigO %bigOStr %bigOStr[ ]*$"},
{"^BM_Complexity_O1_RMS %RMS %RMS[ ]*$"}});
// ========================================================================= //
// ----------------------- Testing Aggregate Output ------------------------ //
// ========================================================================= //
// Test that non-aggregate data is printed by default
void BM_Repeat(benchmark::State& state) {
for (auto _ : state) {
}
}
// At least two repetitions are needed before any aggregate output is produced.
BENCHMARK(BM_Repeat)->Repetitions(2);
ADD_CASES(TC_ConsoleOut, {{"^BM_Repeat/repeats:2 %console_report$"},
{"^BM_Repeat/repeats:2 %console_report$"},
{"^BM_Repeat/repeats:2_mean %console_report$"},
{"^BM_Repeat/repeats:2_median %console_report$"},
{"^BM_Repeat/repeats:2_stddev %console_report$"}});
ADD_CASES(TC_JSONOut, {{"\"name\": \"BM_Repeat/repeats:2\",$"},
{"\"name\": \"BM_Repeat/repeats:2\",$"},
{"\"name\": \"BM_Repeat/repeats:2_mean\",$"},
{"\"name\": \"BM_Repeat/repeats:2_median\",$"},
{"\"name\": \"BM_Repeat/repeats:2_stddev\",$"}});
ADD_CASES(TC_CSVOut, {{"^\"BM_Repeat/repeats:2\",%csv_report$"},
{"^\"BM_Repeat/repeats:2\",%csv_report$"},
{"^\"BM_Repeat/repeats:2_mean\",%csv_report$"},
{"^\"BM_Repeat/repeats:2_median\",%csv_report$"},
{"^\"BM_Repeat/repeats:2_stddev\",%csv_report$"}});
// With only two repetitions the mean and median are identical, so repeat with three.
BENCHMARK(BM_Repeat)->Repetitions(3);
ADD_CASES(TC_ConsoleOut, {{"^BM_Repeat/repeats:3 %console_report$"},
{"^BM_Repeat/repeats:3 %console_report$"},
{"^BM_Repeat/repeats:3 %console_report$"},
{"^BM_Repeat/repeats:3_mean %console_report$"},
{"^BM_Repeat/repeats:3_median %console_report$"},
{"^BM_Repeat/repeats:3_stddev %console_report$"}});
ADD_CASES(TC_JSONOut, {{"\"name\": \"BM_Repeat/repeats:3\",$"},
{"\"name\": \"BM_Repeat/repeats:3\",$"},
{"\"name\": \"BM_Repeat/repeats:3\",$"},
{"\"name\": \"BM_Repeat/repeats:3_mean\",$"},
{"\"name\": \"BM_Repeat/repeats:3_median\",$"},
{"\"name\": \"BM_Repeat/repeats:3_stddev\",$"}});
ADD_CASES(TC_CSVOut, {{"^\"BM_Repeat/repeats:3\",%csv_report$"},
{"^\"BM_Repeat/repeats:3\",%csv_report$"},
{"^\"BM_Repeat/repeats:3\",%csv_report$"},
{"^\"BM_Repeat/repeats:3_mean\",%csv_report$"},
{"^\"BM_Repeat/repeats:3_median\",%csv_report$"},
{"^\"BM_Repeat/repeats:3_stddev\",%csv_report$"}});
// The median differs between even and odd repetition counts, so also check an even count.
BENCHMARK(BM_Repeat)->Repetitions(4);
ADD_CASES(TC_ConsoleOut, {{"^BM_Repeat/repeats:4 %console_report$"},
{"^BM_Repeat/repeats:4 %console_report$"},
{"^BM_Repeat/repeats:4 %console_report$"},
{"^BM_Repeat/repeats:4 %console_report$"},
{"^BM_Repeat/repeats:4_mean %console_report$"},
{"^BM_Repeat/repeats:4_median %console_report$"},
{"^BM_Repeat/repeats:4_stddev %console_report$"}});
ADD_CASES(TC_JSONOut, {{"\"name\": \"BM_Repeat/repeats:4\",$"},
{"\"name\": \"BM_Repeat/repeats:4\",$"},
{"\"name\": \"BM_Repeat/repeats:4\",$"},
{"\"name\": \"BM_Repeat/repeats:4\",$"},
{"\"name\": \"BM_Repeat/repeats:4_mean\",$"},
{"\"name\": \"BM_Repeat/repeats:4_median\",$"},
{"\"name\": \"BM_Repeat/repeats:4_stddev\",$"}});
ADD_CASES(TC_CSVOut, {{"^\"BM_Repeat/repeats:4\",%csv_report$"},
{"^\"BM_Repeat/repeats:4\",%csv_report$"},
{"^\"BM_Repeat/repeats:4\",%csv_report$"},
{"^\"BM_Repeat/repeats:4\",%csv_report$"},
{"^\"BM_Repeat/repeats:4_mean\",%csv_report$"},
{"^\"BM_Repeat/repeats:4_median\",%csv_report$"},
{"^\"BM_Repeat/repeats:4_stddev\",%csv_report$"}});
// Test that a non-repeated test still prints non-aggregate results even when
// only-aggregate reports have been requested
void BM_RepeatOnce(benchmark::State& state) {
for (auto _ : state) {
}
}
BENCHMARK(BM_RepeatOnce)->Repetitions(1)->ReportAggregatesOnly();
ADD_CASES(TC_ConsoleOut, {{"^BM_RepeatOnce/repeats:1 %console_report$"}});
ADD_CASES(TC_JSONOut, {{"\"name\": \"BM_RepeatOnce/repeats:1\",$"}});
ADD_CASES(TC_CSVOut, {{"^\"BM_RepeatOnce/repeats:1\",%csv_report$"}});
// Test that non-aggregate data is not reported when only aggregates are requested.
void BM_SummaryRepeat(benchmark::State& state) {
for (auto _ : state) {
}
}
BENCHMARK(BM_SummaryRepeat)->Repetitions(3)->ReportAggregatesOnly();
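// The MR_Not entries below assert that no per-repetition line appears in the
// output; only the aggregate rows may be printed.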
ADD_CASES(TC_ConsoleOut,
{{".*BM_SummaryRepeat/repeats:3 ", MR_Not},
{"^BM_SummaryRepeat/repeats:3_mean %console_report$"},
{"^BM_SummaryRepeat/repeats:3_median %console_report$"},
{"^BM_SummaryRepeat/repeats:3_stddev %console_report$"}});
ADD_CASES(TC_JSONOut, {{".*BM_SummaryRepeat/repeats:3 ", MR_Not},
{"\"name\": \"BM_SummaryRepeat/repeats:3_mean\",$"},
{"\"name\": \"BM_SummaryRepeat/repeats:3_median\",$"},
{"\"name\": \"BM_SummaryRepeat/repeats:3_stddev\",$"}});
ADD_CASES(TC_CSVOut, {{".*BM_SummaryRepeat/repeats:3 ", MR_Not},
{"^\"BM_SummaryRepeat/repeats:3_mean\",%csv_report$"},
{"^\"BM_SummaryRepeat/repeats:3_median\",%csv_report$"},
{"^\"BM_SummaryRepeat/repeats:3_stddev\",%csv_report$"}});
void BM_RepeatTimeUnit(benchmark::State& state) {
for (auto _ : state) {
}
}
BENCHMARK(BM_RepeatTimeUnit)
->Repetitions(3)
->ReportAggregatesOnly()
->Unit(benchmark::kMicrosecond);
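// Unit(benchmark::kMicrosecond) must propagate to the aggregates as well:
// the *_us_report substitutions and the "us" time_unit checks below verify
// that mean/median/stddev inherit the requested time unit.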
ADD_CASES(TC_ConsoleOut,
{{".*BM_RepeatTimeUnit/repeats:3 ", MR_Not},
{"^BM_RepeatTimeUnit/repeats:3_mean %console_us_report$"},
{"^BM_RepeatTimeUnit/repeats:3_median %console_us_report$"},
{"^BM_RepeatTimeUnit/repeats:3_stddev %console_us_report$"}});
ADD_CASES(TC_JSONOut, {{".*BM_RepeatTimeUnit/repeats:3 ", MR_Not},
{"\"name\": \"BM_RepeatTimeUnit/repeats:3_mean\",$"},
{"\"time_unit\": \"us\",?$"},
{"\"name\": \"BM_RepeatTimeUnit/repeats:3_median\",$"},
{"\"time_unit\": \"us\",?$"},
{"\"name\": \"BM_RepeatTimeUnit/repeats:3_stddev\",$"},
{"\"time_unit\": \"us\",?$"}});
ADD_CASES(TC_CSVOut,
{{".*BM_RepeatTimeUnit/repeats:3 ", MR_Not},
{"^\"BM_RepeatTimeUnit/repeats:3_mean\",%csv_us_report$"},
{"^\"BM_RepeatTimeUnit/repeats:3_median\",%csv_us_report$"},
{"^\"BM_RepeatTimeUnit/repeats:3_stddev\",%csv_us_report$"}});
// ========================================================================= //
// -------------------- Testing user-provided statistics ------------------- //
// ========================================================================= //
const auto UserStatistics = [](const std::vector<double>& v) {
return v.back();
};
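// The statistic receives one measurement per repetition; returning the last
// element keeps the expected behaviour of this test trivial to reason about.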
void BM_UserStats(benchmark::State& state) {
for (auto _ : state) {
}
}
BENCHMARK(BM_UserStats)
->Repetitions(3)
->ComputeStatistics("", UserStatistics);
// Check that the user-provided statistic is computed and reported after the
// default ones. The empty statistic name is intentional: it would sort before
// any other name, so appearing last shows the ordering follows registration
// order rather than the name.
ADD_CASES(TC_ConsoleOut, {{"^BM_UserStats/repeats:3 %console_report$"},
{"^BM_UserStats/repeats:3 %console_report$"},
{"^BM_UserStats/repeats:3 %console_report$"},
{"^BM_UserStats/repeats:3_mean %console_report$"},
{"^BM_UserStats/repeats:3_median %console_report$"},
{"^BM_UserStats/repeats:3_stddev %console_report$"},
{"^BM_UserStats/repeats:3_ %console_report$"}});
ADD_CASES(TC_JSONOut, {{"\"name\": \"BM_UserStats/repeats:3\",$"},
{"\"name\": \"BM_UserStats/repeats:3\",$"},
{"\"name\": \"BM_UserStats/repeats:3\",$"},
{"\"name\": \"BM_UserStats/repeats:3_mean\",$"},
{"\"name\": \"BM_UserStats/repeats:3_median\",$"},
{"\"name\": \"BM_UserStats/repeats:3_stddev\",$"},
{"\"name\": \"BM_UserStats/repeats:3_\",$"}});
ADD_CASES(TC_CSVOut, {{"^\"BM_UserStats/repeats:3\",%csv_report$"},
{"^\"BM_UserStats/repeats:3\",%csv_report$"},
{"^\"BM_UserStats/repeats:3\",%csv_report$"},
{"^\"BM_UserStats/repeats:3_mean\",%csv_report$"},
{"^\"BM_UserStats/repeats:3_median\",%csv_report$"},
{"^\"BM_UserStats/repeats:3_stddev\",%csv_report$"},
{"^\"BM_UserStats/repeats:3_\",%csv_report$"}});
// ========================================================================= //
// --------------------------- TEST CASES END ------------------------------ //
// ========================================================================= //
int main(int argc, char* argv[]) { RunOutputTests(argc, argv); }