Both `.` and `<empty>` already mean "run all benchmarks" here, as noted in this flag's declaration (and below, around lines 448-449).
So this is an NFC (no functional change).
On the other hand, this helps internally, because there, if the flag is empty (or not specified by a binary at all), we don't call RunSpecifiedBenchmarks.
There is still a difference between what `<empty>` means internally (run no benchmarks) and externally (run all benchmarks),
but we can work around this.
* Remove `min` with dead path.
When `isSignificant` is false, `multiplier` is at least 14.
Thus, `min(10, multiplier)` will always return 10.
Addresses part one of #1205.
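For reference, a minimal sketch of that bound, assuming the multiplier is computed as `min_time * 1.4 / i.seconds` and significance means `i.seconds / min_time > 0.1` (simplified, hypothetical names; not the actual implementation):

  #include <algorithm>

  // Sketch only: the part of the iteration-count prediction being simplified.
  double PredictedMultiplier(double seconds, double min_time) {
    double multiplier = min_time * 1.4 / std::max(seconds, 1e-9);
    bool is_significant = (seconds / min_time) > 0.1;
    // If !is_significant, then seconds <= 0.1 * min_time, so
    //   multiplier >= 1.4 * min_time / (0.1 * min_time) = 14 > 10,
    // and std::min(10.0, multiplier) can only ever return 10.0.
    return is_significant ? multiplier : 10.0;  // was: std::min(10.0, multiplier)
  }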
* Remove always false condition.
1. `multiplier <= 1.0` implies `i.seconds >= min_time * 1.4`
2. By (1), `isSignificant` is true: `i.seconds >= min_time * 1.4` implies `i.seconds > min_time`, which implies `i.seconds / min_time > 1 > 0.1`. Thus, the ternary keeps the same multiplier value.
3. `ShouldReportResults` is always called before `PredictNumItersNeeded`; if `i.seconds >= min_time`, the loop is broken and `PredictNumItersNeeded` is never called.
4. (1) and (3) together imply that `multiplier <= 1.0` is never true.
Addresses part 2 of #1205.
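To make the call ordering in (3) concrete, here is a hedged sketch of the surrounding loop, with hypothetical names and the real checks heavily simplified:

  #include <algorithm>

  // Sketch only: hypothetical, heavily simplified shape of the repetition loop.
  struct IterationResults { double seconds; long long iters; };

  // Among other checks in the real runner, we stop once the time target is hit.
  static bool ShouldReportResults(const IterationResults& i, double min_time) {
    return i.seconds >= min_time;
  }

  static long long PredictNumItersNeeded(const IterationResults& i, double min_time) {
    // Only reached when ShouldReportResults() returned false, i.e. with
    // i.seconds < min_time, hence multiplier > 1.4 and `multiplier <= 1.0`
    // can never hold.
    double multiplier = min_time * 1.4 / std::max(i.seconds, 1e-9);
    return static_cast<long long>(i.iters * multiplier);
  }

  static void DoOneRepetitionSketch(IterationResults (*do_n_iterations)(long long),
                                    double min_time) {
    long long iters = 1;
    for (;;) {
      IterationResults i = do_n_iterations(iters);
      if (ShouldReportResults(i, min_time)) break;  // i.seconds >= min_time => done
      iters = PredictNumItersNeeded(i, min_time);   // only sees i.seconds < min_time
    }
  }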
* add g++ to sanitizer buildbots
* add compiler to sanitizer build name
* spell g++ correctly. look, it's early, ok?
* only set libcxx if we're using clang
This can be used together with ArgsProduct() to allow multiple ranges
with different multipliers, and to mix dense and sparse ranges.
Example:
  BENCHMARK(MyTest)->ArgsProduct({
    CreateRange(0, 1024, /*multi=*/32),
    CreateRange(0, 100, /*multi=*/4),
    CreateDenseRange(0, 4, /*step=*/1)
  });
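For intuition, here is a hedged sketch of roughly what the example above expands to, assuming CreateRange follows the same lo / powers-of-the-multiplier / hi expansion as Range() (the exact lists are illustrative, not guaranteed):

  // Roughly the same argument product, spelled out by hand:
  BENCHMARK(MyTest)->ArgsProduct({
    {0, 1, 32, 1024},        // ~ CreateRange(0, 1024, /*multi=*/32)
    {0, 1, 4, 16, 64, 100},  // ~ CreateRange(0, 100, /*multi=*/4)
    {0, 1, 2, 3, 4}          // ~ CreateDenseRange(0, 4, /*step=*/1)
  });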
Co-authored-by: Jen-yee Hong <pcmantw@google.com>
* Enable various sanitizer builds in github actions
* try with off the shelf versions
* nope
* specific version?
* rats
* oops
* remove msan for now
* reorder so env is set before building libc++
Inspired by the original implementation by Hai Huang @haih-g
from https://github.com/google/benchmark/pull/1105.
The original implementation had design deficiencies that
weren't really addressable without redesign, so it was reverted.
In essence, the original implementation consisted of two separable parts:
* reducing the amount of time each repetition is run for, and symmetrically increasing the repetition count
* running the repetitions in random order
While it worked fine for the usual case, it broke down when the user specified repetitions
(that request was completely ignored), or specified a per-repetition min time (while the
repetition count was still adjusted, the per-repetition time was not,
leading to much greater run times).
Here, as I originally suggested in the original review, I'm separating the features,
and dealing with only a single one: running repetitions in random order (see the sketch below).
Now that the runs/repetitions are no longer in order, the tooling may wish to sort the output,
and indeed `compare.py` has been updated to do that: #1168.
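A minimal sketch of the random-order idea, under the assumption that the runner builds the full set of (instance, repetition) runs up front; all names here are hypothetical, not the actual API:

  #include <algorithm>
  #include <random>
  #include <vector>

  struct PendingRun {      // hypothetical: one repetition of one benchmark instance
    int instance_index;
    int repetition_index;
  };

  // Instead of running every repetition of instance 0, then instance 1, ...,
  // enqueue all repetitions and execute them in a shuffled order.
  std::vector<PendingRun> MakeInterleavedSchedule(int num_instances,
                                                  int repetitions,
                                                  std::mt19937& rng) {
    std::vector<PendingRun> schedule;
    for (int inst = 0; inst < num_instances; ++inst)
      for (int rep = 0; rep < repetitions; ++rep)
        schedule.push_back({inst, rep});
    std::shuffle(schedule.begin(), schedule.end(), rng);
    return schedule;
  }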
Currently the lifetime of a single BenchmarkRunner is constrained
to a single RunBenchmark() call, but that will have to change for interleaved
benchmark execution, because we'll need to keep it around so we don't
forget how many repetitions of an instance we've already done.
Currently, the tooling just keeps whatever benchmark order
was present, and this is fine nowadays, but once the benchmarks
can optionally be run interleaved, that will be rather suboptimal.
So, now that I have introduced a family index and a per-family instance index,
we can define an order for the benchmarks and sort them accordingly (see the sketch below).
There is a caveat with aggregates: we assume that they are in order,
and hopefully we won't mess that order up.
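The ordering idea, sketched in C++ for consistency with the rest of this log (the actual change lives in the Python tooling; the struct is hypothetical, with fields assumed to mirror the family_index / per_family_instance_index keys of the JSON output). A stable sort is used so aggregates keep their original relative position:

  #include <algorithm>
  #include <vector>

  struct Run {  // hypothetical, mirroring two fields of each JSON benchmark entry
    int family_index;
    int per_family_instance_index;
  };

  // Define the benchmark order from the two new indices; stable_sort keeps
  // aggregates (and repetitions) in their original relative order.
  void SortRuns(std::vector<Run>& runs) {
    std::stable_sort(runs.begin(), runs.end(), [](const Run& a, const Run& b) {
      if (a.family_index != b.family_index) return a.family_index < b.family_index;
      return a.per_family_instance_index < b.per_family_instance_index;
    });
  }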
While the current variant works, it assumes that all the instances of
a single family will be run together, with nothing in between them.
Naturally, that won't work once the runs may be interleaved.
Much as it makes sense to enumerate all the families,
it makes sense to enumerate the instances within each family.
Alternatively, we could have a global instance index,
but I'm not sure why that would be better.
This will be useful when the benchmarks are not run in order,
so the tools can sort the results properly.
It may be useful for those wishing to further post-process JSON results,
but it is mainly geared towards better support for run interleaving,
where results from the same family may not be close-by in the JSON.
While we won't be able to do much about that in our own outputs,
the tools can and perhaps should reorder the results so that
at least in their output they appear in proper order, not run order.
Note that this only counts the families that were filtered in,
so if e.g. there were three families and we filtered out
the second one, the remaining two families (originally the first and third)
will get family indexes 0 and 1.
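A hedged sketch of that assignment rule (hypothetical types; the real registration code differs): indexes are handed out only to families that survive the filter, so they are always contiguous from 0.

  #include <string>
  #include <vector>

  struct Family { std::string name; int family_index = -1; };  // hypothetical

  // Families that fail the filter keep index -1; the surviving ones
  // get 0, 1, 2, ... in registration order.
  void AssignFamilyIndexes(std::vector<Family>& families,
                           bool (*matches_filter)(const Family&)) {
    int next_index = 0;
    for (Family& f : families)
      if (matches_filter(f)) f.family_index = next_index++;
  }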
It seems that by setting the /topic in freenode #googlebenchmark to point to libera I have angered the powers that be and we've been locked out of the channel. Libera it is then.