* Build libpfm as a dependency to allow collection of perf counters
This commit builds libpfm using rules_foreign_cc and lets the default
build of the benchmark library support perf counter collection without
needing additional work from users.
Tested with a custom target:
```
bazel run \
--override_repository=com_github_google_benchmark=/home/raghu/benchmark \
-c opt :test-bench -- "--benchmark_perf_counters=INSTRUCTIONS,CYCLES"
Using profile: local
<snip>
----------------------------------------------------------------------
Benchmark          Time             CPU   Iterations UserCounters...
----------------------------------------------------------------------
BM_Test        0.279 ns        0.279 ns   1000000000 CYCLES=1.00888 INSTRUCTIONS=2
```
Signed-off-by: Raghu Raja <raghu@enfabrica.net>
* Adding myself to the CONTRIBUTORS file per CLA guidance
Enfabrica has already signed a corporate CLA.
Signed-off-by: Raghu Raja <raghu@enfabrica.net>
* Support optional, user-directed collection of performance counters
This patch lets an engineer collect hardware performance counters
during a benchmark run, for example to drill into the root causes of
a regression. Currently, only single-threaded runs are supported. The
feature is a build-time opt-in followed by a runtime opt-in.
The engineer may run the benchmark executable, passing a list of
performance counter names (using libpfm's naming scheme) at the
command line. The counter values will then be collected and reported
back as UserCounters.
This is different from #240 in that it is a benchmark user opt-in, and
the counter collection is transparent to the benchmark.
Currently, this is only available on platforms that libpfm supports.
libpfm: http://perfmon2.sourceforge.net/
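For illustration, a minimal sketch of what this looks like from the
benchmark author's side. The BM_Memcpy body below is purely illustrative
and not part of this patch; the point is that the benchmark contains no
counter-specific code, and the opt-in happens entirely on the command line.
```
#include <cstring>
#include <vector>

#include "benchmark/benchmark.h"

// Illustrative benchmark body; any existing benchmark works unchanged.
static void BM_Memcpy(benchmark::State& state) {
  std::vector<char> src(4096, 'x');
  std::vector<char> dst(4096);
  for (auto _ : state) {
    std::memcpy(dst.data(), src.data(), src.size());
    benchmark::DoNotOptimize(dst.data());
  }
}
BENCHMARK(BM_Memcpy);

BENCHMARK_MAIN();
```
Run with e.g. `--benchmark_perf_counters=CYCLES,INSTRUCTIONS` (as in the
test output above), the requested counters appear alongside any other
UserCounters in the report; run without the flag, the same binary behaves
as before.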
* 'Use' values param in Snapshot when BENCHMARK_OS_WINDOWS
This avoids an unused-parameter warning that is treated as an error.
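The fix follows the usual pattern of referencing the parameter on the code
path where it would otherwise go unused. A rough sketch of the idea, with
names and types that are illustrative rather than copied from the patch:
```
#include <cstdint>
#include <vector>

// Stand-in type for the sketch; the real code has its own value container.
using PerfCounterValues = std::vector<uint64_t>;

bool Snapshot(PerfCounterValues* values) {
#ifndef BENCHMARK_OS_WINDOWS
  // ... read the configured counters into *values ...
  return values != nullptr;
#else
  (void)values;  // touch the parameter so -Werror=unused-parameter stays quiet
  return false;
#endif
}
```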
* Added missing include for <vector> in perf_counters.cc
* Moved doc to docs
* Added license blurbs