If someone or something ever needs the dynamic library as a Bazel build
artifact, we can figure that out for them then, but right now, there is
no strong reason to be wrangling various `export.h`-controlling macros.
Fixes #1372.
This commit fixes the previous breakage in Python wheel builds on Windows by adding a `local_defines` attribute to the `cc_binary` generated as part of the Python bindings build. The added define is
picked up by the auto-generated export header `benchmark_export.h`, unsetting the benchmark export macro.
Furthermore, the `linkshared` and `linkstatic` attributes are now passed booleans instead of ints, making the target definition more directly interpretable to the human reader.
The fix was suggested by @junyer in the corresponding GitHub issue thread https://github.com/google/benchmark/issues/1367 - thank you for the suggestion!
This commit adds a job, running after the wheel building job, that is responsible for uploading the built wheels to PyPI.
The job only runs on successful completion of all build jobs, and uploads to PyPI using a secret added to the Google Benchmark repo (TBD).
Also, the setup-python action has been bumped to the latest version v3.
* Add SetBenchmarkFilter() to set --benchmark_filter flag value in user code.
Use case: provide an API to set this flag independently of the flag's implementation (i.e., absl flags vs. benchmark's own flag facility); see the usage sketch below.
* add test
* added notes on Initialize()
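For reference, a minimal usage sketch; the benchmark itself is made up, and only SetBenchmarkFilter() comes from this change:

```c++
#include <benchmark/benchmark.h>

static void BM_Example(benchmark::State& state) {
  for (auto _ : state) {
    // work under test would go here
  }
}
BENCHMARK(BM_Example);

int main(int argc, char** argv) {
  benchmark::Initialize(&argc, argv);
  // Programmatically set the filter, equivalent to
  // passing --benchmark_filter=BM_Example.* on the command line.
  benchmark::SetBenchmarkFilter("BM_Example.*");
  benchmark::RunSpecifiedBenchmarks();
  benchmark::Shutdown();
  return 0;
}
```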
This commit adds the two fields `long_description` and `long_description_content_type` to `setup.py`. These can be used for proper project presentation on the PyPI project page, which is currently a placeholder.
* Add option to set the default time unit globally
This commit introduces the `--benchmark_time_unit={ns|us|ms|s}` command line argument. The argument only affects benchmarks where the time unit is not set explicitly (see the usage sketch below).
* Update AUTHORS and CONTRIBUTORS
* Test `SetDefaultTimeUnit`
* clang format
* Use `GetDefaultTimeUnit()` for initializing `TimeUnit` variables
* Review fixes
* Export functions
* Add comment
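For reference, a minimal usage sketch; the benchmark is made up, and only SetDefaultTimeUnit() and the new flag come from this change:

```c++
#include <benchmark/benchmark.h>

static void BM_NoExplicitUnit(benchmark::State& state) {
  for (auto _ : state) {
    // work under test would go here
  }
}
// No ->Unit(...) here, so the global default time unit applies.
BENCHMARK(BM_NoExplicitUnit);

int main(int argc, char** argv) {
  // Programmatic counterpart of passing --benchmark_time_unit=ms;
  // benchmarks that call ->Unit(...) explicitly are unaffected.
  benchmark::SetDefaultTimeUnit(benchmark::kMillisecond);
  benchmark::Initialize(&argc, argv);
  benchmark::RunSpecifiedBenchmarks();
  benchmark::Shutdown();
  return 0;
}
```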
* Make generate_export_header.bzl work for Windows.
While I'm here, bring the generated code slightly closer to what CMake
would generate nowadays.
Fixes #1351.
* Fix define.
* Fix export_import_condition.
* Fix guard.
* introduce the possibility to customize the help printer function (see the sketch below)
Signed-off-by: Vincenzo Palazzo <vincenzopalazzodev@gmail.com>
* fixed naming convention, and introduced the option function in the init method
Signed-off-by: Vincenzo Palazzo <vincenzopalazzodev@gmail.com>
* remove the macros to inject the helper function
Signed-off-by: Vincenzo Palazzo <vincenzopalazzodev@gmail.com>
* remove the default implementation, and introduce the nullptr
Signed-off-by: Vincenzo Palazzo <vincenzopalazzodev@gmail.com>
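A minimal sketch of how a user-supplied help printer could be hooked up, assuming the callback is a no-argument function passed as an optional third parameter to benchmark::Initialize; the printer body and binary name are made up:

```c++
#include <cstdio>

#include <benchmark/benchmark.h>

// Hypothetical custom help printer. The assumption here is that Initialize
// accepts a void() callback as its optional third argument and invokes it
// instead of the built-in help text.
void PrintMyHelp() {
  std::printf("usage: my_bench [--benchmark_filter=<regex>] [...]\n");
}

int main(int argc, char** argv) {
  benchmark::Initialize(&argc, argv, &PrintMyHelp);
  benchmark::RunSpecifiedBenchmarks();
  benchmark::Shutdown();
  return 0;
}
```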
* Expose default display reporter creation in public API
This is useful when a custom reporter wants to fall back on the default
display reporter, but doesn't necessarily have access to the benchmark
library flag configuration.
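A rough sketch of the intended fallback pattern; the wrapper class, stream forwarding, and ownership details are assumptions, and only CreateDefaultDisplayReporter() itself comes from this change:

```c++
#include <vector>

#include <benchmark/benchmark.h>

// Custom reporter that delegates to the default display reporter. The sketch
// assumes the returned pointer stays owned by the library, so it is not freed.
class MyReporter : public benchmark::BenchmarkReporter {
 public:
  MyReporter() : fallback_(benchmark::CreateDefaultDisplayReporter()) {}

  bool ReportContext(const Context& context) override {
    // Forward our output streams before delegating.
    fallback_->SetOutputStream(&GetOutputStream());
    fallback_->SetErrorStream(&GetErrorStream());
    return fallback_->ReportContext(context);
  }

  void ReportRuns(const std::vector<Run>& runs) override {
    // Custom per-run handling could go here; then emit the default output.
    fallback_->ReportRuns(runs);
  }

 private:
  benchmark::BenchmarkReporter* fallback_;  // not owned
};
```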
* Make use of unique_ptr in the random interleaving test.
* clang-format
* The parameterized tests check both floating point and integral types. We might as well use types that avoid truncation warnings across platforms.
* static_cast version of how to avoid truncation warnings in basic_test
Co-authored-by: Staffan Tjernstrom <staffantj@users.noreply.github.com>
This commit contains a fix for macOS ARM64 wheel builds in Google Benchmark's wheel building CI.
Previously, while `cibuildwheel` itself properly identified the need for cross-compilations and produced valid ARM platform wheels, the included shared library containing the Python bindings
built by `bazel` was built for x86, resulting in immediate errors upon import.
To fix this, logic was added to the setup.py file that adds the "--cpu=darwin_arm64" and "--macos_cpus=arm64" switches to the `bazel build` command if
1) The current system platform is macOS Darwin running on the x86_64 architecture, and
2) The ARCHFLAGS environment variable, set by wheel build systems like conda and cibuildwheel, contains the tag "arm64".
This way, bazel correctly sets the target CPU to ARM64, and produces functional wheels for the macOS ARM line of CPUs.
This patch fixes #1306 by reducing the number of pinned instances of
PerfCounters.
The issue is caused by creating multiple pinned events in the same
thread; doing so results in Snapshot(PerfCounterValues* values)
failing, and that failure is now discoverable.
Creating multiple pinned events is currently unsupported behavior.
The error would be detected at read() time, not at
perf_event_open() / ioctl() time.
The unsupported behavior above was confirmed by Stephane Eranian @seranian,
who also pointed out the detection method.
This patch was finished under the guidance of Mircea Trofin @mtrofin.
* Revert "Refine docs on changing cpufreq governor (#1325)"
This reverts commit 9e859f5bf5.
* Refine the User Guide CPU Frequency Scaling section
The text now describes the cpupower command, so users in a hurry
have something to copy/paste that will likely work. It then
suggests that there are probably more convenient options available
that people can look into.
This reverts the prior commit, which introduced a shell script
that doesn't work. It also retains the spirit of the original
fix: no longer recommend setting the frequency governor to
"powersave", which might not be appropriate or available.
Note: I did attempt to write a bash script that set the governor
to "powersave" for the duration of a single command, but I gave
up for many reasons:
1) it got complex, in part because the cpupower command does not
seem to be designed for scripts (e.g. it prints out complex
English phrases).
2) munging /proc/sys files directly feels unstable and less than
universal. libcpupower and cpupower are designed to abstract
those away, because the details can vary.
3) there are better options. E.g. various GUI programs, and
even Gnome's core Settings UI, let you adjust the system's
performance mode without root access.
Fixes #1325, #1327
* Address MSVC C4722 warning in tests
Some test paths deliberately exit, and it appears that the appropriate declspec
does not stop the compiler from generating the C4722 warning as one might expect.
Per https://github.com/google/benchmark/issues/826#issuecomment-851995549
this commit ignores the warning for the affected call site.
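The suppression pattern looks roughly like this; the struct is a made-up stand-in for the affected test code, and only the warning number and pragma approach come from the commit:

```c++
#include <cstdlib>

#ifdef _MSC_VER
#pragma warning(push)
// C4722: destructor never returns. The exit below is deliberate in the test,
// so the warning is suppressed for this site only.
#pragma warning(disable : 4722)
#endif

// Hypothetical helper mirroring the pattern in the affected tests.
struct TestExiter {
  ~TestExiter() { std::exit(0); }
};

#ifdef _MSC_VER
#pragma warning(pop)
#endif
```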
* Fix up Formatting
* Fix up formatting issue on pragmas
* Fix up formatting issue on pragmas take 2
Co-authored-by: Staffan Tjernstrom <staffantj@users.noreply.github.com>
* Fix #1159: Harmonize the types between the loop counter and the vector of indices
The loop counter is a size_t, but the indices were stored in a vector of int, leading to
potential overflow warnings. In order to avoid accidental run-time overflow,
this change modifies the vector type rather than the loop counter.
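A before/after sketch of the kind of change involved (names are hypothetical):

```c++
#include <cstddef>
#include <vector>

// Before: pushing a size_t counter into a vector<int> could truncate.
// std::vector<int> indices;

// After: the element type matches the counter, so there is no narrowing.
std::vector<size_t> indices;

void CollectIndices(size_t count) {
  for (size_t i = 0; i < count; ++i) {
    indices.push_back(i);  // no implicit size_t -> int conversion warning
  }
}
```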
* Fix up line endings
* Update AUTHORS
Add Staffan Tjernstrom <staffantj@gmail.com>
Co-authored-by: Staffan Tjernstrom <staffantj@users.noreply.github.com>
This applies a fix that used to exist in LLVM's downstream copy of
this library, from
948ce4e6ed.
I presume this warning isn't present if built with MSVC or Clang-cl,
but it's printed in MinGW mode. As the benchmark library adds
-Werror, this is a fatal error when built in MinGW mode.
Despite the wide variety of the features we provide,
some people still have the audacity to complain and demand more.
Concretely, I *very* often would like to see the overall result
of the benchmark: is the 'new' better or worse, overall,
across all the non-aggregate time/CPU measurements?
This comes up for me most often when I want to quickly see
what effect some LLVM optimization change has on the benchmark.
The idea is straightforward: just produce four lists:
wall times for LHS benchmark, CPU times for LHS benchmark,
wall times for RHS benchmark, CPU times for RHS benchmark;
then compute geomean for each one of those four lists,
and compute the two percentage changes between
* geomean wall time for LHS benchmark and geomean wall time for RHS benchmark
* geomean CPU time for LHS benchmark and geomean CPU time for RHS benchmark
and voila!
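For illustration, with made-up numbers: if the LHS wall times are 10 ns, 20 ns and 40 ns, their geomean is (10 * 20 * 40)^(1/3) = 20 ns; if the RHS wall times are 8 ns, 16 ns and 32 ns, their geomean is 16 ns, so the overall wall-time change comes out to 16/20 - 1 = -0.20, i.e. roughly 20% faster, assuming the change is reported as (RHS - LHS) / LHS like the per-benchmark rows.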
It is complicated by the fact that it needs to gracefully handle
different time units, so a pandas.Timedelta dependency is introduced.
That is the only library that does not barf upon floating-point times;
I have tried numpy.timedelta64 (only takes integers)
and Python's datetime.timedelta (does not take nanoseconds),
and they won't do.
Fixes https://github.com/google/benchmark/issues/1147