The reason for this is that `setuptools-scm` derives the installed
version from the last release tag - if no tag is found, the version
falls back to a default of v0.1.0. This was the case in GitHub Actions,
where only the PR branch is checked out and no tags are available.
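As a hedged illustration (not part of the change), the version that
setuptools-scm would derive for a given checkout can be inspected with
its public `get_version` API:

```python
# Sketch: print the version setuptools-scm derives for this checkout.
# In a shallow CI checkout with no reachable release tag, this falls
# back to a 0.1-based default instead of the real release version.
from setuptools_scm import get_version

print(get_version(root="."))
```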
Also unpins build system requirements in the `pyproject.toml`.
The sdist build was changed from `python setup.py sdist` to the `build`
frontend for forward compatibility - `build` is the more modern tool and
the solution advertised by both cibuildwheel and the PyPA itself.
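For illustration only, the `build` frontend can also be driven from
Python; this sketch assumes the CI simply runs `python -m build --sdist`
and is not the actual workflow code:

```python
# Rough equivalent of `python -m build --sdist` via build's Python API.
# Note: ProjectBuilder builds in the current environment and skips the
# build isolation that the command-line frontend sets up.
from build import ProjectBuilder

ProjectBuilder(".").build("sdist", "dist/")
```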
Bump `actions/setup-python` to v5, `pypa/gh-action-pypi-publish` to v1.8.11,
and `docker/setup-qemu-action` to v3.
For some reason, editable pip installs are now broken, which breaks
the pre-commit workflow at its `pip install -e .` step.
Since the normal install is unaffected, we can just drop the `-e` switch.
It does not matter which mode is used, since the environment is only
used for linting.
* Add pre-commit config and GitHub Actions job
Contains the following hooks:
* buildifier - for formatting and linting Bazel files.
* mypy, ruff, isort, black - for Python typechecking, import hygiene,
static analysis, and formatting.
The pylint CI job was changed to be a pre-commit CI job, where pre-commit
is bootstrapped via Python.
Pylint is currently no longer part of the
code checks, but can be re-added if requested. It was dropped because
it does not play nicely with pre-commit, and much of its functionality
and responsibilities are already covered by ruff.
* Add dev extra to pyproject.toml for development installs
* Clarify that pre-commit contains only Python and Bazel hooks
* Add one-line docstrings to Bazel modules
* Apply buildifier pre-commit fixes to Bazel files
* Apply pre-commit fixes to Python files
* Supply --profile=black to isort to prevent conflicts
* Fix nanobind build file formatting
* Add tooling configs to `pyproject.toml`
In particular, set line length 80 for all Python files.
* Reformat all Python files to line length 80, fix return type annotations
Also ignores the `tools/compare.py` and `tools/gbench/report.py` files
for mypy, since they emit a barrage of errors which we can deal with
later. The errors are mostly related to dynamic classmethod definition.
* Change nanobind linkage to response file approach
This change needs https://github.com/bazelbuild/bazel/pull/18952 to be
merged first. Fixes linkage of GBM's nanobind bindings on macOS by
supplying a linker response file instead of `-undefined dynamic_lookup`,
which has since been deprecated on macOS.
* Fix bazel_skylib checksum, bump skylib version in MODULE.bazel
* Bump Bazel to version 6.4.0 for linker response file support
* Add Python 3.12 support tag
* Bump nanobind to latest stable v1.6.2 tag
* Add PyPI trusted publishing to GitHub workflow, add Python 3.12 wheel builds
Trusted publishing has been available since v1.8.0 of the pypa-publish
action. It enables password-less authentication and wheel uploads from
the wheel upload job.
`cibuildwheel` was bumped to v2.16.2 to allow Python 3.12 wheel builds.
More info on trusted publishing:
https://github.com/marketplace/actions/pypi-publish#trusted-publishing
The Windows distribution was reverted to `latest` in the OS matrix,
since the discovery problem of MSVC was fixed in a Bazel patch release.
* Bump nanobind to stable v1.7.0 tag
The Windows toolchain detection fix made it into Bazel 6.3.0, so the CI
should work again with the re-enabled `windows-latest` marker.
Require Bazel 6.3.0 in the Linux container setup in `cibuildwheel`.
* add compiler to build-and-test and create min-cmake CI bot
* fix CXX env var
* downgrade msvc generator for cmake-3.10
* assume windows users have the latest cmake
* End support for Python 3.7, update cibuildwheel and publish actions
Removes Python 3.7 from the support matrix, since it does not support
PEP 590 vectorcalls.
Bumps the `cibuildwheel` and `pypa-publish` actions to their latest
available versions respectively.
* Add nanobind to the Bazel dependencies, add a BUILD file
The build file builds nanobind as a static `cc_library`. Currently,
the git SHA points to HEAD, since some necessary features have not
been included in a release yet.
* Delete pybind11 BUILD file
* Switch bindings implementation to nanobind
Switches the binding tool from `pybind11` to `nanobind`. Most changes
in the build setup itself were drop-in replacements of existing code
with the corresponding nanobind names; no new concepts needed to be
implemented.
Sets the minimum required macOS to 10.14 for full C++17 support. Also,
to avoid ambiguities in Bazel, build for macOS 11 on Mac ARM64.
* Use Bazel select for linker options
Guards against unknown linker option errors by selecting required
linker options for nanobind only on macOS, where they are relevant.
Other changes:
* Bump cibuildwheel action to v2.12.0
* Bump Bazel for aarch64 linux wheels to 6.0.0
* Remove C++17 flag from build files since it is present in setup.py `bazel build` command
* Bump nanobind commit to current HEAD (TBD: Bump to next stable release)
* Unbreak Windows builds of nanobind-based bindings
Guards compiler options behind a new `select` macro choosing between
MSVC and non-MSVC toolchains.
Other changes:
* Inject the proper C++17 standard cxxopt in the `setup.py` build
command.
* Bump nanobind to current HEAD.
* Make `macos` a benchmark-wide condition, with public visibility to
allow its use in the nanobind BUILD file.
* Fall back to `nb::implicitly_convertible` for Counter construction
Since `benchmark::Counter` only has a constructor for `double`,
the nanobind `nb::init_implicit` template cannot be used. Therefore,
to support implicit construction from ints, we fall back to the
`nb::implicitly_convertible` template instead.
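On the Python side, the effect is that plain numbers keep working
wherever a `Counter` is expected. A hedged sketch of bindings usage:

```python
# Sketch: a plain int assigned into state.counters is implicitly
# converted to a benchmark.Counter via nb::implicitly_convertible.
import google_benchmark as benchmark

@benchmark.register
def counted(state):
    items = 0
    while state:
        items += 1
    state.counters["items"] = items  # implicit int -> Counter

if __name__ == "__main__":
    benchmark.main()
```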
distutils is deprecated and will be removed in Python 3.12, so this
commit modernizes the Python bindings `setup.py` file in order to
future-proof the code.
On top of this, type hints were added for all of the convenience
functions to make static type checking adoption easier in the future,
if desired.
A context manager was added to temporarily write the Python include
path to the Bazel WORKSPACE file - but unlike previously, the
WORKSPACE file is reverted to its previous state after the build to not
produce changes on every rebuild.
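Roughly, the idea looks like this (names and the placeholder are
illustrative, not the actual implementation):

```python
# Illustrative sketch: substitute the Python include path for a
# placeholder in WORKSPACE during the build, then restore the original
# file so no diff is left behind.
import contextlib
import sysconfig

@contextlib.contextmanager
def temp_python_include(workspace="WORKSPACE", token="{PYTHON_INCLUDE}"):
    with open(workspace) as f:
        original = f.read()
    try:
        include = sysconfig.get_paths()["include"]
        with open(workspace, "w") as f:
            f.write(original.replace(token, include))
        yield
    finally:
        with open(workspace, "w") as f:
            f.write(original)
```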
Lastly, the Python bindings test matrix was extended to all major
platforms to create a more complete picture of the current state of
the bindings, especially with regards to upcoming wheel builds.
This commit bumps the pybind11 version to 2.10.0, the first pybind11
version to ship with Python 3.11 support. This change is necessary
to facilitate wheel builds for Python 3.11 and upward, as changes to
Python internals in 3.11 broke compatibility with older pybind11
versions.
Co-authored-by: Dominic Hamon <dominichamon@users.noreply.github.com>
This commit enables arm64 Linux wheel builds for Python.
It also changes the build procedure on Linux using
cibuildwheel in GitHub Actions. Instead of the more granular, verbose
approach that was used until now, we opt for the GitHub Action released
by cibuildwheel directly.
We also change the Bazel install procedure in the manylinux Docker
container image. Previously, Bazel was installed from an added RHEL
repo, since that is the officially recommended way of installing Bazel
on CentOS platforms. However, the latest Bazel available there for
manylinux2014 is Bazel 4, which is showing its age with the release of
Bazel 6 coming up as of this commit.
After this change, prebuilt Bazel binaries are downloaded using
wget directly from the Bazel GitHub release page. Since Bazel is built
for both x86 and arm64 on Linux, we immediately gain wheel build
support for these architectures. However, since the manylinux image
reports its architecture as aarch64 rather than arm64, a shell script
was added that normalizes aarch64 to arm64 and installs the matching
arm64 Bazel binary if necessary.
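In Python terms, the normalization amounts to something like the
following (a hypothetical sketch of the shell script's logic):

```python
# Sketch: map the machine name reported inside the manylinux image to
# the architecture name used by Bazel's Linux release binaries.
import platform

arch = platform.machine()  # "aarch64" inside the arm64 manylinux image
bazel_arch = "arm64" if arch == "aarch64" else arch
print(f"installing Bazel binary for linux-{bazel_arch}")
```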
* attempt to fix sanitizer builds by moving away from llvm head
* extra verbosity
* try clang 13 and add extra logging
* get latest clang and try again
* add multiple OSes to bazel workflow
* correct indent
* only set copts when they're supported by the OS
* os check should work
* pull out cxx03_test for per-platform stuff
* attempt to fix windows test output
Previously, with the unrolled job matrix, all jobs had to be listed
individually in the `needs` section of the PyPI upload job. But since
the wheel builds have now been reimplemented as a single job matrix
named `build_wheels`, the job name in the `needs` section of the PyPI
upload job has to be adjusted accordingly to avoid errors.
This commit adds a `bazel shutdown` command to the setuptools
BazelExtension. As a result, wheel builds shut down the Bazel server
and terminate gracefully after the build, something that was previously
an issue on Windows builds.
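Schematically, the build step now does something like this (a hedged
sketch, not the exact BazelExtension code):

```python
# Sketch: after the Bazel build, shut the Bazel server down so the
# wheel build process can terminate cleanly.
import subprocess

def bazel_build_and_shutdown(target):
    subprocess.check_call(["bazel", "build", target])
    subprocess.check_call(["bazel", "shutdown"])
```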
Since this makes the Windows-specific `--no-clean` flag for `pip wheel`
unnecessary, the change has the side effect that GitHub Actions wheel
builds via `cibuildwheel` can now be written as a compact job matrix
again, which removes a lot of duplicated code in the corresponding
workflow file.
Lastly, some GitHub-provided actions (checkout, setup-python,
upload/download-artifact) were bumped to the latest v3 version.
This commit adds a job, run after the wheel build job, that is
responsible for uploading the built wheels to PyPI.
The job only runs on successful completion of all build jobs, and
uploads to PyPI using a secret added to the Google Benchmark repo (TBD).
Also, the setup-python action has been bumped to the latest version v3.
* Fix dependency typo and unpin cibuildwheel version in wheel building action
* Move to monolithic build jobs, restrict to x64 architectures
As of this commit, all wheel build jobs complete on GitHub Actions.
Since some platform-specific options had to be set to fix different
types of build problems along the way, the build job matrix was
unrolled.
Still left TODO:
* Wheel testing after build (running the Python bindings test)
* Emulating bazel on other architectures to build aarch64/i686/ppc64le
* Enabling Win32 (this fails due to linker errors).
* Add binding test commands for all wheels, set macOSX deployment target to 10.9
* Add instructions for updating Python __version__ variable before release creation
* use docker container for ubuntu-16.04 builds
* install some bits
* no sudo in docker container
* cmake, not cmake3
* include perfcounters
* still no sudo in docker containers
* yes please, apt
* add g++ to sanitizer buildbots
* add compiler to sanitizer build name
* spell g++ correctly. look, it's early, ok?
* only set libcxx if we're using clang
* Enable various sanitizer builds in github actions
* try with off the shelf versions
* nope
* specific version?
* rats
* oops
* remove msan for now
* reorder so env is set before building libc++
* add g++-6 to ubuntu-14.04
* fix syntax
* fix yamllint errors for build-and-test
* fix 'add-apt-repository' command not found
* make 'run tests' explicit
* enable testing and run both release and debug
* oops
* Add ubuntu-14.04 build and test workflow
* avoid '.' in job name
* no need for fail fast
* fix workflow syntax
* install some stuff
* better compiler installations
* update before install
* just say yes
* trying to match up some paths
* Difference between runner and github context in container?
* Try some judicious logging
* cmake 3.5+ required
* specific compiler versions
* need git for googletest
* Disable testing on old compilers
* disable testing properly
* Add 32-bit build support to build-and-test
* attempt different yaml multiline string format
* syntax fixes to yaml
* switch to getting alternative compilers working
* remove done TODO
* trying to separate out windows
* oops, typo.
* add TODOs for missing builds wrt travis
* Support optional, user-directed collection of performance counters
The patch helps an engineer who wishes to drill into the root causes
of a regression, for example. Currently, only single-threaded runs
are supported. The feature is a build-time opt-in, and then a runtime
opt-in.
The engineer may run the benchmark executable, passing a list of
performance counter names (using libpfm's naming scheme) at the
command line. The counter values will then be collected and reported
back as UserCounters.
This is different from #240 in that it is a benchmark user opt-in, and
the counter collection is transparent to the benchmark.
Currently, this is only supported on platforms where libpfm is
supported.
libpfm: http://perfmon2.sourceforge.net/
* 'Use' values param in Snapshot when BENCHMARK_OS_WINDOWS
This is to avoid unused parameter warning-as-error
* Added missing include for <vector> in perf_counters.cc
* Moved doc to docs
* Added license blurbs