dolysis / benchmark (mirror of https://github.com/google/benchmark.git)
requirements.txt (4 lines, 45 B, plaintext) at commit eacce0b503
bump numpy, as per dependabot (#1336)
2022-01-31 10:28:11 +00:00
numpy == 1.21
bazel support for tools (#982): package versions updated to be in sync with modern Python versions
2020-11-06 09:10:04 +00:00
scipy == 1.5.4
compare.py: compute and print 'OVERALL GEOMEAN' aggregate (#1289). Despite the wide variety of features we provide, some people still have the audacity to complain and demand more. Concretely, I very often want to see the overall result of the benchmark: is the 'new' side better or worse, overall, across all the non-aggregate time/CPU measurements? This comes up for me most often when I want to quickly see what effect some LLVM optimization change has on a benchmark. The idea is straightforward: produce four lists (wall times for the LHS benchmark, CPU times for the LHS benchmark, wall times for the RHS benchmark, CPU times for the RHS benchmark), compute the geomean of each of those four lists, and then compute the two percentage changes:
* between the geomean wall time of the LHS benchmark and the geomean wall time of the RHS benchmark
* between the geomean CPU time of the LHS benchmark and the geomean CPU time of the RHS benchmark
and voila! It is complicated by the fact that it needs to gracefully handle different time units, so a pandas.Timedelta dependency is introduced. That is the only library that does not barf on floating-point times; I have tried numpy.timedelta64 (only takes integers) and Python's datetime.timedelta (does not take nanoseconds), and they won't do. Fixes https://github.com/google/benchmark/issues/1147. (A sketch of this computation follows the pandas entry below.)
2021-11-24 10:47:08 +00:00
pandas == 1.1.5
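
As a rough illustration of the 'OVERALL GEOMEAN' idea described in the compare.py commit above, here is a minimal Python sketch. It is not the actual compare.py implementation: the function names (geomean, to_seconds, overall_change) and the (value, unit) input shape are assumptions made for this example; only the use of pandas.Timedelta to normalize mixed time units follows the commit's stated rationale.

    # Minimal sketch (not the real compare.py code): normalize measurements that
    # may carry different time units to seconds via pandas.Timedelta, take the
    # geometric mean of each list, and report the relative change between sides.
    import math
    import pandas as pd

    def geomean(values):
        """Geometric mean of a list of positive floats."""
        return math.exp(sum(math.log(v) for v in values) / len(values))

    def to_seconds(value, unit):
        """Convert a (value, unit) measurement, e.g. (250.0, 'ns'), to seconds."""
        return pd.Timedelta(value, unit=unit).total_seconds()

    def overall_change(lhs, rhs):
        """Percentage change of the RHS geomean relative to the LHS geomean.

        lhs and rhs are lists of (value, unit) pairs, e.g. the wall times of the
        non-aggregate measurements on each side (hypothetical input shape).
        """
        lhs_geomean = geomean([to_seconds(v, u) for v, u in lhs])
        rhs_geomean = geomean([to_seconds(v, u) for v, u in rhs])
        return (rhs_geomean - lhs_geomean) / lhs_geomean

    # Example: some measurements in nanoseconds, others in microseconds.
    lhs_wall = [(250.0, "ns"), (3.1, "us"), (260.0, "ns")]
    rhs_wall = [(240.0, "ns"), (2.9, "us"), (250.0, "ns")]
    print(f"OVERALL GEOMEAN wall-time change: {overall_change(lhs_wall, rhs_wall):+.2%}")

Running this prints the percentage change of the RHS geomean relative to the LHS geomean, which is the same kind of summary the 'OVERALL GEOMEAN' row is meant to convey; the same computation would be repeated for CPU times.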