// Copyright (c) 2019-present, Facebook, Inc. All rights reserved.
// This source code is licensed under both the GPLv2 (found in the
// COPYING file in the root directory) and Apache 2.0 License
// (found in the LICENSE.Apache file in the root directory).
//
// Implementation details of various Bloom filter implementations used in
// RocksDB. (DynamicBloom is in a separate file for now because it
// supports concurrent write.)

#pragma once
#include <stddef.h>
#include <stdint.h>

#include <cmath>

#include "port/port.h"  // for PREFETCH
#include "rocksdb/slice.h"
#include "util/hash.h"

#ifdef __AVX2__
#include <immintrin.h>
#endif

namespace ROCKSDB_NAMESPACE {

class BloomMath {
 public:
  // False positive rate of a standard Bloom filter, for given ratio of
  // filter memory bits to added keys, and number of probes per operation.
  // (The false positive rate is effectively independent of scale, assuming
  // the implementation scales OK.)
  static double StandardFpRate(double bits_per_key, int num_probes) {
    // Standard very-good-estimate formula. See
    // https://en.wikipedia.org/wiki/Bloom_filter#Probability_of_false_positives
    return std::pow(1.0 - std::exp(-num_probes / bits_per_key), num_probes);
  }
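  // Illustrative check of the formula above (computed from the formula,
  // not measured): at 10 bits/key with 6 probes,
  //   StandardFpRate(10.0, 6) == pow(1 - exp(-0.6), 6) ~= 0.0084,
  // i.e. about a 0.84% false positive rate.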

  // False positive rate of a "blocked"/"sharded"/"cache-local" Bloom filter,
  // for given ratio of filter memory bits to added keys, number of probes per
  // operation (all within the given block or cache line size), and block or
  // cache line size.
  static double CacheLocalFpRate(double bits_per_key, int num_probes,
                                 int cache_line_bits) {
    if (bits_per_key <= 0.0) {
      // Fix a discontinuity
      return 1.0;
    }
    double keys_per_cache_line = cache_line_bits / bits_per_key;
    // A reasonable estimate is the average of the FP rates for one standard
    // deviation above and below the mean bucket occupancy. See
    // https://github.com/facebook/rocksdb/wiki/RocksDB-Bloom-Filter#the-math
    double keys_stddev = std::sqrt(keys_per_cache_line);
    double crowded_fp = StandardFpRate(
        cache_line_bits / (keys_per_cache_line + keys_stddev), num_probes);
    double uncrowded_fp = StandardFpRate(
        cache_line_bits / (keys_per_cache_line - keys_stddev), num_probes);
    return (crowded_fp + uncrowded_fp) / 2;
  }
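  // Illustrative check (computed from the estimate above, not measured):
  //   CacheLocalFpRate(10.0, 6, 512) ~= 0.0095 (~0.95% FP rate),
  // vs. ~0.84% for unblocked StandardFpRate(10.0, 6): the accuracy cost
  // of confining each key's probes to a single 512-bit cache line.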

  // False positive rate of querying a new item against `num_keys` items, all
  // hashed to `fingerprint_bits` bits. (This assumes the fingerprint hashes
  // themselves are stored losslessly. See Section 4 of
  // http://www.ccs.neu.edu/home/pete/pub/bloom-filters-verification.pdf)
  static double FingerprintFpRate(size_t num_keys, int fingerprint_bits) {
    double inv_fingerprint_space = std::pow(0.5, fingerprint_bits);
    // Base estimate assumes each key maps to a unique fingerprint.
    // Could be > 1 in extreme cases.
    double base_estimate = num_keys * inv_fingerprint_space;
    // To account for potential overlap, we choose between two formulas
    if (base_estimate > 0.0001) {
      // A very good formula assuming we don't construct a floating point
      // number extremely close to 1. Always produces a probability < 1.
      return 1.0 - std::exp(-base_estimate);
    } else {
      // A very good formula when base_estimate is far below 1. (Subtract
      // away the integral-approximated sum that some key has same hash as
      // one coming before it in a list.)
      return base_estimate - (base_estimate * base_estimate * 0.5);
    }
  }
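  // Illustrative check (computed from the formulas above, not measured):
  //   FingerprintFpRate(40000000, 32) ~= 1 - exp(-40000000 / 2^32) ~= 0.0093,
  // comparable to a 10 bits/key filter's own FP rate, which is why a
  // 32-bit hash becomes a liability in the tens of millions of keys.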

  // Returns the probability of either of two independent(-ish) events
  // happening, given their probabilities. (This is useful for combining
  // results from StandardFpRate or CacheLocalFpRate with FingerprintFpRate
  // for a hash-efficient Bloom filter's FP rate. See Section 4 of
  // http://www.ccs.neu.edu/home/pete/pub/bloom-filters-verification.pdf)
  static double IndependentProbabilitySum(double rate1, double rate2) {
    // Use formula that avoids floating point extremely close to 1 if
    // rates are extremely small.
    return rate1 + rate2 - (rate1 * rate2);
  }
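  // Illustrative: IndependentProbabilitySum(0.0095, 0.0093) ~= 0.0187, the
  // inclusion-exclusion P(A or B) = P(A) + P(B) - P(A)P(B) for independent
  // events, written to avoid cancellation in 1 - (1 - rate1) * (1 - rate2).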
};

// A fast, flexible, and accurate cache-local Bloom implementation with
// SIMD-optimized query performance (currently using AVX2 on Intel). Write
// performance and non-SIMD read are very good, benefiting from FastRange32
// used in place of % and single-cycle multiplication on recent processors.
//
// Most other SIMD Bloom implementations sacrifice flexibility and/or
// accuracy by requiring num_probes to be a power of two and restricting
// where each probe can occur in a cache line. This implementation sacrifices
// SIMD-optimization for add (might still be possible, especially with AVX512)
// in favor of allowing any num_probes, not crossing cache line boundary,
// and accuracy close to theoretical best accuracy for a cache-local Bloom.
// E.g. theoretical best for 10 bits/key, num_probes=6, and 512-bit bucket
// (Intel cache line size) is 0.9535% FP rate. This implementation yields
// about 0.957%. (Compare to LegacyLocalityBloomImpl<false> at 1.138%, or
// about 0.951% for 1024-bit buckets, cache line size for some ARM CPUs.)
//
// This implementation can use a 32-bit hash (let h2 be h1 * 0x9e3779b9) or
// a 64-bit hash (split into two uint32s). With many millions of keys, the
// false positive rate associated with using a 32-bit hash can dominate the
// false positive rate of the underlying filter. At 10 bits/key setting, the
// inflection point is about 40 million keys, so 32-bit hash is a bad idea
// with 10s of millions of keys or more.
//
// Despite accepting a 64-bit hash, this implementation uses 32-bit fastrange
// to pick a cache line, which can be faster than 64-bit in some cases.
// This only hurts accuracy as you get into 10s of GB for a single filter,
// and accuracy abruptly breaks down at 256GB (2^32 cache lines). Switch to
// 64-bit fastrange if you need filters so big. ;)
//
// Using only a 32-bit input hash within each cache line has negligible
// impact for any reasonable cache line / bucket size, for arbitrary filter
// size, and potentially saves intermediate data size in some cases vs.
// tracking full 64 bits. (Even in an implementation using 64-bit arithmetic
// to generate indices, I might do the same, as a single multiplication
// suffices to generate a sufficiently mixed 64 bits from 32 bits.)
//
// This implementation is currently tied to Intel cache line size, 64 bytes ==
// 512 bits. If there's sufficient demand for other cache line sizes, this is
// a pretty good implementation to extend, but slight performance enhancements
// are possible with an alternate implementation (probably not very compatible
// with SIMD):
// (1) Use rotation in addition to multiplication for remixing
// (like murmur hash). (Using multiplication alone *slightly* hurts accuracy
// because lower bits never depend on original upper bits.)
// (2) Extract more than one bit index from each re-mix. (Only if rotation
// or similar is part of remix, because otherwise you're making the
// multiplication-only problem worse.)
// (3) Re-mix full 64 bit hash, to get maximum number of bit indices per
// re-mix.
//
class FastLocalBloomImpl {
 public:
  // NOTE: this has only been validated to enough accuracy for producing
  // reasonable warnings / user feedback, not for making functional decisions.
  static double EstimatedFpRate(size_t keys, size_t bytes, int num_probes,
                                int hash_bits) {
    return BloomMath::IndependentProbabilitySum(
        BloomMath::CacheLocalFpRate(8.0 * bytes / keys, num_probes,
                                    /*cache line bits*/ 512),
        BloomMath::FingerprintFpRate(keys, hash_bits));
  }
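  // Illustrative (computed from the estimate above, not measured): 40M keys
  // in a 50MB filter (10 bits/key) with 6 probes and a 32-bit hash gives
  //   EstimatedFpRate(40000000, 50000000, 6, 32) ~= 0.0095 + 0.0093 ~= 0.019,
  // about double the ~0.0095 estimated with hash_bits=64, consistent with
  // the "inflection point" noted in the class comment above.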

  static inline int ChooseNumProbes(int millibits_per_key) {
    // Since this implementation can (with AVX2) make up to 8 probes
    // for the same cost, we pick the most accurate num_probes, based
    // on actual tests of the implementation. Note that for higher
    // bits/key, the best choice for cache-local Bloom can be notably
    // smaller than standard bloom, e.g. 9 instead of 11 @ 16 b/k.
    if (millibits_per_key <= 2080) {
      return 1;
    } else if (millibits_per_key <= 3580) {
      return 2;
    } else if (millibits_per_key <= 5100) {
      return 3;
    } else if (millibits_per_key <= 6640) {
      return 4;
    } else if (millibits_per_key <= 8300) {
      return 5;
    } else if (millibits_per_key <= 10070) {
      return 6;
    } else if (millibits_per_key <= 11720) {
      return 7;
    } else if (millibits_per_key <= 14001) {
      // Would be something like <= 13800 but sacrificing *slightly* for
      // more settings using <= 8 probes.
      return 8;
    } else if (millibits_per_key <= 16050) {
      return 9;
    } else if (millibits_per_key <= 18300) {
      return 10;
    } else if (millibits_per_key <= 22001) {
      return 11;
    } else if (millibits_per_key <= 25501) {
      return 12;
    } else if (millibits_per_key > 50000) {
      // Top out at 24 probes (three sets of 8)
      return 24;
    } else {
      // Roughly optimal choices for remaining range
      // e.g.
      // 28000 -> 12, 28001 -> 13
      // 50000 -> 23, 50001 -> 24
      return (millibits_per_key - 1) / 2000 - 1;
    }
  }
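  // Checking the fallback formula against the examples above:
  //   (28000 - 1) / 2000 - 1 == 13 - 1 == 12
  //   (28001 - 1) / 2000 - 1 == 14 - 1 == 13
  // matching "28000 -> 12, 28001 -> 13" under integer division.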

  static inline void AddHash(uint32_t h1, uint32_t h2, uint32_t len_bytes,
                             int num_probes, char *data) {
    uint32_t bytes_to_cache_line = FastRange32(h1, len_bytes >> 6) << 6;
    AddHashPrepared(h2, num_probes, data + bytes_to_cache_line);
  }

  static inline void AddHashPrepared(uint32_t h2, int num_probes,
                                     char *data_at_cache_line) {
    uint32_t h = h2;
    for (int i = 0; i < num_probes; ++i, h *= uint32_t{0x9e3779b9}) {
      // 9-bit address within 512 bit cache line
      int bitpos = h >> (32 - 9);
      data_at_cache_line[bitpos >> 3] |= (uint8_t{1} << (bitpos & 7));
    }
  }
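
  // Editor's worked example (illustrative numbers, not from the original
  // sources): with h2 = 0x9e3779b9, the first probe uses the top 9 bits,
  // bitpos = 0x9e3779b9 >> 23 = 316, i.e. bit 4 of byte 39 within the
  // 64-byte cache line; h is then multiplied by 0x9e3779b9 (odd, hence
  // invertible mod 2^32) to derive the next probe.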

  static inline void PrepareHash(uint32_t h1, uint32_t len_bytes,
                                 const char *data,
                                 uint32_t /*out*/ *byte_offset) {
    uint32_t bytes_to_cache_line = FastRange32(h1, len_bytes >> 6) << 6;
    PREFETCH(data + bytes_to_cache_line, 0 /* rw */, 1 /* locality */);
    PREFETCH(data + bytes_to_cache_line + 63, 0 /* rw */, 1 /* locality */);
    *byte_offset = bytes_to_cache_line;
  }
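
  // Editor's sketch (hypothetical caller, not from the original sources):
  // the Prepare/Prepared split lets a batched reader issue the cache-line
  // prefetch early and overlap the memory latency with other work:
  //
  //   uint32_t byte_offset;
  //   FastLocalBloomImpl::PrepareHash(h1, len_bytes, data, &byte_offset);
  //   /* ... hash other keys while the line is being fetched ... */
  //   bool may_match = FastLocalBloomImpl::HashMayMatchPrepared(
  //       h2, num_probes, data + byte_offset);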

  static inline bool HashMayMatch(uint32_t h1, uint32_t h2, uint32_t len_bytes,
                                  int num_probes, const char *data) {
    uint32_t bytes_to_cache_line = FastRange32(h1, len_bytes >> 6) << 6;
    return HashMayMatchPrepared(h2, num_probes, data + bytes_to_cache_line);
  }

  static inline bool HashMayMatchPrepared(uint32_t h2, int num_probes,
                                          const char *data_at_cache_line) {
    uint32_t h = h2;
#ifdef __AVX2__
    int rem_probes = num_probes;

    // NOTE: For better performance for num_probes in {1, 2, 9, 10, 17, 18,
    // etc.} one can insert specialized code for rem_probes <= 2, bypassing
    // the SIMD code in those cases. There is a detectable but minor overhead
    // applied to other values of num_probes (when not statically determined),
    // but smoother performance curve vs. num_probes. But for now, when
    // in doubt, don't add unnecessary code.

    // Powers of 32-bit golden ratio, mod 2**32.
    const __m256i multipliers =
        _mm256_setr_epi32(0x00000001, 0x9e3779b9, 0xe35e67b1, 0x734297e9,
                          0x35fbe861, 0xdeb7c719, 0x448b211, 0x3459b749);

    for (;;) {
      // Eight copies of hash
      __m256i hash_vector = _mm256_set1_epi32(h);

      // Same effect as repeated multiplication by 0x9e3779b9 thanks to
      // associativity of multiplication.
      hash_vector = _mm256_mullo_epi32(hash_vector, multipliers);

      // Now the top 9 bits of each of the eight 32-bit values in
      // hash_vector are bit addresses for probes within the cache line.
      // While the platform-independent code uses byte addressing (6 bits
      // to pick a byte + 3 bits to pick a bit within a byte), here we work
      // with 32-bit words (4 bits to pick a word + 5 bits to pick a bit
      // within a word) because that works well with AVX2 and is equivalent
      // under little-endian.

      // Shift each right by 28 bits to get 4-bit word addresses.
      const __m256i word_addresses = _mm256_srli_epi32(hash_vector, 28);

      // Gather 32-bit values spread over 512 bits by 4-bit address. In
      // essence, we are dereferencing eight pointers within the cache
      // line.
      //
      // Option 1: AVX2 gather (seems to be a little slow - understandable)
      // const __m256i value_vector =
      //     _mm256_i32gather_epi32(static_cast<const int *>(data_at_cache_line),
      //                            word_addresses,
      //                            /*bytes / i32*/ 4);
      // END Option 1
      // Potentially unaligned as we're not *always* cache-aligned -> loadu
      const __m256i *mm_data =
          reinterpret_cast<const __m256i *>(data_at_cache_line);
      __m256i lower = _mm256_loadu_si256(mm_data);
      __m256i upper = _mm256_loadu_si256(mm_data + 1);
      // Option 2: AVX512VL permute hack
      // Only negligibly faster than Option 3, so not yet worth supporting
      // const __m256i value_vector =
      //     _mm256_permutex2var_epi32(lower, word_addresses, upper);
      // END Option 2
      // Option 3: AVX2 permute+blend hack
      // Use lowest three bits to order probing values, as if all from same
      // 256 bit piece.
      lower = _mm256_permutevar8x32_epi32(lower, word_addresses);
      upper = _mm256_permutevar8x32_epi32(upper, word_addresses);
      // Just top 1 bit of address, to select between lower and upper.
      const __m256i upper_lower_selector = _mm256_srai_epi32(hash_vector, 31);
      // Finally: the next 8 probed 32-bit values, in probing sequence order.
      const __m256i value_vector =
          _mm256_blendv_epi8(lower, upper, upper_lower_selector);
      // END Option 3

      // We might not need to probe all 8, so build a mask for selecting only
      // what we need. (The k_selector(s) could be pre-computed but that
      // doesn't seem to make a noticeable performance difference.)
      const __m256i zero_to_seven = _mm256_setr_epi32(0, 1, 2, 3, 4, 5, 6, 7);
      // Subtract rem_probes from each of those constants
      __m256i k_selector =
          _mm256_sub_epi32(zero_to_seven, _mm256_set1_epi32(rem_probes));
      // Negative after subtract -> use/select
      // Keep only high bit (logical shift right each by 31).
      k_selector = _mm256_srli_epi32(k_selector, 31);

      // Strip off the 4 bit word address (shift left)
      __m256i bit_addresses = _mm256_slli_epi32(hash_vector, 4);
      // And keep only 5-bit (32 - 27) bit-within-32-bit-word addresses.
      bit_addresses = _mm256_srli_epi32(bit_addresses, 27);
      // Build a bit mask
      const __m256i bit_mask = _mm256_sllv_epi32(k_selector, bit_addresses);

      // Like ((~value_vector) & bit_mask) == 0
      bool match = _mm256_testc_si256(value_vector, bit_mask) != 0;

      // This check first so that it's easy for branch predictor to optimize
      // num_probes <= 8 case, making it free of unpredictable branches.
      if (rem_probes <= 8) {
        return match;
      } else if (!match) {
        return false;
      }
      // otherwise
      // Need another iteration. 0xab25f4c1 == golden ratio to the 8th power
      h *= 0xab25f4c1;
      rem_probes -= 8;
    }
#else
    for (int i = 0; i < num_probes; ++i, h *= uint32_t{0x9e3779b9}) {
      // 9-bit address within 512 bit cache line
      int bitpos = h >> (32 - 9);
      if ((data_at_cache_line[bitpos >> 3] & (char(1) << (bitpos & 7))) == 0) {
        return false;
      }
    }
    return true;
#endif
  }
};
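
// Editor's illustrative sketch (hypothetical usage, not part of the RocksDB
// API; the h1/h2 split of a 64-bit hash is an assumption for illustration):
//
//   std::string filter(64, char{0});  // one 512-bit cache line
//   uint64_t hash64 = /* 64-bit hash of the key */;
//   uint32_t h1 = static_cast<uint32_t>(hash64);        // picks a cache line
//   uint32_t h2 = static_cast<uint32_t>(hash64 >> 32);  // drives the probes
//   FastLocalBloomImpl::AddHash(h1, h2, /*len_bytes=*/64, /*num_probes=*/6,
//                               &filter[0]);
//   bool may_match = FastLocalBloomImpl::HashMayMatch(
//       h1, h2, /*len_bytes=*/64, /*num_probes=*/6, filter.data());
//   // may_match is true for every added key; false positives are possible.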

// A legacy Bloom filter implementation with no locality of probes (slow).
// It uses double hashing to generate a sequence of hash values.
// Asymptotic analysis is in [Kirsch,Mitzenmacher 2006], but known to have
// subtle accuracy flaws for practical sizes [Dillinger,Manolios 2004].
//
// DO NOT REUSE
//
class LegacyNoLocalityBloomImpl {
 public:
  static inline int ChooseNumProbes(int bits_per_key) {
    // We intentionally round down to reduce probing cost a little bit
    int num_probes = static_cast<int>(bits_per_key * 0.69);  // 0.69 =~ ln(2)
    if (num_probes < 1) num_probes = 1;
    if (num_probes > 30) num_probes = 30;
    return num_probes;
  }
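
  // Editor's worked example (not from the original sources): for the common
  // bits_per_key = 10, 10 * 0.69 = 6.9 rounds down to 6 probes; the clamps
  // only matter below 2 or above 43 bits per key.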

  static inline void AddHash(uint32_t h, uint32_t total_bits, int num_probes,
                             char *data) {
    const uint32_t delta = (h >> 17) | (h << 15);  // Rotate right 17 bits
    for (int i = 0; i < num_probes; i++) {
      const uint32_t bitpos = h % total_bits;
      data[bitpos / 8] |= (1 << (bitpos % 8));
      h += delta;
    }
  }

  static inline bool HashMayMatch(uint32_t h, uint32_t total_bits,
                                  int num_probes, const char *data) {
    const uint32_t delta = (h >> 17) | (h << 15);  // Rotate right 17 bits
    for (int i = 0; i < num_probes; i++) {
      const uint32_t bitpos = h % total_bits;
      if ((data[bitpos / 8] & (1 << (bitpos % 8))) == 0) {
        return false;
      }
      h += delta;
    }
    return true;
  }
};
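
// Editor's worked example (hypothetical numbers, not from the original
// sources): the double-hashing loop above probes bit positions
// h % total_bits, (h + delta) % total_bits, (h + 2 * delta) % total_bits, ...
// For illustration, h = 100, delta = 7, total_bits = 50, num_probes = 3
// probes bits 0, 7, and 14.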

// A legacy Bloom filter implementation with probes local to a single
// cache line (fast). Because SST files might be transported between
// platforms, the cache line size is a parameter rather than hard coded.
// (But if specified as a constant parameter, an optimizing compiler
// should take advantage of that.)
//
// When ExtraRotates is false, this implementation is notably deficient in
// accuracy. Specifically, it uses double hashing with a 1/512 chance of the
// increment being zero (when cache line size is 512 bits). Thus, there's a
// 1/512 chance of probing only one index, which we'd expect to incur about
// a 1/2 * 1/512 (roughly 0.1% absolute) FP rate penalty. More detail at
// https://github.com/facebook/rocksdb/issues/4120
//
// DO NOT REUSE
//
template <bool ExtraRotates>
class LegacyLocalityBloomImpl {
 private:
  static inline uint32_t GetLine(uint32_t h, uint32_t num_lines) {
    uint32_t offset_h = ExtraRotates ? (h >> 11) | (h << 21) : h;
    return offset_h % num_lines;
  }
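
  // Editor's note (an interpretation, not from the original sources): the
  // rotate decorrelates the bits used to pick the cache line from the bits
  // later used for in-line probing, e.g. h = 0x00000800 becomes
  // (h >> 11) | (h << 21) = 0x00000001 before the % num_lines.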

 public:
  // NOTE: this has only been validated to enough accuracy for producing
  // reasonable warnings / user feedback, not for making functional decisions.
  static double EstimatedFpRate(size_t keys, size_t bytes, int num_probes) {
    double bits_per_key = 8.0 * bytes / keys;
    double filter_rate = BloomMath::CacheLocalFpRate(bits_per_key, num_probes,
                                                     /*cache line bits*/ 512);
    if (!ExtraRotates) {
      // Good estimate of impact of flaw in index computation.
      // Adds roughly 0.002 around 50 bits/key and 0.001 around 100 bits/key.
      // The + 22 shifts it nicely to fit for lower bits/key.
      filter_rate += 0.1 / (bits_per_key * 0.75 + 22);
    } else {
      // Not yet validated
      assert(false);
    }
    // Always uses 32-bit hash
    double fingerprint_rate = BloomMath::FingerprintFpRate(keys, 32);
    return BloomMath::IndependentProbabilitySum(filter_rate, fingerprint_rate);
  }
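
  // Editor's note (an assumption about BloomMath, not verified against its
  // implementation): IndependentProbabilitySum presumably combines the two
  // rates as for independent events,
  //   p = filter_rate + fingerprint_rate - filter_rate * fingerprint_rate,
  // i.e. an FP occurs on a filter collision or a 32-bit hash collision.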

  static inline void AddHash(uint32_t h, uint32_t num_lines, int num_probes,
                             char *data, int log2_cache_line_bytes) {
    const int log2_cache_line_bits = log2_cache_line_bytes + 3;

    char *data_at_offset =
        data + (GetLine(h, num_lines) << log2_cache_line_bytes);
    const uint32_t delta = (h >> 17) | (h << 15);
    for (int i = 0; i < num_probes; ++i) {
      // Mask to bit-within-cache-line address
      const uint32_t bitpos = h & ((1 << log2_cache_line_bits) - 1);
      data_at_offset[bitpos / 8] |= (1 << (bitpos % 8));
      if (ExtraRotates) {
        h = (h >> log2_cache_line_bits) | (h << (32 - log2_cache_line_bits));
      }
      h += delta;
    }
  }

  static inline void PrepareHashMayMatch(uint32_t h, uint32_t num_lines,
                                         const char *data,
                                         uint32_t /*out*/ *byte_offset,
                                         int log2_cache_line_bytes) {
    uint32_t b = GetLine(h, num_lines) << log2_cache_line_bytes;
    PREFETCH(data + b, 0 /* rw */, 1 /* locality */);
    PREFETCH(data + b + ((1 << log2_cache_line_bytes) - 1), 0 /* rw */,
             1 /* locality */);
    *byte_offset = b;
  }

  static inline bool HashMayMatch(uint32_t h, uint32_t num_lines,
                                  int num_probes, const char *data,
                                  int log2_cache_line_bytes) {
    uint32_t b = GetLine(h, num_lines) << log2_cache_line_bytes;
    return HashMayMatchPrepared(h, num_probes, data + b,
                                log2_cache_line_bytes);
  }

  static inline bool HashMayMatchPrepared(uint32_t h, int num_probes,
                                          const char *data_at_offset,
                                          int log2_cache_line_bytes) {
    const int log2_cache_line_bits = log2_cache_line_bytes + 3;

    const uint32_t delta = (h >> 17) | (h << 15);
    for (int i = 0; i < num_probes; ++i) {
      // Mask to bit-within-cache-line address
      const uint32_t bitpos = h & ((1 << log2_cache_line_bits) - 1);
      if (((data_at_offset[bitpos / 8]) & (1 << (bitpos % 8))) == 0) {
        return false;
      }
      if (ExtraRotates) {
        h = (h >> log2_cache_line_bits) | (h << (32 - log2_cache_line_bits));
      }
      h += delta;
    }
    return true;
  }
};
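
// Editor's illustrative sketch (hypothetical usage, not part of the RocksDB
// API), using 64-byte cache lines (log2_cache_line_bytes = 6):
//
//   std::string filter(2 * 64, char{0});  // num_lines = 2
//   uint32_t h = /* 32-bit hash of the key */;
//   LegacyLocalityBloomImpl<false>::AddHash(
//       h, /*num_lines=*/2, /*num_probes=*/6, &filter[0],
//       /*log2_cache_line_bytes=*/6);
//   bool may_match = LegacyLocalityBloomImpl<false>::HashMayMatch(
//       h, /*num_lines=*/2, /*num_probes=*/6, filter.data(),
//       /*log2_cache_line_bytes=*/6);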

}  // namespace ROCKSDB_NAMESPACE