Mirror of https://github.com/facebook/rocksdb.git
synced 2024-11-25 22:44:05 +00:00
a8b3b9a20c
Summary: This change only affects non-schema-critical aspects of the production candidate Ribbon filter. Specifically, it refines the choice of internal configuration parameters based on inputs. The changes are minor enough that the schema tests in bloom_test, some of which depend on this, are unaffected. There are also some minor optimizations and refactorings.

This would be a schema change for "smash" Ribbon, to fix some known issues with small filters, but "smash" Ribbon is not accessible in public APIs. Unit test CompactnessAndBacktrackAndFpRate updated to test small and medium-large filters. Run with --thoroughness=100 or so for much better detection power (not appropriate for continuous regression testing).

Homogeneous Ribbon:
This change adds internally a Ribbon filter variant we call Homogeneous Ribbon, in collaboration with Stefan Walzer. The expected "result" value for every key is zero, instead of being computed from a hash. Entropy for queries not to be false positives comes from free variables ("overhead") in the solution structure, which are populated pseudorandomly. Construction is slightly faster for not tracking result values, and never fails. Instead, FP rate can jump up whenever and wherever entries are packed too tightly. For small structures, we can choose overhead to make this FP rate jump unlikely, as seen in updated unit test CompactnessAndBacktrackAndFpRate.

Unlike standard Ribbon, Homogeneous Ribbon seems to scale to an arbitrary number of keys when accepting an FP rate penalty for small pockets of high FP rate in the structure. For example, 64-bit ribbon with 8 solution columns and 10% allocated space overhead for slots seems to achieve about 10.5% space overhead vs. the information-theoretic minimum based on its observed FP rate with expected pockets of degradation. (FP rate is close to 1/256.) If targeting a higher FP rate with fewer solution columns, Homogeneous Ribbon can be even more space efficient, because the penalty from degradation is relatively smaller. If targeting a lower FP rate, Homogeneous Ribbon is less space efficient, as more allocated overhead is needed to keep the FP rate impact of degradation relatively under control. The new OptimizeHomogAtScale tool in ribbon_test helps to find these optimal allocation overheads for different numbers of solution columns and Ribbon widths, with 128-bit Ribbon apparently cutting space overheads in half vs. 64-bit.

Other misc item specifics:
* Ribbon APIs in util/ribbon_config.h now provide configuration data for not just a 5% construction failure rate (95% success), but also 50% and 0.1%.
* Note that the Ribbon structure does not exhibit "threshold" behavior as the standard Xor filter does, so there is a roughly fixed space penalty to cut the construction failure rate in half. Thus, there isn't really an "almost sure" setting.
* Although we can extrapolate settings for large filters, we don't have a good formula for configuring smaller filters (< 2^17 slots or so), and efforts to summarize with a formula have failed. Thus, small data is hard-coded from the updated FindOccupancy tool.
* Enhances ApproximateNumEntries for public API Ribbon using more precise data (new API GetNumToAdd), thus a more accurate but not perfect reversal of CalculateSpace. (bloom_test updated to expect the greater precision)
* Move EndianSwapValue from coding.h to coding_lean.h to keep Ribbon code easily transferable from RocksDB
* Add some missing 'const' to member functions
* Small optimization to 128-bit BitParity
* Small refactoring of BandingStorage in ribbon_alg.h to support Homogeneous Ribbon
* CompactnessAndBacktrackAndFpRate now has an "expand" test: on construction failure, a possible alternative to re-seeding hash functions is simply to increase the number of slots (allocated space overhead) and try again with essentially the same hash values. (Start locations will be different roundings of the same scaled hash values--because fastrange, not mod.) This seems to be as effective or more effective than re-seeding, as long as we increase the number of slots (m) by roughly m += m/w, where w is the Ribbon width. This way, there is effectively an expansion by one slot for each ribbon-width window in the banding. (This approach assumes that getting "bad data" from your hash function is as unlikely as it naturally should be, e.g. no adversary.) A sketch of this idea appears after this summary.
* 32-bit and 16-bit Ribbon configurations are added to ribbon_test for understanding their behavior, e.g. with FindOccupancy. They are not considered useful at this time and are not tested with CompactnessAndBacktrackAndFpRate.

Pull Request resolved: https://github.com/facebook/rocksdb/pull/7879

Test Plan: unit test updates included

Reviewed By: jay-zhuang

Differential Revision: D26371245

Pulled By: pdillinger

fbshipit-source-id: da6600d90a3785b99ad17a88b2a3027710b4ea3a
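A minimal sketch of the "expand" idea from the summary, assuming a hypothetical banding interface (the names Banding, AddRange, and BuildWithExpansion are illustrative, not the actual RocksDB API):

    // On construction failure, grow the slot count m by roughly m / w
    // (one slot per ribbon-width window) and retry with the same hash
    // values, instead of re-seeding the hash function.
    template <typename Banding, typename HashIter>
    bool BuildWithExpansion(HashIter begin, HashIter end, size_t* m,
                            unsigned ribbon_width, int max_attempts) {
      for (int i = 0; i < max_attempts; ++i) {
        Banding banding(*m);  // banding over *m slots
        if (banding.AddRange(begin, end)) {
          return true;  // success; ready for back-substitution
        }
        *m += *m / ribbon_width;  // expand allocated space and retry
      }
      return false;
    }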
299 lines
8.1 KiB
C++
// Copyright (c) Facebook, Inc. and its affiliates. All Rights Reserved.
// This source code is licensed under both the GPLv2 (found in the
// COPYING file in the root directory) and Apache 2.0 License
// (found in the LICENSE.Apache file in the root directory).

#pragma once

#include "util/coding_lean.h"
#include "util/math.h"

#ifdef TEST_UINT128_COMPAT
#undef HAVE_UINT128_EXTENSION
#endif

namespace ROCKSDB_NAMESPACE {

// Unsigned128 is a 128 bit value supporting (at least) bitwise operators,
// shifts, and comparisons. __uint128_t is not always available.

#ifdef HAVE_UINT128_EXTENSION
using Unsigned128 = __uint128_t;
#else
struct Unsigned128 {
  uint64_t lo;
  uint64_t hi;

  inline Unsigned128() {
    static_assert(sizeof(Unsigned128) == 2 * sizeof(uint64_t),
                  "unexpected overhead in representation");
    lo = 0;
    hi = 0;
  }

  inline Unsigned128(uint64_t lower) {
    lo = lower;
    hi = 0;
  }

  inline Unsigned128(uint64_t lower, uint64_t upper) {
    lo = lower;
    hi = upper;
  }

  explicit operator uint64_t() { return lo; }

  explicit operator uint32_t() { return static_cast<uint32_t>(lo); }

  explicit operator uint16_t() { return static_cast<uint16_t>(lo); }

  explicit operator uint8_t() { return static_cast<uint8_t>(lo); }
};
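
// Note: the explicit narrowing conversions above return the low bits of
// the value, matching the truncating behavior of __uint128_t conversions.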

inline Unsigned128 operator<<(const Unsigned128& lhs, unsigned shift) {
  shift &= 127;
  Unsigned128 rv;
  if (shift >= 64) {
    rv.lo = 0;
    rv.hi = lhs.lo << (shift & 63);
  } else {
    uint64_t tmp = lhs.lo;
    rv.lo = tmp << shift;
    // Ensure shift==0 shifts away everything. (This avoids another
    // conditional branch on shift == 0.)
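    // (A single tmp >> (64 - shift) would be undefined behavior when
    // shift == 0; shifting in two steps keeps each shift count within
    // [0, 63] and still yields 0 for shift == 0.)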
    tmp = tmp >> 1 >> (63 - shift);
    rv.hi = tmp | (lhs.hi << shift);
  }
  return rv;
}

inline Unsigned128& operator<<=(Unsigned128& lhs, unsigned shift) {
  lhs = lhs << shift;
  return lhs;
}

inline Unsigned128 operator>>(const Unsigned128& lhs, unsigned shift) {
  shift &= 127;
  Unsigned128 rv;
  if (shift >= 64) {
    rv.hi = 0;
    rv.lo = lhs.hi >> (shift & 63);
  } else {
    uint64_t tmp = lhs.hi;
    rv.hi = tmp >> shift;
    // Ensure shift==0 shifts away everything
    tmp = tmp << 1 << (63 - shift);
    rv.lo = tmp | (lhs.lo >> shift);
  }
  return rv;
}

inline Unsigned128& operator>>=(Unsigned128& lhs, unsigned shift) {
  lhs = lhs >> shift;
  return lhs;
}

inline Unsigned128 operator&(const Unsigned128& lhs, const Unsigned128& rhs) {
  return Unsigned128(lhs.lo & rhs.lo, lhs.hi & rhs.hi);
}

inline Unsigned128& operator&=(Unsigned128& lhs, const Unsigned128& rhs) {
  lhs = lhs & rhs;
  return lhs;
}

inline Unsigned128 operator|(const Unsigned128& lhs, const Unsigned128& rhs) {
  return Unsigned128(lhs.lo | rhs.lo, lhs.hi | rhs.hi);
}

inline Unsigned128& operator|=(Unsigned128& lhs, const Unsigned128& rhs) {
  lhs = lhs | rhs;
  return lhs;
}

inline Unsigned128 operator^(const Unsigned128& lhs, const Unsigned128& rhs) {
  return Unsigned128(lhs.lo ^ rhs.lo, lhs.hi ^ rhs.hi);
}

inline Unsigned128& operator^=(Unsigned128& lhs, const Unsigned128& rhs) {
  lhs = lhs ^ rhs;
  return lhs;
}

inline Unsigned128 operator~(const Unsigned128& v) {
  return Unsigned128(~v.lo, ~v.hi);
}

inline bool operator==(const Unsigned128& lhs, const Unsigned128& rhs) {
  return lhs.lo == rhs.lo && lhs.hi == rhs.hi;
}

inline bool operator!=(const Unsigned128& lhs, const Unsigned128& rhs) {
  return lhs.lo != rhs.lo || lhs.hi != rhs.hi;
}

inline bool operator>(const Unsigned128& lhs, const Unsigned128& rhs) {
  return lhs.hi > rhs.hi || (lhs.hi == rhs.hi && lhs.lo > rhs.lo);
}

inline bool operator<(const Unsigned128& lhs, const Unsigned128& rhs) {
  return lhs.hi < rhs.hi || (lhs.hi == rhs.hi && lhs.lo < rhs.lo);
}

inline bool operator>=(const Unsigned128& lhs, const Unsigned128& rhs) {
  return lhs.hi > rhs.hi || (lhs.hi == rhs.hi && lhs.lo >= rhs.lo);
}

inline bool operator<=(const Unsigned128& lhs, const Unsigned128& rhs) {
  return lhs.hi < rhs.hi || (lhs.hi == rhs.hi && lhs.lo <= rhs.lo);
}
#endif

inline uint64_t Lower64of128(Unsigned128 v) {
#ifdef HAVE_UINT128_EXTENSION
  return static_cast<uint64_t>(v);
#else
  return v.lo;
#endif
}

inline uint64_t Upper64of128(Unsigned128 v) {
#ifdef HAVE_UINT128_EXTENSION
  return static_cast<uint64_t>(v >> 64);
#else
  return v.hi;
#endif
}

// This generally compiles down to a single fast instruction on 64-bit.
// This doesn't really make sense as operator* because it's not a
// general 128x128 multiply and provides more output than 64x64 multiply.
inline Unsigned128 Multiply64to128(uint64_t a, uint64_t b) {
#ifdef HAVE_UINT128_EXTENSION
  return Unsigned128{a} * Unsigned128{b};
#else
  // Full decomposition
  // NOTE: GCC seems to fully understand this code as 64-bit x 64-bit
  // -> 128-bit multiplication and optimize it appropriately.
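  //
  // Schoolbook decomposition with 32-bit digits, a = a_hi * 2^32 + a_lo:
  //   a * b = a_hi * b_hi * 2^64 + (a_hi * b_lo + a_lo * b_hi) * 2^32
  //           + a_lo * b_lo
  // The cross terms are accumulated in two steps (see "Avoid overflow"
  // below) so that every intermediate sum fits in 64 bits.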
  uint64_t tmp = uint64_t{b & 0xffffFFFF} * uint64_t{a & 0xffffFFFF};
  uint64_t lower = tmp & 0xffffFFFF;
  tmp >>= 32;
  tmp += uint64_t{b & 0xffffFFFF} * uint64_t{a >> 32};
  // Avoid overflow: first add lower 32 of tmp2, and later upper 32
  uint64_t tmp2 = uint64_t{b >> 32} * uint64_t{a & 0xffffFFFF};
  tmp += tmp2 & 0xffffFFFF;
  lower |= tmp << 32;
  tmp >>= 32;
  tmp += tmp2 >> 32;
  tmp += uint64_t{b >> 32} * uint64_t{a >> 32};
  return Unsigned128(lower, tmp);
#endif
}
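
// For example, Multiply64to128(uint64_t{1} << 63, 2) yields 2^64:
// Upper64of128 of the result is 1 and Lower64of128 is 0.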

template <>
inline int FloorLog2(Unsigned128 v) {
  if (Upper64of128(v) == 0) {
    return FloorLog2(Lower64of128(v));
  } else {
    return FloorLog2(Upper64of128(v)) + 64;
  }
}

template <>
inline int CountTrailingZeroBits(Unsigned128 v) {
  if (Lower64of128(v) != 0) {
    return CountTrailingZeroBits(Lower64of128(v));
  } else {
    return CountTrailingZeroBits(Upper64of128(v)) + 64;
  }
}

template <>
inline int BitsSetToOne(Unsigned128 v) {
  return BitsSetToOne(Lower64of128(v)) + BitsSetToOne(Upper64of128(v));
}

template <>
inline int BitParity(Unsigned128 v) {
  return BitParity(Lower64of128(v) ^ Upper64of128(v));
}
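
// Example: for Unsigned128 v = Unsigned128{1} << 64 (i.e. 2^64):
// FloorLog2(v) == 64, CountTrailingZeroBits(v) == 64,
// BitsSetToOne(v) == 1, and BitParity(v) == 1.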

template <typename T>
struct IsUnsignedUpTo128
    : std::integral_constant<bool, std::is_unsigned<T>::value ||
                                       std::is_same<T, Unsigned128>::value> {};
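
// Usage sketch (hypothetical function, not part of this header):
// constraining a template to unsigned integer types, including
// Unsigned128:
//
//   template <typename T>
//   int TrailingZeros(T v) {
//     static_assert(IsUnsignedUpTo128<T>::value, "requires unsigned type");
//     return CountTrailingZeroBits(v);
//   }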

inline void EncodeFixed128(char* dst, Unsigned128 value) {
  EncodeFixed64(dst, Lower64of128(value));
  EncodeFixed64(dst + 8, Upper64of128(value));
}

inline Unsigned128 DecodeFixed128(const char* ptr) {
  Unsigned128 rv = DecodeFixed64(ptr + 8);
  return (rv << 64) | DecodeFixed64(ptr);
}
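
// Round-trip sketch: EncodeFixed128 writes the lower 64 bits to the first
// 8 bytes and the upper 64 bits to the next 8, so for any Unsigned128 v:
//
//   char buf[16];
//   EncodeFixed128(buf, v);
//   assert(DecodeFixed128(buf) == v);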

// A version of EncodeFixed* for generic algorithms. Likely to be used
// with Unsigned128, so lives here for now.
template <typename T>
inline void EncodeFixedGeneric(char* /*dst*/, T /*value*/) {
  // Unfortunately, GCC does not appear to optimize this simple code down
  // to a trivial load on Intel:
  //
  // T ret_val = 0;
  // for (size_t i = 0; i < sizeof(T); ++i) {
  //   ret_val |= (static_cast<T>(static_cast<unsigned char>(ptr[i])) << (8 *
  //   i));
  // }
  // return ret_val;
  //
  // But does unroll the loop, and does optimize manually unrolled version
  // for specific sizes down to a trivial load. I have no idea why it doesn't
  // do both on this code.

  // So instead, we rely on specializations
  static_assert(sizeof(T) == 0, "No specialization provided for this type");
}

template <>
inline void EncodeFixedGeneric(char* dst, uint16_t value) {
  return EncodeFixed16(dst, value);
}
template <>
inline void EncodeFixedGeneric(char* dst, uint32_t value) {
  return EncodeFixed32(dst, value);
}
template <>
inline void EncodeFixedGeneric(char* dst, uint64_t value) {
  return EncodeFixed64(dst, value);
}
template <>
inline void EncodeFixedGeneric(char* dst, Unsigned128 value) {
  return EncodeFixed128(dst, value);
}

// A version of DecodeFixed* for generic algorithms.
template <typename T>
inline T DecodeFixedGeneric(const char* /*dst*/) {
  static_assert(sizeof(T) == 0, "No specialization provided for this type");
}

template <>
inline uint16_t DecodeFixedGeneric(const char* dst) {
  return DecodeFixed16(dst);
}
template <>
inline uint32_t DecodeFixedGeneric(const char* dst) {
  return DecodeFixed32(dst);
}
template <>
inline uint64_t DecodeFixedGeneric(const char* dst) {
  return DecodeFixed64(dst);
}
template <>
inline Unsigned128 DecodeFixedGeneric(const char* dst) {
  return DecodeFixed128(dst);
}
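
// Usage sketch (hypothetical helper, not part of this header): appending
// any supported fixed-width type to a string, generically:
//
//   template <typename T>
//   void PutFixedGeneric(std::string* dst, T value) {
//     size_t old_size = dst->size();
//     dst->resize(old_size + sizeof(T));
//     EncodeFixedGeneric(&(*dst)[old_size], value);
//   }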

}  // namespace ROCKSDB_NAMESPACE