mirror of https://github.com/facebook/rocksdb.git
Commit 0050a73a4f
Summary: This change standardizes on a new 16-byte cache key format for block cache (including compressed and secondary) and persistent cache (but not table cache and row cache). The goal is a really fast cache key with practically ideal stability and uniqueness properties without external dependencies (e.g. from FileSystem). A fixed key size of 16 bytes should enable future optimizations to the concurrent hash table for block cache, which is a heavy CPU user / bottleneck, but there appears to be measurable performance improvement even with no changes to LRUCache.

This change replaces a lot of disjointed and ugly code handling cache keys with calls to a simple, clean new internal API (cache_key.h). (Preserving the old cache key logic under an option would be very ugly and would likely negate the performance gain of the new approach. Complete replacement carries some inherent risk, but I think that's acceptable with sufficient analysis and testing.)

The scheme for encoding new cache keys is complicated but explained in cache_key.cc.

Also: EndianSwapValue is moved to math.h to be next to other bit operations. (This explains some new `#include "math.h"` lines.) A ReverseBits operation is added, and unit tests are added to hash_test for both.

Fixes https://github.com/facebook/rocksdb/issues/7405 (presuming a root cause)

Pull Request resolved: https://github.com/facebook/rocksdb/pull/9126

Test Plan:

### Basic correctness

Several tests needed updates to work with the new functionality, mostly because we are no longer relying on the filesystem for stable cache keys, so table builders & readers need more context info to agree on cache keys. This functionality is so core that a huge number of existing tests exercise it.

### Performance

Create a db with

```
TEST_TMPDIR=/dev/shm ./db_bench -bloom_bits=10 -benchmarks=fillrandom -num=3000000 -partition_index_and_filters
```

and test performance with

```
TEST_TMPDIR=/dev/shm ./db_bench -readonly -use_existing_db -bloom_bits=10 -benchmarks=readrandom -num=3000000 -duration=30 -cache_index_and_filter_blocks -cache_size=250000 -threads=4
```

using DEBUG_LEVEL=0 and simultaneous before & after runs.

* Before ops/sec, avg over 100 runs: 121924
* After ops/sec, avg over 100 runs: 125385 (+2.8%)

### Collision probability

I have built a tool, `./cache_bench -stress_cache_key`, to broadly simulate host-wide cache activity over many months, by making some pessimistic simplifying assumptions:

* Every generated file has a cache entry for every byte offset in the file (a contiguous range of cache keys)
* All of every file is cached for its entire lifetime

We use a simple table with skewed address assignment and replacement on address collision to simulate files coming & going, with quite a variance (super-Poisson) in ages. Some output with `./cache_bench -stress_cache_key -sck_keep_bits=40`:

```
Total cache or DBs size: 32TiB  Writing 925.926 MiB/s or 76.2939TiB/day
Multiply by 9.22337e+18 to correct for simulation losses (but still assume whole file cached)
```

These come from default settings of 2.5M files per day of 32 MB each, and `-sck_keep_bits=40` means that to represent a single file, we are only keeping 40 bits of the 128-bit cache key. With a file size of 2^25 contiguous keys (pessimistic), our simulation is about 2^(128-40-25), or about 9 billion billion, times more prone to collision than reality.
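To make that correction factor concrete, here is a minimal standalone sketch of the arithmetic (the bit counts are the defaults quoted above; `est_days` is taken from the keep-40-bits result shown below):

```cpp
#include <cmath>
#include <cstdio>

int main() {
  // 128-bit cache keys, of which the simulation keeps only 40 bits,
  // and a 32 MB file spans 2^25 contiguous byte offsets (one key each).
  const int total_bits = 128;
  const int kept_bits = 40;    // -sck_keep_bits=40
  const int offset_bits = 25;  // 32 MB = 2^25 bytes
  const double correction = std::pow(2.0, total_bits - kept_bits - offset_bits);
  std::printf("correction factor: %g\n", correction);  // ~9.22337e+18 (= 2^63)

  // Scale the simulated mean time between collisions up to "reality":
  const double est_days = 10.5882;  // from the keep-40-bits run below
  std::printf("corrected: %g days\n", est_days * correction);  // ~9.76592e+19
  return 0;
}
```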
More default assumptions, relatively pessimistic:

* 100 DBs in the same process (doesn't matter much)
* Re-open DB in the same process (new session ID related to old session ID) on average every 100 files generated
* Restart process (all new session IDs unrelated to old) 24 times per day

After enough data, we get a result at the end:

```
(keep 40 bits) 17 collisions after 2 x 90 days, est 10.5882 days between (9.76592e+19 corrected)
```

If we believe the (pessimistic) simulation and the mathematical generalization, we would need to run a billion machines all for 97 billion days to expect a cache key collision. To help verify that our generalization ("corrected") is robust, we can make our simulation more precise with `-sck_keep_bits=41` and `42`, which takes more running time to get enough data:

```
(keep 41 bits) 16 collisions after 4 x 90 days, est 22.5 days between (1.03763e+20 corrected)
(keep 42 bits) 19 collisions after 10 x 90 days, est 47.3684 days between (1.09224e+20 corrected)
```

The generalized prediction still holds. With the `-sck_randomize` option, we can see that we are beating "random" cache keys (except offsets still non-randomized) by a modest amount (roughly 20x less collision-prone than random), which should make us reasonably comfortable even in "degenerate" cases:

```
197 collisions after 1 x 90 days, est 0.456853 days between (4.21372e+18 corrected)
```

I've run other tests to validate that other conditions behave as expected, never behaving "worse than random" unless we start chopping off structured data.

Reviewed By: zhichao-cao

Differential Revision: D33171746

Pulled By: pdillinger

fbshipit-source-id: f16a57e369ed37be5e7e33525ace848d0537c88f
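For background on the `EndianSwapValue` and `ReverseBits` operations mentioned in the summary, here is a minimal standalone sketch. The function bodies are generic loop-based versions for illustration only; the actual implementations in math.h may differ (e.g. by using compiler intrinsics):

```cpp
#include <cstdint>
#include <cstdio>

// Swap the byte order of a 64-bit value (illustrative shift/loop version).
uint64_t EndianSwap64(uint64_t v) {
  uint64_t r = 0;
  for (int i = 0; i < 8; ++i) {
    r = (r << 8) | (v & 0xff);  // peel off the low byte, append to result
    v >>= 8;
  }
  return r;
}

// Reverse the order of all 64 bits (illustrative loop version).
uint64_t ReverseBits64(uint64_t v) {
  uint64_t r = 0;
  for (int i = 0; i < 64; ++i) {
    r = (r << 1) | (v & 1);  // peel off the low bit, append to result
    v >>= 1;
  }
  return r;
}

int main() {
  std::printf("%016llx\n",
              (unsigned long long)EndianSwap64(0x0123456789abcdefULL));
  // prints: efcdab8967452301
  std::printf("%016llx\n", (unsigned long long)ReverseBits64(0x1ULL));
  // prints: 8000000000000000
  return 0;
}
```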
// Copyright (c) Facebook, Inc. and its affiliates. All Rights Reserved.
// This source code is licensed under both the GPLv2 (found in the
// COPYING file in the root directory) and Apache 2.0 License
// (found in the LICENSE.Apache file in the root directory).

// Encoding independent of machine byte order:
// * Fixed-length numbers are encoded with least-significant byte first
//   (little endian, native order on Intel and others)
//
// More functions in coding.h

#pragma once

#include <cstdint>
#include <cstring>

#include "port/port.h"  // for port::kLittleEndian

namespace ROCKSDB_NAMESPACE {

// Lower-level versions of Put... that write directly into a character buffer
// REQUIRES: dst has enough space for the value being written
// -- Implementation of the functions declared above
inline void EncodeFixed16(char* buf, uint16_t value) {
  if (port::kLittleEndian) {
    memcpy(buf, &value, sizeof(value));
  } else {
    // Big-endian host: write the bytes explicitly, least-significant first
    buf[0] = value & 0xff;
    buf[1] = (value >> 8) & 0xff;
  }
}

inline void EncodeFixed32(char* buf, uint32_t value) {
  if (port::kLittleEndian) {
    memcpy(buf, &value, sizeof(value));
  } else {
    buf[0] = value & 0xff;
    buf[1] = (value >> 8) & 0xff;
    buf[2] = (value >> 16) & 0xff;
    buf[3] = (value >> 24) & 0xff;
  }
}

inline void EncodeFixed64(char* buf, uint64_t value) {
  if (port::kLittleEndian) {
    memcpy(buf, &value, sizeof(value));
  } else {
    buf[0] = value & 0xff;
    buf[1] = (value >> 8) & 0xff;
    buf[2] = (value >> 16) & 0xff;
    buf[3] = (value >> 24) & 0xff;
    buf[4] = (value >> 32) & 0xff;
    buf[5] = (value >> 40) & 0xff;
    buf[6] = (value >> 48) & 0xff;
    buf[7] = (value >> 56) & 0xff;
  }
}

// Lower-level versions of Get... that read directly from a character buffer
// without any bounds checking.

inline uint16_t DecodeFixed16(const char* ptr) {
  if (port::kLittleEndian) {
    // Load the raw bytes
    uint16_t result;
    memcpy(&result, ptr, sizeof(result));  // gcc optimizes this to a plain load
    return result;
  } else {
    return ((static_cast<uint16_t>(static_cast<unsigned char>(ptr[0]))) |
            (static_cast<uint16_t>(static_cast<unsigned char>(ptr[1])) << 8));
  }
}

inline uint32_t DecodeFixed32(const char* ptr) {
  if (port::kLittleEndian) {
    // Load the raw bytes
    uint32_t result;
    memcpy(&result, ptr, sizeof(result));  // gcc optimizes this to a plain load
    return result;
  } else {
    return ((static_cast<uint32_t>(static_cast<unsigned char>(ptr[0]))) |
            (static_cast<uint32_t>(static_cast<unsigned char>(ptr[1])) << 8) |
            (static_cast<uint32_t>(static_cast<unsigned char>(ptr[2])) << 16) |
            (static_cast<uint32_t>(static_cast<unsigned char>(ptr[3])) << 24));
  }
}

inline uint64_t DecodeFixed64(const char* ptr) {
  if (port::kLittleEndian) {
    // Load the raw bytes
    uint64_t result;
    memcpy(&result, ptr, sizeof(result));  // gcc optimizes this to a plain load
    return result;
  } else {
    uint64_t lo = DecodeFixed32(ptr);
    uint64_t hi = DecodeFixed32(ptr + 4);
    return (hi << 32) | lo;
  }
}

}  // namespace ROCKSDB_NAMESPACE
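A quick round-trip sketch of how the encoders above are used. The `util/coding_lean.h` include path and the standalone test harness are assumptions for illustration (the page does not name this file):

```cpp
#include <cassert>
#include <cstdint>

#include "util/coding_lean.h"  // assumed path to the header above

int main() {
  char buf[8];

  // 32-bit round trip; the stored byte order is little-endian on any host,
  // because the big-endian branch also writes the least-significant byte first.
  ROCKSDB_NAMESPACE::EncodeFixed32(buf, 0xdeadbeefU);
  assert(ROCKSDB_NAMESPACE::DecodeFixed32(buf) == 0xdeadbeefU);
  assert(static_cast<unsigned char>(buf[0]) == 0xef);  // LSB stored first

  // 64-bit round trip.
  ROCKSDB_NAMESPACE::EncodeFixed64(buf, 0x0123456789abcdefULL);
  assert(ROCKSDB_NAMESPACE::DecodeFixed64(buf) == 0x0123456789abcdefULL);
  return 0;
}
```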