rocksdb/table/block_based/block_cache.h
Peter Dillinger b34cef57b7 Support pro-actively erasing obsolete block cache entries (#12694)
Summary:
Currently, when files become obsolete, the block cache entries associated with them just age out naturally. With pure LRU, this is not too bad, as once you "use" enough cache entries to (re-)fill the cache, you are guaranteed to have purged the obsolete entries. However, HyperClockCache is a counting clock cache with a somewhat longer memory, so it could be more negatively impacted by previously-hot cache entries becoming obsolete and taking longer to age out than newer single-hit entries.

Part of the reason we still have this natural aging-out is that there's almost no connection between block cache entries and the file they are associated with. Everything is hashed into the same pool(s) of entries with nothing like a secondary index based on file. Keeping track of such an index could be expensive.

This change adds a new, mutable CF option `uncache_aggressiveness` for erasing obsolete block cache entries. The process can be speculative, lossy, or unproductive because not all potential block cache entries associated with files will be resident in memory, and attempting to remove them all could be wasted CPU time. Rather than a simple on/off switch, `uncache_aggressiveness` basically tells RocksDB how much CPU you're willing to burn trying to purge obsolete block cache entries. When such efforts are not sufficiently productive for a file, we stop and move on.

The option is in ColumnFamilyOptions so that it is dynamically changeable for already-open files, and customizable by CF.
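
For illustration, a minimal sketch of setting the option (assuming it is exposed as a `ColumnFamilyOptions` member of the same name; the string form with `DB::SetOptions` is the usual way to change mutable CF options at runtime; the function name below is hypothetical):

```
#include <cassert>

#include "rocksdb/db.h"
#include "rocksdb/options.h"

// Sketch: enable obsolete-block-cache erasure and adjust it on an open DB.
// In the benchmarks below, 0 is the "off" baseline and 300 is the tested
// non-zero setting.
void ConfigureUncacheAggressiveness(rocksdb::DB* db,
                                    rocksdb::ColumnFamilyHandle* cf) {
  // At open time (assuming the ColumnFamilyOptions member exists):
  //   rocksdb::ColumnFamilyOptions cf_opts;
  //   cf_opts.uncache_aggressiveness = 300;
  // Dynamically, since this is a mutable CF option:
  rocksdb::Status s = db->SetOptions(cf, {{"uncache_aggressiveness", "300"}});
  assert(s.ok());
}
```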

Note that this block cache removal happens as part of the process of purging obsolete files, which is often in a background thread (depending on `background_purge_on_iterator_cleanup` and `avoid_unnecessary_blocking_io` options) rather than along CPU critical paths.

Notable auxiliary code details:
* Possibly fixing some issues with trivial moves with `only_delete_metadata`: an unnecessary TableCache::Evict in that case, and handling that was missing from the ObsoleteFileInfo move operator. (Not able to reproduce a current failure.)
* Remove suspicious TableCache::Erase() from VersionSet::AddObsoleteBlobFile() (TODO follow-up item)

Marked EXPERIMENTAL until more thorough validation is complete.

Direct stats of this functionality are omitted because they could be misleading. Block cache hit rate is a better indicator of benefit, and CPU profiling a better indicator of cost.

Pull Request resolved: https://github.com/facebook/rocksdb/pull/12694

Test Plan:
* Unit tests added, including refactoring an existing test to make better use of parameterized tests.
* Added to crash test.
* Performance, sample command:
```
for I in `seq 1 10`; do for UA in 300; do for CT in lru_cache fixed_hyper_clock_cache auto_hyper_clock_cache; do rm -rf /dev/shm/test3; TEST_TMPDIR=/dev/shm/test3 /usr/bin/time ./db_bench -benchmarks=readwhilewriting -num=13000000 -read_random_exp_range=6 -write_buffer_size=10000000 -bloom_bits=10 -cache_type=$CT -cache_size=390000000 -cache_index_and_filter_blocks=1 -disable_wal=1 -duration=60 -statistics -uncache_aggressiveness=$UA 2>&1 | grep -E 'micros/op|rocksdb.block.cache.data.(hit|miss)|rocksdb.number.keys.(read|written)|maxresident' | awk '/rocksdb.block.cache.data.miss/ { miss = $4 } /rocksdb.block.cache.data.hit/ { hit = $4 } { print } END { print "hit rate = " ((hit * 1.0) / (miss + hit)) }' | tee -a results-$CT-$UA; done; done; done
```

Averaging 10 runs per case, block cache data block hit rates:

```
lru_cache
UA=0   -> hit rate = 0.327, ops/s = 87668, user CPU sec = 139.0
UA=300 -> hit rate = 0.336, ops/s = 87960, user CPU sec = 139.0

fixed_hyper_clock_cache
UA=0   -> hit rate = 0.336, ops/s = 100069, user CPU sec = 139.9
UA=300 -> hit rate = 0.343, ops/s = 100104, user CPU sec = 140.2

auto_hyper_clock_cache
UA=0   -> hit rate = 0.336, ops/s = 97580, user CPU sec = 140.5
UA=300 -> hit rate = 0.345, ops/s = 97972, user CPU sec = 139.8
```

Conclusion: up to roughly 1 percentage point of improved block cache hit rate, likely leading to overall improved efficiency (because the foreground CPU cost of cache misses likely outweighs the background CPU cost of erasure, let alone I/O savings).

Reviewed By: ajkr

Differential Revision: D57932442

Pulled By: pdillinger

fbshipit-source-id: 84a243ca5f965f731f346a4853009780a904af6c
2024-06-07 08:57:11 -07:00

// Copyright (c) Meta Platforms, Inc. and affiliates.
// This source code is licensed under both the GPLv2 (found in the
// COPYING file in the root directory) and Apache 2.0 License
// (found in the LICENSE.Apache file in the root directory).

// Code supporting block cache (Cache) access for block-based table, based on
// the convenient APIs in typed_cache.h

#pragma once

#include <type_traits>

#include "cache/typed_cache.h"
#include "port/lang.h"
#include "table/block_based/block.h"
#include "table/block_based/block_type.h"
#include "table/block_based/parsed_full_filter_block.h"
#include "table/format.h"

namespace ROCKSDB_NAMESPACE {

// Metaprogramming wrappers for Block, to give each type a single role when
// used with FullTypedCacheInterface.
// (NOTE: previous attempts to create actual derived classes of Block with
// virtual calls resulted in performance regression)
class Block_kData : public Block {
 public:
  using Block::Block;

  static constexpr CacheEntryRole kCacheEntryRole = CacheEntryRole::kDataBlock;
  static constexpr BlockType kBlockType = BlockType::kData;
};

class Block_kIndex : public Block {
 public:
  using Block::Block;

  static constexpr CacheEntryRole kCacheEntryRole = CacheEntryRole::kIndexBlock;
  static constexpr BlockType kBlockType = BlockType::kIndex;
};

class Block_kFilterPartitionIndex : public Block {
 public:
  using Block::Block;

  static constexpr CacheEntryRole kCacheEntryRole =
      CacheEntryRole::kFilterMetaBlock;
  static constexpr BlockType kBlockType = BlockType::kFilterPartitionIndex;
};

class Block_kRangeDeletion : public Block {
 public:
  using Block::Block;

  static constexpr CacheEntryRole kCacheEntryRole = CacheEntryRole::kOtherBlock;
  static constexpr BlockType kBlockType = BlockType::kRangeDeletion;
};

// Useful for creating the Block even though meta index blocks are not
// yet stored in block cache
class Block_kMetaIndex : public Block {
 public:
  using Block::Block;

  static constexpr CacheEntryRole kCacheEntryRole = CacheEntryRole::kOtherBlock;
  static constexpr BlockType kBlockType = BlockType::kMetaIndex;
};
struct BlockCreateContext : public Cache::CreateContext {
  BlockCreateContext() {}
  BlockCreateContext(const BlockBasedTableOptions* _table_options,
                     const ImmutableOptions* _ioptions,
                     Statistics* _statistics, bool _using_zstd,
                     uint8_t _protection_bytes_per_key,
                     const Comparator* _raw_ucmp,
                     bool _index_value_is_full = false,
                     bool _index_has_first_key = false)
      : table_options(_table_options),
        ioptions(_ioptions),
        statistics(_statistics),
        raw_ucmp(_raw_ucmp),
        using_zstd(_using_zstd),
        protection_bytes_per_key(_protection_bytes_per_key),
        index_value_is_full(_index_value_is_full),
        index_has_first_key(_index_has_first_key) {}

  const BlockBasedTableOptions* table_options = nullptr;
  const ImmutableOptions* ioptions = nullptr;
  Statistics* statistics = nullptr;
  const Comparator* raw_ucmp = nullptr;
  const UncompressionDict* dict = nullptr;
  uint32_t format_version;
  bool using_zstd = false;
  uint8_t protection_bytes_per_key = 0;
  bool index_value_is_full;
  bool index_has_first_key;

  // For TypedCacheInterface
  template <typename TBlocklike>
  inline void Create(std::unique_ptr<TBlocklike>* parsed_out,
                     size_t* charge_out, const Slice& data,
                     CompressionType type, MemoryAllocator* alloc) {
    BlockContents uncompressed_block_contents;
    if (type != CompressionType::kNoCompression) {
      assert(dict != nullptr);
      UncompressionContext context(type);
      UncompressionInfo info(context, *dict, type);
      Status s = UncompressBlockData(
          info, data.data(), data.size(), &uncompressed_block_contents,
          table_options->format_version, *ioptions, alloc);
      if (!s.ok()) {
        parsed_out->reset();
        return;
      }
    } else {
      uncompressed_block_contents =
          BlockContents(AllocateAndCopyBlock(data, alloc), data.size());
    }
    Create(parsed_out, std::move(uncompressed_block_contents));
    *charge_out = parsed_out->get()->ApproximateMemoryUsage();
  }
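
  // Note (added commentary, not in the original header): the templated
  // Create() above performs any needed decompression and then forwards to one
  // of the type-specific overloads below, which construct the corresponding
  // TBlocklike object; the charge reported to the cache is that object's
  // ApproximateMemoryUsage().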
  void Create(std::unique_ptr<Block_kData>* parsed_out, BlockContents&& block);
  void Create(std::unique_ptr<Block_kIndex>* parsed_out,
              BlockContents&& block);
  void Create(std::unique_ptr<Block_kFilterPartitionIndex>* parsed_out,
              BlockContents&& block);
  void Create(std::unique_ptr<Block_kRangeDeletion>* parsed_out,
              BlockContents&& block);
  void Create(std::unique_ptr<Block_kMetaIndex>* parsed_out,
              BlockContents&& block);
  void Create(std::unique_ptr<ParsedFullFilterBlock>* parsed_out,
              BlockContents&& block);
  void Create(std::unique_ptr<UncompressionDict>* parsed_out,
              BlockContents&& block);
};

// Convenient cache interface to use for block_cache, with support for
// SecondaryCache.
template <typename TBlocklike>
using BlockCacheInterface =
    FullTypedCacheInterface<TBlocklike, BlockCreateContext>;

// Shortcut name for cache handles under BlockCacheInterface
template <typename TBlocklike>
using BlockCacheTypedHandle =
    typename BlockCacheInterface<TBlocklike>::TypedHandle;

// Selects the right helper based on BlockType and CacheTier
const Cache::CacheItemHelper* GetCacheItemHelper(
    BlockType block_type,
    CacheTier lowest_used_cache_tier = CacheTier::kNonVolatileBlockTier);

// For SFINAE check that a type is "blocklike" with a kCacheEntryRole member.
// Can get difficult compiler/linker errors without a good check like this.
template <typename TUse, typename TBlocklike>
using WithBlocklikeCheck = std::enable_if_t<
    TBlocklike::kCacheEntryRole == CacheEntryRole::kMisc || true, TUse>;
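
// Illustrative use (added commentary; the declaration below is a sketch
// loosely modeled on callers in the block-based table reader, not part of
// this header):
//
//   template <typename TBlocklike>
//   WithBlocklikeCheck<Status, TBlocklike> MaybeReadBlockAndLoadToCache(...);
//
// The alias resolves to `Status` only when TBlocklike::kCacheEntryRole
// exists, so passing a non-blocklike type becomes a clear substitution
// failure instead of a confusing compiler/linker error.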

// Helper for the uncache_aggressiveness option
class UncacheAggressivenessAdvisor {
 public:
  UncacheAggressivenessAdvisor(uint32_t uncache_aggressiveness) {
    assert(uncache_aggressiveness > 0);
    allowance_ = std::min(uncache_aggressiveness, uint32_t{3});
    threshold_ = std::pow(0.99, uncache_aggressiveness - 1);
  }
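
  // Illustrative numbers (added commentary, derived from the formulas above):
  // uncache_aggressiveness=1 gives threshold_ = 1.0 and allowance_ = 1, so
  // the advisor stops after the first unproductive erase attempt, while
  // uncache_aggressiveness=300 gives threshold_ = 0.99^299 ~= 0.05, i.e.
  // keep erasing while roughly 1 in 20 attempts is still useful (after the
  // first few unproductive attempts covered by the allowance).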

  void Report(bool erased) { ++(erased ? useful_ : not_useful_); }

  bool ShouldContinue() {
    if (not_useful_ < allowance_) {
      return true;
    } else {
      // See UncacheAggressivenessAdvisor unit test
      return (useful_ + 1.0) / (useful_ + not_useful_ - allowance_ + 1.5) >=
             threshold_;
    }
  }

 private:
  // Baseline minimum number of "not useful" to consider stopping, to allow
  // sufficient evidence for checking the threshold. Actual minimum will be
  // higher as threshold gets well below 1.0.
  int allowance_;
  // After allowance, stop if useful ratio is below this threshold
  double threshold_;
  // Counts
  int useful_ = 0;
  int not_useful_ = 0;
};
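
// Illustrative usage sketch (added commentary; EraseBlockFromCache and
// candidate_handles are hypothetical, standing in for however a caller
// enumerates and erases an obsolete file's cache entries):
//
//   UncacheAggressivenessAdvisor advisor(uncache_aggressiveness);
//   for (const BlockHandle& handle : candidate_handles) {
//     if (!advisor.ShouldContinue()) {
//       break;  // further erase attempts unlikely to be productive enough
//     }
//     // Report whether a resident entry was actually found and erased.
//     advisor.Report(EraseBlockFromCache(handle));
//   }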
} // namespace ROCKSDB_NAMESPACE