Mirror of https://github.com/facebook/rocksdb.git, synced 2024-11-25 22:44:05 +00:00
Commit b34cef57b7
Summary:
Currently, when files become obsolete, the block cache entries associated with them just age out naturally. With pure LRU, this is not too bad, as once you "use" enough cache entries to (re-)fill the cache, you are guaranteed to have purged the obsolete entries. However, HyperClockCache is a counting clock cache with a somewhat longer memory, so it could be more negatively impacted by previously-hot cache entries becoming obsolete and taking longer to age out than newer single-hit entries.

Part of the reason we still have this natural aging-out is that there is almost no connection between block cache entries and the file they are associated with. Everything is hashed into the same pool(s) of entries with nothing like a secondary index based on file. Keeping track of such an index could be expensive.

This change adds a new, mutable CF option `uncache_aggressiveness` for erasing obsolete block cache entries. The process can be speculative, lossy, or unproductive because not all potential block cache entries associated with files will be resident in memory, and attempting to remove them all could be wasted CPU time. Rather than a simple on/off switch, `uncache_aggressiveness` basically tells RocksDB how much CPU you're willing to burn trying to purge obsolete block cache entries. When such efforts are not sufficiently productive for a file, we stop and move on.

The option is in ColumnFamilyOptions so that it is dynamically changeable for already-open files, and customizable by CF.

Note that this block cache removal happens as part of the process of purging obsolete files, which is often in a background thread (depending on the `background_purge_on_iterator_cleanup` and `avoid_unnecessary_blocking_io` options) rather than along CPU critical paths.

Notable auxiliary code details:
* Possibly fixing some issues with trivial moves with `only_delete_metadata`: unnecessary TableCache::Evict in that case and missing from the ObsoleteFileInfo move operator. (Not able to reproduce a current failure.)
* Remove suspicious TableCache::Erase() from VersionSet::AddObsoleteBlobFile() (TODO follow-up item)

Marked EXPERIMENTAL until more thorough validation is complete.

Direct stats of this functionality are omitted because they could be misleading. Block cache hit rate is a better indicator of benefit, and CPU profiling a better indicator of cost.

Pull Request resolved: https://github.com/facebook/rocksdb/pull/12694

Test Plan:
* Unit tests added, including refactoring an existing test to make better use of parameterized tests.
* Added to crash test.
* Performance, sample command:
```
for I in `seq 1 10`; do for UA in 300; do for CT in lru_cache fixed_hyper_clock_cache auto_hyper_clock_cache; do rm -rf /dev/shm/test3; TEST_TMPDIR=/dev/shm/test3 /usr/bin/time ./db_bench -benchmarks=readwhilewriting -num=13000000 -read_random_exp_range=6 -write_buffer_size=10000000 -bloom_bits=10 -cache_type=$CT -cache_size=390000000 -cache_index_and_filter_blocks=1 -disable_wal=1 -duration=60 -statistics -uncache_aggressiveness=$UA 2>&1 | grep -E 'micros/op|rocksdb.block.cache.data.(hit|miss)|rocksdb.number.keys.(read|written)|maxresident' | awk '/rocksdb.block.cache.data.miss/ { miss = $4 } /rocksdb.block.cache.data.hit/ { hit = $4 } { print } END { print "hit rate = " ((hit * 1.0) / (miss + hit)) }' | tee -a results-$CT-$UA; done; done; done
```

Averaging 10 runs each case, block cache data block hit rates:
```
lru_cache
UA=0   -> hit rate = 0.327, ops/s = 87668,  user CPU sec = 139.0
UA=300 -> hit rate = 0.336, ops/s = 87960,  user CPU sec = 139.0

fixed_hyper_clock_cache
UA=0   -> hit rate = 0.336, ops/s = 100069, user CPU sec = 139.9
UA=300 -> hit rate = 0.343, ops/s = 100104, user CPU sec = 140.2

auto_hyper_clock_cache
UA=0   -> hit rate = 0.336, ops/s = 97580,  user CPU sec = 140.5
UA=300 -> hit rate = 0.345, ops/s = 97972,  user CPU sec = 139.8
```

Conclusion: up to roughly 1 percentage point of improved block cache hit rate, likely leading to overall improved efficiency (because the foreground CPU cost of cache misses likely outweighs the background CPU cost of erasure, let alone I/O savings).

Reviewed By: ajkr

Differential Revision: D57932442

Pulled By: pdillinger

fbshipit-source-id: 84a243ca5f965f731f346a4853009780a904af6c
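For context, a minimal sketch of how the new option might be used from application code. The option name `uncache_aggressiveness` and its placement in ColumnFamilyOptions come from the summary above; the cache setup, numeric values, and DB path are illustrative assumptions, and the exact field type and default should be checked against the released options headers.
```
#include <cassert>

#include "rocksdb/cache.h"
#include "rocksdb/db.h"
#include "rocksdb/options.h"
#include "rocksdb/table.h"

int main() {
  rocksdb::Options options;
  options.create_if_missing = true;

  // Block-based tables sharing one block cache; obsolete-file erasure targets
  // entries in this cache.
  rocksdb::BlockBasedTableOptions table_options;
  table_options.block_cache = rocksdb::NewLRUCache(390'000'000);
  table_options.cache_index_and_filter_blocks = true;
  options.table_factory.reset(
      rocksdb::NewBlockBasedTableFactory(table_options));

  // 0 disables the feature; larger values let RocksDB spend more CPU trying
  // to erase obsolete entries (field name per the summary above; exact type
  // and default are assumptions here).
  options.uncache_aggressiveness = 300;

  rocksdb::DB* db = nullptr;
  rocksdb::Status s = rocksdb::DB::Open(options, "/tmp/uncache_demo", &db);
  assert(s.ok());

  // The option is described as mutable per CF, so it should also be
  // adjustable on a live DB through the string-based SetOptions API.
  s = db->SetOptions(db->DefaultColumnFamily(),
                     {{"uncache_aggressiveness", "100"}});
  assert(s.ok());

  delete db;
  return 0;
}
```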
250 lines
7.5 KiB
C++
// Copyright (c) 2011-present, Facebook, Inc. All rights reserved.
// This source code is licensed under both the GPLv2 (found in the
// COPYING file in the root directory) and Apache 2.0 License
// (found in the LICENSE.Apache file in the root directory).
//
// Copyright (c) 2012 The LevelDB Authors. All rights reserved.
// Use of this source code is governed by a BSD-style license that can be
// found in the LICENSE file. See the AUTHORS file for names of contributors.

#pragma once

#include <cassert>
#include <type_traits>

#include "port/likely.h"
#include "rocksdb/advanced_cache.h"
#include "rocksdb/cleanable.h"

namespace ROCKSDB_NAMESPACE {

// CachableEntry is a handle to an object that may or may not be in the block
// cache. It is used in a variety of ways:
//
// 1) It may refer to an object in the block cache. In this case, cache_ and
// cache_handle_ are not nullptr, and the cache handle has to be released when
// the CachableEntry is destroyed (the lifecycle of the cached object, on the
// other hand, is managed by the cache itself).
// 2) It may uniquely own the (non-cached) object it refers to (examples include
// a block read directly from file, or uncompressed blocks when there is a
// compressed block cache but no uncompressed block cache). In such cases, the
// object has to be destroyed when the CachableEntry is destroyed.
// 3) It may point to an object (cached or not) without owning it. In this case,
// no action is needed when the CachableEntry is destroyed.
// 4) Sometimes, management of a cached or owned object (see #1 and #2 above)
// is transferred to some other object. This is used for instance with iterators
// (where cleanup is performed using a chain of cleanup functions,
// see Cleanable).
//
// Because of #1 and #2 above, copying a CachableEntry is not safe (and thus not
// allowed); hence, this is a move-only type, where a move transfers the
// management responsibilities, and leaves the source object in an empty state.
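//
// Illustrative usage sketch (not part of the original header): `block`,
// `block_cache`, and `cache_handle` are assumed to come from a block cache
// lookup or a file read.
//
//   CachableEntry<Block> entry;
//   if (cache_handle != nullptr) {
//     // Case 1: the object lives in the block cache; the handle is released
//     // when `entry` is destroyed (or reset).
//     entry.SetCachedValue(block, block_cache, cache_handle);
//   } else {
//     // Case 2: the entry uniquely owns the freshly read block.
//     entry.SetOwnedValue(std::unique_ptr<Block>(block));
//   }
//   // Case 4: hand cleanup responsibility off to an iterator (a Cleanable).
//   entry.TransferTo(&iter);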
template <class T>
class CachableEntry {
 public:
  CachableEntry() = default;

  CachableEntry(T* value, Cache* cache, Cache::Handle* cache_handle,
                bool own_value)
      : value_(value),
        cache_(cache),
        cache_handle_(cache_handle),
        own_value_(own_value) {
    assert(value_ != nullptr ||
           (cache_ == nullptr && cache_handle_ == nullptr && !own_value_));
    assert(!!cache_ == !!cache_handle_);
    assert(!cache_handle_ || !own_value_);
  }

  CachableEntry(const CachableEntry&) = delete;
  CachableEntry& operator=(const CachableEntry&) = delete;

  CachableEntry(CachableEntry&& rhs) noexcept
      : value_(rhs.value_),
        cache_(rhs.cache_),
        cache_handle_(rhs.cache_handle_),
        own_value_(rhs.own_value_) {
    assert(value_ != nullptr ||
           (cache_ == nullptr && cache_handle_ == nullptr && !own_value_));
    assert(!!cache_ == !!cache_handle_);
    assert(!cache_handle_ || !own_value_);

    rhs.ResetFields();
  }

  CachableEntry& operator=(CachableEntry&& rhs) noexcept {
    if (UNLIKELY(this == &rhs)) {
      return *this;
    }

    ReleaseResource(/*erase_if_last_ref=*/false);

    value_ = rhs.value_;
    cache_ = rhs.cache_;
    cache_handle_ = rhs.cache_handle_;
    own_value_ = rhs.own_value_;

    assert(value_ != nullptr ||
           (cache_ == nullptr && cache_handle_ == nullptr && !own_value_));
    assert(!!cache_ == !!cache_handle_);
    assert(!cache_handle_ || !own_value_);

    rhs.ResetFields();

    return *this;
  }

  ~CachableEntry() { ReleaseResource(/*erase_if_last_ref=*/false); }

  bool IsEmpty() const {
    return value_ == nullptr && cache_ == nullptr && cache_handle_ == nullptr &&
           !own_value_;
  }

  bool IsCached() const {
    assert(!!cache_ == !!cache_handle_);

    return cache_handle_ != nullptr;
  }

  T* GetValue() const { return value_; }
  Cache* GetCache() const { return cache_; }
  Cache::Handle* GetCacheHandle() const { return cache_handle_; }
  bool GetOwnValue() const { return own_value_; }

  void Reset() {
    ReleaseResource(/*erase_if_last_ref=*/false);
    ResetFields();
  }

  void ResetEraseIfLastRef() {
    ReleaseResource(/*erase_if_last_ref=*/true);
    ResetFields();
  }

  void TransferTo(Cleanable* cleanable) {
    if (cleanable) {
      if (cache_handle_ != nullptr) {
        assert(cache_ != nullptr);
        cleanable->RegisterCleanup(&ReleaseCacheHandle, cache_, cache_handle_);
      } else if (own_value_) {
        cleanable->RegisterCleanup(&DeleteValue, value_, nullptr);
      }
    }

    ResetFields();
  }

  void SetOwnedValue(std::unique_ptr<T>&& value) {
    assert(value.get() != nullptr);

    if (UNLIKELY(value_ == value.get() && own_value_)) {
      assert(cache_ == nullptr && cache_handle_ == nullptr);
      return;
    }

    Reset();

    value_ = value.release();
    own_value_ = true;
  }

  void SetUnownedValue(T* value) {
    assert(value != nullptr);

    if (UNLIKELY(value_ == value && cache_ == nullptr &&
                 cache_handle_ == nullptr && !own_value_)) {
      return;
    }

    Reset();

    value_ = value;
    assert(!own_value_);
  }

  void SetCachedValue(T* value, Cache* cache, Cache::Handle* cache_handle) {
    assert(cache != nullptr);
    assert(cache_handle != nullptr);

    if (UNLIKELY(value_ == value && cache_ == cache &&
                 cache_handle_ == cache_handle && !own_value_)) {
      return;
    }

    Reset();

    value_ = value;
    cache_ = cache;
    cache_handle_ = cache_handle;
    assert(!own_value_);
  }

  // Since this class is essentially an elaborate pointer, it's sometimes
  // useful to be able to upcast or downcast the base type of the pointer,
  // especially when interacting with typed_cache.h.
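  // For example (illustrative only, not from the original header): a
  // CachableEntry<Block> produced by a cache lookup might be viewed as a
  // CachableEntry of a layout-compatible wrapper class via
  // entry.As<BlockWrapper>(), where BlockWrapper stands in for a tag subclass
  // of Block of the kind used with typed_cache.h.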
  template <class TWrapper>
  std::enable_if_t<sizeof(TWrapper) == sizeof(T) &&
                       (std::is_base_of_v<TWrapper, T> ||
                        std::is_base_of_v<T, TWrapper>),
                   /* Actual return type */
                   CachableEntry<TWrapper>&>
  As() {
    CachableEntry<TWrapper>* result_ptr =
        reinterpret_cast<CachableEntry<TWrapper>*>(this);
    // Ensure no weirdness in template instantiations
    assert(static_cast<void*>(&this->value_) ==
           static_cast<void*>(&result_ptr->value_));
    assert(&this->cache_handle_ == &result_ptr->cache_handle_);
    // This function depends on no arithmetic involved in the pointer
    // conversion, which is not statically checkable.
    assert(static_cast<void*>(this->value_) ==
           static_cast<void*>(result_ptr->value_));
    return *result_ptr;
  }

 private:
  void ReleaseResource(bool erase_if_last_ref) noexcept {
    if (LIKELY(cache_handle_ != nullptr)) {
      assert(cache_ != nullptr);
      cache_->Release(cache_handle_, erase_if_last_ref);
    } else if (own_value_) {
      delete value_;
    }
  }

  void ResetFields() noexcept {
    value_ = nullptr;
    cache_ = nullptr;
    cache_handle_ = nullptr;
    own_value_ = false;
  }

  static void ReleaseCacheHandle(void* arg1, void* arg2) {
    Cache* const cache = static_cast<Cache*>(arg1);
    assert(cache);

    Cache::Handle* const cache_handle = static_cast<Cache::Handle*>(arg2);
    assert(cache_handle);

    cache->Release(cache_handle);
  }

  static void DeleteValue(void* arg1, void* /* arg2 */) {
    delete static_cast<T*>(arg1);
  }

 private:
  // Have to be your own best friend
  template <class TT>
  friend class CachableEntry;

  T* value_ = nullptr;
  Cache* cache_ = nullptr;
  Cache::Handle* cache_handle_ = nullptr;
  bool own_value_ = false;
};

}  // namespace ROCKSDB_NAMESPACE