mirror of https://github.com/facebook/rocksdb.git
Major Cache refactoring, CPU efficiency improvement (#10975)
Summary:

This is several refactorings bundled into one, to avoid having to incrementally re-modify uses of Cache several times. Overall, these are breaking changes to the Cache class, which becomes more of a low-level interface for implementing caches, especially block caches. New internal APIs make using Cache cleaner than before and better insulated from block cache evolution. Hopefully, this is the last really big block cache refactoring, because it rather effectively decouples the implementations from the uses.

This change also removes the EXPERIMENTAL designation from the SecondaryCache support in Cache. It seems reasonably mature at this point, though still subject to change/evolution (as warned in the API docs for Cache).

The high-level motivation for this refactoring is to minimize code duplication and compounding complexity in adding SecondaryCache support to HyperClockCache (in a later PR). Other benefits are listed below.

* static_cast lines of code: +29 -35 (net removed 6)
* reinterpret_cast lines of code: +6 -32 (net removed 26)

## cache.h and secondary_cache.h

* Always use CacheItemHelper with entries, instead of just a Deleter. There are several motivations / justifications:
  * Simpler for implementations to deal with just one Insert and one Lookup.
  * Simpler and more efficient implementation, because we don't have to track which entries are using helpers and which are using deleters.
  * Gets rid of the hack of classifying cache entries by their deleter. Instead, the CacheItemHelper includes a CacheEntryRole. This simplifies a lot of code (cache_entry_roles.h almost eliminated). Fixes https://github.com/facebook/rocksdb/issues/9428.
  * Makes it trivial to adjust SecondaryCache behavior based on the kind of block (e.g. don't re-compress filter blocks).
  * It is arguably less convenient for many direct users of Cache, but direct users of Cache are now rare with the introduction of typed_cache.h (below).
  * I considered and rejected an alternative approach in which we reduce customizability by assuming each secondary-cache-compatible value starts with a Slice referencing the uncompressed block contents (already true or mostly true), but we apparently intend to stack secondary caches. Saving an entry from a compressed secondary cache to a lower tier requires the custom handling offered by SaveToCallback, etc.
* Make CreateCallback part of the helper and introduce CreateContext to work with it (an alternative to https://github.com/facebook/rocksdb/issues/10562). This cleans up the interface while still allowing context to be provided for loading/parsing values into the primary cache. This model works for async lookup in the BlockBasedTable reader (the reader owns a CreateContext) under the assumption that it always waits on secondary cache operations to finish. (Otherwise, the CreateContext could be destroyed while an async operation depending on it continues.) This likely contributes most to the observed performance improvement, because it saves an std::function backed by a heap allocation.
* Use char* for serialized data, e.g. in SaveToCallback, where void* was confusingly used. (We use `char*` for serialized byte data all over RocksDB, with many advantages over `void*`. `memcpy` etc. are legacy APIs that should not be mimicked.)
* Add a type alias Cache::ObjectPtr = void*, so that we can better indicate the intent of the void* when it is to be the object associated with a Cache entry. Related: started (but did not complete) a refactoring to move away from the "value" of a cache entry toward "object" or "obj". (It is confusing to call Cache a key-value store (like DB) when it is really storing arbitrary in-memory objects, not byte strings.)
* Remove the unnecessary key param from DeleterFn. This is good for efficiency in HyperClockCache, which does not directly store the cache key in memory. (An alternative to https://github.com/facebook/rocksdb/issues/10774.)
* Add the allocator to the Cache DeleterFn. This is a kind of future-proofing change, in case we get more serious about using the Cache allocator for memory tracked by the Cache. Right now, only the uncompressed block contents are allocated using the allocator, and a pointer to that allocator is saved as part of the cached object so that the deleter can use it. (See CacheAllocationPtr.) If in the future we are able to "flatten out" our Cache objects some more, it would be good not to have to track the allocator as part of each object.
* Removes the legacy `ApplyToAllCacheEntries` and changes the `ApplyToAllEntries` signature for the Deleter -> CacheItemHelper change.
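To make the shape of the new interface concrete, here is a minimal sketch of a helper and its callbacks, following the callback signatures used by the cache_bench changes in this diff. The object type, names, and role choice are illustrative only, not part of the commit:

```cpp
#include <cstring>
#include <string>

#include "rocksdb/cache.h"

using namespace ROCKSDB_NAMESPACE;

// Illustrative cached object type (an assumption for this sketch).
struct MyObject {
  std::string payload;
};

// The deleter no longer takes the key; it takes the allocator instead.
void MyDeleteFn(Cache::ObjectPtr obj, MemoryAllocator* /*allocator*/) {
  delete static_cast<MyObject*>(obj);
}

// Size of the serialized form, for saving to a SecondaryCache.
size_t MySizeFn(Cache::ObjectPtr obj) {
  return static_cast<MyObject*>(obj)->payload.size();
}

// Serialization now writes to char*, not void*.
Status MySaveToFn(Cache::ObjectPtr from_obj, size_t from_offset,
                  size_t length, char* out) {
  const auto* obj = static_cast<const MyObject*>(from_obj);
  memcpy(out, obj->payload.data() + from_offset, length);
  return Status::OK();
}

// The create callback is now a plain function in the helper, taking an
// optional CreateContext instead of capturing state in an std::function.
Status MyCreateFn(const Slice& data, Cache::CreateContext* /*context*/,
                  MemoryAllocator* /*allocator*/, Cache::ObjectPtr* out_obj,
                  size_t* out_charge) {
  *out_obj = new MyObject{data.ToString()};
  *out_charge = data.size();
  return Status::OK();
}

// One statically-constructed helper bundles the role with all callbacks
// and is passed to both Insert and Lookup.
const Cache::CacheItemHelper kMyHelper(CacheEntryRole::kMisc, &MyDeleteFn,
                                       &MySizeFn, &MySaveToFn, &MyCreateFn);
```

With that in place, `cache->Insert(key, obj, &kMyHelper, charge)` and `cache->Lookup(key, &kMyHelper, /*context*/ nullptr, Cache::Priority::LOW, ...)` replace the old deleter- and CreateCallback-based overloads, as the diff below shows.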
## typed_cache.h

Adds various "typed" interfaces to the Cache as internal APIs, so that most uses of Cache can use simple, type-safe code without casting and without explicit deleters, etc. Almost all of the non-test, non-glue uses of Cache have been migrated. (Follow-up work: CompressedSecondaryCache deserves deeper attention to migrate.) This change expands RocksDB's internal usage of metaprogramming and SFINAE (https://en.cppreference.com/w/cpp/language/sfinae).

The existing usages of Cache are divided up at a high level into these new interfaces. See the updated existing uses of Cache for examples of how these are used.

* PlaceholderCacheInterface - Used for making cache reservations, with entries that have a charge but no value.
* BasicTypedCacheInterface<TValue> - Used for primary cache storage of objects of type TValue, which can be cleaned up with std::default_delete<TValue>. The role is provided by TValue::kCacheEntryRole or given in an optional template parameter.
* FullTypedCacheInterface<TValue, TCreateContext> - Used for secondary-cache-compatible storage of objects of type TValue. In addition to the BasicTypedCacheInterface constraints, we require TValue::ContentSlice() to return persistable data. This simplifies usage for the normal case of simple secondary cache compatibility (can give you a Slice to the data already in memory). In addition to TCreateContext performing the role of Cache::CreateContext, it is also expected to provide a factory function for creating TValue.
* For each of these, there's a "Shared" version (e.g. FullTypedSharedCacheInterface) that holds a shared_ptr to the Cache, rather than assuming external ownership by holding only a raw `Cache*`.

These interfaces introduce specific handle types for each interface instantiation, so that it's easy to see what kind of object is controlled by a handle. (Ultimately, this might not be worth the extra complexity, but it seems OK so far.)

Note: I attempted to make the cache "charge" automatically inferred from the cache object type, such as by expecting an ApproximateMemoryUsage() function, but this is not so clean, because there are cases where we need to compute the charge ahead of time and don't want to re-compute it.
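As a taste of the typed API, here is roughly what CacheEntryStatsCollector::GetShared looks like after this change, condensed from the cache_entry_stats.h hunk in the diff below (error handling and the anti-race mutex are omitted; `raw_cache`, `clock`, and `ptr` come from the surrounding function):

```cpp
// Condensed sketch from this diff's cache/cache_entry_stats.h changes.
// A raw Cache* is wrapped in a typed interface; no casts, no explicit
// deleter, and a helper with role kMisc is supplied automatically.
BasicTypedCacheInterface<CacheEntryStatsCollector, CacheEntryRole::kMisc>
    cache{raw_cache};

auto h = cache.Lookup(cache_key);  // typed handle
if (h == nullptr) {
  auto new_ptr = new CacheEntryStatsCollector(cache.get(), clock);
  size_t charge = 0;
  // kBasicHelper deletes via std::default_delete<CacheEntryStatsCollector>
  Status s =
      cache.Insert(cache_key, new_ptr, charge, &h, Cache::Priority::HIGH);
}
// Aliasing shared_ptr that keeps the entry pinned while references exist:
*ptr = cache.SharedGuard(h);
```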
## block_cache.h

This header is essentially the replacement for the old block_like_traits.h. It includes various things to support block cache access with typed_cache.h for block-based table.

## block_based_table_reader.cc

Before this change, accessing the block cache here was an awkward mix of static polymorphism (template TBlocklike) and switch-case on a dynamic BlockType value. This change mostly unifies on static polymorphism, relying on minor hacks in block_cache.h to distinguish variants of Block. We still check BlockType in some places (especially for stats, which could be improved in follow-up work), but at least the BlockType is a static constant from the template parameter. (No more awkward partial redundancy between static and dynamic info.) This likely contributes to the overall performance improvement, but hasn't been tested in isolation.

The other key source of simplification here is a more unified system of creating block cache objects: for directly populating from primary cache and for promotion from secondary cache. Both use BlockCreateContext, for context and for factory functions.

## block_based_table_builder.cc, cache_dump_load_impl.cc

Before this change, warming caches was super ugly code. Both of these source files had switch statements to basically transition from the dynamic BlockType world to the static TBlocklike world. None of that mess is needed anymore, as there's a new, untyped WarmInCache function that handles all the details just as promotion from SecondaryCache would. (Fixes `TODO akanksha: Dedup below code` in block_based_table_builder.cc.)

## Everything else

Mostly just updating Cache users to use the new typed APIs when reasonably possible, or the changed Cache APIs when not.

Pull Request resolved: https://github.com/facebook/rocksdb/pull/10975

Test Plan: tests updated

Performance test setup similar to https://github.com/facebook/rocksdb/issues/10626 (by cache size; LRUCache when not "hyper" for HyperClockCache):

34MB 1thread base.hyper -> kops/s: 0.745 io_bytes/op: 2.52504e+06 miss_ratio: 0.140906 max_rss_mb: 76.4844
34MB 1thread new.hyper -> kops/s: 0.751 io_bytes/op: 2.5123e+06 miss_ratio: 0.140161 max_rss_mb: 79.3594
34MB 1thread base -> kops/s: 0.254 io_bytes/op: 1.36073e+07 miss_ratio: 0.918818 max_rss_mb: 45.9297
34MB 1thread new -> kops/s: 0.252 io_bytes/op: 1.36157e+07 miss_ratio: 0.918999 max_rss_mb: 44.1523
34MB 32thread base.hyper -> kops/s: 7.272 io_bytes/op: 2.88323e+06 miss_ratio: 0.162532 max_rss_mb: 516.602
34MB 32thread new.hyper -> kops/s: 7.214 io_bytes/op: 2.99046e+06 miss_ratio: 0.168818 max_rss_mb: 518.293
34MB 32thread base -> kops/s: 3.528 io_bytes/op: 1.35722e+07 miss_ratio: 0.914691 max_rss_mb: 264.926
34MB 32thread new -> kops/s: 3.604 io_bytes/op: 1.35744e+07 miss_ratio: 0.915054 max_rss_mb: 264.488
233MB 1thread base.hyper -> kops/s: 53.909 io_bytes/op: 2552.35 miss_ratio: 0.0440566 max_rss_mb: 241.984
233MB 1thread new.hyper -> kops/s: 62.792 io_bytes/op: 2549.79 miss_ratio: 0.044043 max_rss_mb: 241.922
233MB 1thread base -> kops/s: 1.197 io_bytes/op: 2.75173e+06 miss_ratio: 0.103093 max_rss_mb: 241.559
233MB 1thread new -> kops/s: 1.199 io_bytes/op: 2.73723e+06 miss_ratio: 0.10305 max_rss_mb: 240.93
233MB 32thread base.hyper -> kops/s: 1298.69 io_bytes/op: 2539.12 miss_ratio: 0.0440307 max_rss_mb: 371.418
233MB 32thread new.hyper -> kops/s: 1421.35 io_bytes/op: 2538.75 miss_ratio: 0.0440307 max_rss_mb: 347.273
233MB 32thread base -> kops/s: 9.693 io_bytes/op: 2.77304e+06 miss_ratio: 0.103745 max_rss_mb: 569.691
233MB 32thread new -> kops/s: 9.75 io_bytes/op: 2.77559e+06 miss_ratio: 0.103798 max_rss_mb: 552.82
1597MB 1thread base.hyper -> kops/s: 58.607 io_bytes/op: 1449.14 miss_ratio: 0.0249324 max_rss_mb: 1583.55
1597MB 1thread new.hyper -> kops/s: 69.6 io_bytes/op: 1434.89 miss_ratio: 0.0247167 max_rss_mb: 1584.02
1597MB 1thread base -> kops/s: 60.478 io_bytes/op: 1421.28 miss_ratio: 0.024452 max_rss_mb: 1589.45
1597MB 1thread new -> kops/s: 63.973 io_bytes/op: 1416.07 miss_ratio: 0.0243766 max_rss_mb: 1589.24
1597MB 32thread base.hyper -> kops/s: 1436.2 io_bytes/op: 1357.93 miss_ratio: 0.0235353 max_rss_mb: 1692.92
1597MB 32thread new.hyper -> kops/s: 1605.03 io_bytes/op: 1358.04 miss_ratio: 0.023538 max_rss_mb: 1702.78
1597MB 32thread base -> kops/s: 280.059 io_bytes/op: 1350.34 miss_ratio: 0.023289 max_rss_mb: 1675.36
1597MB 32thread new -> kops/s: 283.125 io_bytes/op: 1351.05 miss_ratio: 0.0232797 max_rss_mb: 1703.83

Results are almost uniformly improved over the base revision, especially for hot paths with HyperClockCache: up to 12% higher throughput seen (1597MB, 32thread, hyper: 1605.03 vs. 1436.2 kops/s). That improvement likely comes mostly from the much simplified code for providing context for secondary cache promotion (CreateCallback/CreateContext), and possibly from less branching in block_based_table_reader. There is also likely a small improvement from not reconstituting the key for DeleterFn.

Reviewed By: anand1976

Differential Revision: D42417818

Pulled By: pdillinger

fbshipit-source-id: f86bfdd584dce27c028b151ba56818ad14f7a432
parent 0a2d3b663a
commit 9f7801c5f1
CMakeLists.txt

@@ -649,6 +649,7 @@ set(SOURCES
         cache/cache.cc
         cache/cache_entry_roles.cc
         cache/cache_key.cc
+        cache/cache_helpers.cc
         cache/cache_reservation_manager.cc
         cache/charged_cache.cc
         cache/clock_cache.cc
@@ -806,6 +807,7 @@ set(SOURCES
         table/block_based/block_based_table_iterator.cc
         table/block_based/block_based_table_reader.cc
         table/block_based/block_builder.cc
+        table/block_based/block_cache.cc
         table/block_based/block_prefetcher.cc
         table/block_based/block_prefix_index.cc
         table/block_based/data_block_hash_index.cc
HISTORY.md

@@ -19,10 +19,11 @@
 
 ### New Features
 * When an SstPartitionerFactory is configured, CompactRange() now automatically selects for compaction any files overlapping a partition boundary that is in the compaction range, even if no actual entries are in the requested compaction range. With this feature, manual compaction can be used to (re-)establish SST partition points when SstPartitioner changes, without a full compaction.
 
-### New Features
 * Add BackupEngine feature to exclude files from backup that are known to be backed up elsewhere, using `CreateBackupOptions::exclude_files_callback`. To restore the DB, the excluded files must be provided in alternative backup directories using `RestoreOptions::alternate_dirs`.
 
+### Public API Changes
+* Substantial changes have been made to the Cache class to support internal development goals. Direct use of Cache class members is discouraged and further breaking modifications are expected in the future. SecondaryCache has some related changes and implementations will need to be updated. (Unlike Cache, SecondaryCache is still intended to support user implementations, and disruptive changes will be avoided.) (#10975)
 
 ## 7.9.0 (11/21/2022)
 ### Performance Improvements
 * Fixed an iterator performance regression for delete range users when scanning through a consecutive sequence of range tombstones (#10877).
TARGETS (4 changes)

@@ -11,6 +11,7 @@ load("//rocks/buckifier:defs.bzl", "cpp_library_wrapper","rocks_cpp_library_wrap
 cpp_library_wrapper(name="rocksdb_lib", srcs=[
     "cache/cache.cc",
     "cache/cache_entry_roles.cc",
+    "cache/cache_helpers.cc",
     "cache/cache_key.cc",
     "cache/cache_reservation_manager.cc",
     "cache/charged_cache.cc",
@@ -180,6 +181,7 @@ cpp_library_wrapper(name="rocksdb_lib", srcs=[
     "table/block_based/block_based_table_iterator.cc",
     "table/block_based/block_based_table_reader.cc",
     "table/block_based/block_builder.cc",
+    "table/block_based/block_cache.cc",
     "table/block_based/block_prefetcher.cc",
     "table/block_based/block_prefix_index.cc",
     "table/block_based/data_block_footer.cc",
@@ -352,6 +354,7 @@ cpp_library_wrapper(name="rocksdb_lib", srcs=[
 cpp_library_wrapper(name="rocksdb_whole_archive_lib", srcs=[
     "cache/cache.cc",
     "cache/cache_entry_roles.cc",
+    "cache/cache_helpers.cc",
     "cache/cache_key.cc",
     "cache/cache_reservation_manager.cc",
     "cache/charged_cache.cc",
@@ -521,6 +524,7 @@ cpp_library_wrapper(name="rocksdb_whole_archive_lib", srcs=[
     "table/block_based/block_based_table_iterator.cc",
     "table/block_based/block_based_table_reader.cc",
     "table/block_based/block_builder.cc",
+    "table/block_based/block_cache.cc",
     "table/block_based/block_prefetcher.cc",
     "table/block_based/block_prefix_index.cc",
     "table/block_based/data_block_footer.cc",
cache/cache_bench_tool.cc

@@ -226,7 +226,7 @@ struct KeyGen {
   }
 };
 
-char* createValue(Random64& rnd) {
+Cache::ObjectPtr createValue(Random64& rnd) {
   char* rv = new char[FLAGS_value_bytes];
   // Fill with some filler data, and take some CPU time
   for (uint32_t i = 0; i < FLAGS_value_bytes; i += 8) {
@@ -236,28 +236,33 @@ char* createValue(Random64& rnd) {
 }
 
 // Callbacks for secondary cache
-size_t SizeFn(void* /*obj*/) { return FLAGS_value_bytes; }
+size_t SizeFn(Cache::ObjectPtr /*obj*/) { return FLAGS_value_bytes; }
 
-Status SaveToFn(void* obj, size_t /*offset*/, size_t size, void* out) {
-  memcpy(out, obj, size);
+Status SaveToFn(Cache::ObjectPtr from_obj, size_t /*from_offset*/,
+                size_t length, char* out) {
+  memcpy(out, from_obj, length);
   return Status::OK();
 }
 
-// Different deleters to simulate using deleter to gather
-// stats on the code origin and kind of cache entries.
-void deleter1(const Slice& /*key*/, void* value) {
-  delete[] static_cast<char*>(value);
-}
-void deleter2(const Slice& /*key*/, void* value) {
-  delete[] static_cast<char*>(value);
-}
-void deleter3(const Slice& /*key*/, void* value) {
+Status CreateFn(const Slice& data, Cache::CreateContext* /*context*/,
+                MemoryAllocator* /*allocator*/, Cache::ObjectPtr* out_obj,
+                size_t* out_charge) {
+  *out_obj = new char[data.size()];
+  memcpy(*out_obj, data.data(), data.size());
+  *out_charge = data.size();
+  return Status::OK();
+};
+
+void DeleteFn(Cache::ObjectPtr value, MemoryAllocator* /*alloc*/) {
   delete[] static_cast<char*>(value);
 }
 
-Cache::CacheItemHelper helper1(SizeFn, SaveToFn, deleter1);
-Cache::CacheItemHelper helper2(SizeFn, SaveToFn, deleter2);
-Cache::CacheItemHelper helper3(SizeFn, SaveToFn, deleter3);
+Cache::CacheItemHelper helper1(CacheEntryRole::kDataBlock, DeleteFn, SizeFn,
+                               SaveToFn, CreateFn);
+Cache::CacheItemHelper helper2(CacheEntryRole::kIndexBlock, DeleteFn, SizeFn,
+                               SaveToFn, CreateFn);
+Cache::CacheItemHelper helper3(CacheEntryRole::kFilterBlock, DeleteFn, SizeFn,
+                               SaveToFn, CreateFn);
 }  // namespace
 
 class CacheBench {
@@ -436,7 +441,7 @@ class CacheBench {
     uint64_t total_entry_count = 0;
     uint64_t table_occupancy = 0;
     uint64_t table_size = 0;
-    std::set<Cache::DeleterFn> deleters;
+    std::set<const Cache::CacheItemHelper*> helpers;
     StopWatchNano timer(clock);
 
     for (;;) {
@@ -461,7 +466,7 @@ class CacheBench {
              << BytesToHumanString(static_cast<uint64_t>(
                     1.0 * total_charge / total_entry_count))
              << "\n"
-             << "Unique deleters: " << deleters.size() << "\n";
+             << "Unique helpers: " << helpers.size() << "\n";
        *stats_report = ostr.str();
        return;
      }
@@ -477,14 +482,14 @@ class CacheBench {
      total_key_size = 0;
      total_charge = 0;
      total_entry_count = 0;
-     deleters.clear();
-     auto fn = [&](const Slice& key, void* /*value*/, size_t charge,
-                   Cache::DeleterFn deleter) {
+     helpers.clear();
+     auto fn = [&](const Slice& key, Cache::ObjectPtr /*value*/, size_t charge,
+                   const Cache::CacheItemHelper* helper) {
        total_key_size += key.size();
        total_charge += charge;
        ++total_entry_count;
-       // Something slightly more expensive as in (future) stats by category
-       deleters.insert(deleter);
+       // Something slightly more expensive as in stats by category
+       helpers.insert(helper);
      };
      timer.Start();
      Cache::ApplyToAllEntriesOptions opts;
@@ -533,14 +538,6 @@ class CacheBench {
     for (uint64_t i = 0; i < FLAGS_ops_per_thread; i++) {
       Slice key = gen.GetRand(thread->rnd, max_key_, max_log_);
       uint64_t random_op = thread->rnd.Next();
-      Cache::CreateCallback create_cb = [](const void* buf, size_t size,
-                                           void** out_obj,
-                                           size_t* charge) -> Status {
-        *out_obj = reinterpret_cast<void*>(new char[size]);
-        memcpy(*out_obj, buf, size);
-        *charge = size;
-        return Status::OK();
-      };
 
       timer.Start();
 
@@ -550,8 +547,8 @@ class CacheBench {
           handle = nullptr;
         }
         // do lookup
-        handle = cache_->Lookup(key, &helper2, create_cb, Cache::Priority::LOW,
-                                true);
+        handle = cache_->Lookup(key, &helper2, /*context*/ nullptr,
+                                Cache::Priority::LOW, true);
         if (handle) {
           if (!FLAGS_lean) {
             // do something with the data
@@ -579,8 +576,8 @@ class CacheBench {
           handle = nullptr;
         }
         // do lookup
-        handle = cache_->Lookup(key, &helper2, create_cb, Cache::Priority::LOW,
-                                true);
+        handle = cache_->Lookup(key, &helper2, /*context*/ nullptr,
+                                Cache::Priority::LOW, true);
         if (handle) {
           if (!FLAGS_lean) {
             // do something with the data
cache/cache_entry_roles.cc

@@ -101,34 +101,4 @@ std::string BlockCacheEntryStatsMapKeys::UsedPercent(CacheEntryRole role) {
   return GetPrefixedCacheEntryRoleName(kPrefix, role);
 }
 
-namespace {
-
-struct Registry {
-  std::mutex mutex;
-  UnorderedMap<Cache::DeleterFn, CacheEntryRole> role_map;
-  void Register(Cache::DeleterFn fn, CacheEntryRole role) {
-    std::lock_guard<std::mutex> lock(mutex);
-    role_map[fn] = role;
-  }
-  UnorderedMap<Cache::DeleterFn, CacheEntryRole> Copy() {
-    std::lock_guard<std::mutex> lock(mutex);
-    return role_map;
-  }
-};
-
-Registry& GetRegistry() {
-  STATIC_AVOID_DESTRUCTION(Registry, registry);
-  return registry;
-}
-
-}  // namespace
-
-void RegisterCacheDeleterRole(Cache::DeleterFn fn, CacheEntryRole role) {
-  GetRegistry().Register(fn, role);
-}
-
-UnorderedMap<Cache::DeleterFn, CacheEntryRole> CopyCacheDeleterRoleMap() {
-  return GetRegistry().Copy();
-}
-
 }  // namespace ROCKSDB_NAMESPACE
cache/cache_entry_roles.h

@@ -7,11 +7,8 @@
 
 #include <array>
 #include <cstdint>
-#include <memory>
-#include <type_traits>
 
 #include "rocksdb/cache.h"
-#include "util/hash_containers.h"
 
 namespace ROCKSDB_NAMESPACE {
 
@@ -20,84 +17,4 @@ extern std::array<std::string, kNumCacheEntryRoles>
 extern std::array<std::string, kNumCacheEntryRoles>
     kCacheEntryRoleToHyphenString;
 
-// To associate cache entries with their role, we use a hack on the
-// existing Cache interface. Because the deleter of an entry can authenticate
-// the code origin of an entry, we can elaborate the choice of deleter to
-// also encode role information, without inferring false role information
-// from entries not choosing to encode a role.
-//
-// The rest of this file is for handling mappings between deleters and
-// roles.
-
-// To infer a role from a deleter, the deleter must be registered. This
-// can be done "manually" with this function. This function is thread-safe,
-// and the registration mappings go into private but static storage. (Note
-// that DeleterFn is a function pointer, not std::function. Registrations
-// should not be too many.)
-void RegisterCacheDeleterRole(Cache::DeleterFn fn, CacheEntryRole role);
-
-// Gets a copy of the registered deleter -> role mappings. This is the only
-// function for reading the mappings made with RegisterCacheDeleterRole.
-// Why only this interface for reading?
-// * This function has to be thread safe, which could incur substantial
-//   overhead. We should not pay this overhead for every deleter look-up.
-// * This is suitable for preparing for batch operations, like with
-//   CacheEntryStatsCollector.
-// * The number of mappings should be sufficiently small (dozens).
-UnorderedMap<Cache::DeleterFn, CacheEntryRole> CopyCacheDeleterRoleMap();
-
-// ************************************************************** //
-// An automatic registration infrastructure. This enables code
-// to simply ask for a deleter associated with a particular type
-// and role, and registration is automatic. In a sense, this is
-// a small dependency injection infrastructure, because linking
-// in new deleter instantiations is essentially sufficient for
-// making stats collection (using CopyCacheDeleterRoleMap) aware
-// of them.
-
-namespace cache_entry_roles_detail {
-
-template <typename T, CacheEntryRole R>
-struct RegisteredDeleter {
-  RegisteredDeleter() { RegisterCacheDeleterRole(Delete, R); }
-
-  // These have global linkage to help ensure compiler optimizations do not
-  // break uniqueness for each <T,R>
-  static void Delete(const Slice& /* key */, void* value) {
-    // Supports T == Something[], unlike delete operator
-    std::default_delete<T>()(
-        static_cast<typename std::remove_extent<T>::type*>(value));
-  }
-};
-
-template <CacheEntryRole R>
-struct RegisteredNoopDeleter {
-  RegisteredNoopDeleter() { RegisterCacheDeleterRole(Delete, R); }
-
-  static void Delete(const Slice& /* key */, void* /* value */) {
-    // Here was `assert(value == nullptr);` but we can also put pointers
-    // to static data in Cache, for testing at least.
-  }
-};
-
-}  // namespace cache_entry_roles_detail
-
-// Get an automatically registered deleter for value type T and role R.
-// Based on C++ semantics, registration is invoked exactly once in a
-// thread-safe way on first call to this function, for each <T, R>.
-template <typename T, CacheEntryRole R>
-Cache::DeleterFn GetCacheEntryDeleterForRole() {
-  static cache_entry_roles_detail::RegisteredDeleter<T, R> reg;
-  return reg.Delete;
-}
-
-// Get an automatically registered no-op deleter (value should be nullptr)
-// and associated with role R. This is used for Cache "reservation" entries
-// such as for WriteBufferManager.
-template <CacheEntryRole R>
-Cache::DeleterFn GetNoopDeleterForRole() {
-  static cache_entry_roles_detail::RegisteredNoopDeleter<R> reg;
-  return reg.Delete;
-}
-
 }  // namespace ROCKSDB_NAMESPACE
cache/cache_entry_stats.h

@@ -10,8 +10,8 @@
 #include <memory>
 #include <mutex>
 
-#include "cache/cache_helpers.h"
 #include "cache/cache_key.h"
+#include "cache/typed_cache.h"
 #include "port/lang.h"
 #include "rocksdb/cache.h"
 #include "rocksdb/status.h"
@@ -111,11 +111,14 @@ class CacheEntryStatsCollector {
   // Gets or creates a shared instance of CacheEntryStatsCollector in the
   // cache itself, and saves into `ptr`. This shared_ptr will hold the
   // entry in cache until all refs are destroyed.
-  static Status GetShared(Cache *cache, SystemClock *clock,
+  static Status GetShared(Cache *raw_cache, SystemClock *clock,
                           std::shared_ptr<CacheEntryStatsCollector> *ptr) {
-    const Slice &cache_key = GetCacheKey();
+    assert(raw_cache);
+    BasicTypedCacheInterface<CacheEntryStatsCollector, CacheEntryRole::kMisc>
+        cache{raw_cache};
 
-    Cache::Handle *h = cache->Lookup(cache_key);
+    const Slice &cache_key = GetCacheKey();
+    auto h = cache.Lookup(cache_key);
     if (h == nullptr) {
       // Not yet in cache, but Cache doesn't provide a built-in way to
       // avoid racing insert. So we double-check under a shared mutex,
@@ -123,15 +126,15 @@ class CacheEntryStatsCollector {
       STATIC_AVOID_DESTRUCTION(std::mutex, static_mutex);
       std::lock_guard<std::mutex> lock(static_mutex);
 
-      h = cache->Lookup(cache_key);
+      h = cache.Lookup(cache_key);
       if (h == nullptr) {
-        auto new_ptr = new CacheEntryStatsCollector(cache, clock);
+        auto new_ptr = new CacheEntryStatsCollector(cache.get(), clock);
         // TODO: non-zero charge causes some tests that count block cache
         // usage to go flaky. Fix the problem somehow so we can use an
        // accurate charge.
        size_t charge = 0;
-        Status s = cache->Insert(cache_key, new_ptr, charge, Deleter, &h,
-                                 Cache::Priority::HIGH);
+        Status s =
+            cache.Insert(cache_key, new_ptr, charge, &h, Cache::Priority::HIGH);
        if (!s.ok()) {
          assert(h == nullptr);
          delete new_ptr;
@@ -140,11 +143,11 @@ class CacheEntryStatsCollector {
       }
     }
     // If we reach here, shared entry is in cache with handle `h`.
-    assert(cache->GetDeleter(h) == Deleter);
+    assert(cache.get()->GetCacheItemHelper(h) == &cache.kBasicHelper);
 
     // Build an aliasing shared_ptr that keeps `ptr` in cache while there
     // are references.
-    *ptr = MakeSharedCacheHandleGuard<CacheEntryStatsCollector>(cache, h);
+    *ptr = cache.SharedGuard(h);
     return Status::OK();
   }
 
@@ -157,10 +160,6 @@ class CacheEntryStatsCollector {
         cache_(cache),
         clock_(clock) {}
 
-  static void Deleter(const Slice &, void *value) {
-    delete static_cast<CacheEntryStatsCollector *>(value);
-  }
-
   static const Slice &GetCacheKey() {
     // For each template instantiation
     static CacheKey ckey = CacheKey::CreateUniqueForProcessLifetime();
cache/cache_helpers.cc (new file)

@@ -0,0 +1,40 @@
+// Copyright (c) Meta Platforms, Inc. and affiliates.
+// This source code is licensed under both the GPLv2 (found in the
+// COPYING file in the root directory) and Apache 2.0 License
+// (found in the LICENSE.Apache file in the root directory).
+
+#include "cache/cache_helpers.h"
+
+namespace ROCKSDB_NAMESPACE {
+
+void ReleaseCacheHandleCleanup(void* arg1, void* arg2) {
+  Cache* const cache = static_cast<Cache*>(arg1);
+  assert(cache);
+
+  Cache::Handle* const cache_handle = static_cast<Cache::Handle*>(arg2);
+  assert(cache_handle);
+
+  cache->Release(cache_handle);
+}
+
+Status WarmInCache(Cache* cache, const Slice& key, const Slice& saved,
+                   Cache::CreateContext* create_context,
+                   const Cache::CacheItemHelper* helper,
+                   Cache::Priority priority, size_t* out_charge) {
+  assert(helper);
+  assert(helper->create_cb);
+  Cache::ObjectPtr value;
+  size_t charge;
+  Status st = helper->create_cb(saved, create_context,
+                                cache->memory_allocator(), &value, &charge);
+  if (st.ok()) {
+    st =
+        cache->Insert(key, value, helper, charge, /*handle*/ nullptr, priority);
+    if (out_charge) {
+      *out_charge = charge;
+    }
+  }
+  return st;
+}
+
+}  // namespace ROCKSDB_NAMESPACE
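For example, a caller holding the persisted form of a block could warm a cache roughly like this (a sketch; `cache`, `key`, `saved`, `create_ctx`, and `helper` are assumed to be whatever the caller already has for that block type):

```cpp
// Sketch of using the new untyped WarmInCache from this commit; only the
// function itself and its signature are from the diff above.
size_t charge = 0;
Status s = WarmInCache(cache, key, saved, &create_ctx, helper,
                       Cache::Priority::LOW, &charge);
if (s.ok()) {
  // The entry was parsed via helper->create_cb and inserted with `charge`,
  // just as a promotion from SecondaryCache would be.
}
```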
cache/cache_helpers.h

@@ -17,22 +17,17 @@ template <typename T>
 T* GetFromCacheHandle(Cache* cache, Cache::Handle* handle) {
   assert(cache);
   assert(handle);
 
   return static_cast<T*>(cache->Value(handle));
 }
 
-// Simple generic deleter for Cache (to be used with Cache::Insert).
-template <typename T>
-void DeleteCacheEntry(const Slice& /* key */, void* value) {
-  delete static_cast<T*>(value);
-}
-
 // Turns a T* into a Slice so it can be used as a key with Cache.
 template <typename T>
-Slice GetSlice(const T* t) {
+Slice GetSliceForKey(const T* t) {
   return Slice(reinterpret_cast<const char*>(t), sizeof(T));
 }
 
+void ReleaseCacheHandleCleanup(void* arg1, void* arg2);
+
 // Generic resource management object for cache handles that releases the handle
 // when destroyed. Has unique ownership of the handle, so copying it is not
 // allowed, while moving it transfers ownership.
@@ -88,7 +83,7 @@ class CacheHandleGuard {
     if (cleanable) {
       if (handle_ != nullptr) {
         assert(cache_);
-        cleanable->RegisterCleanup(&ReleaseCacheHandle, cache_, handle_);
+        cleanable->RegisterCleanup(&ReleaseCacheHandleCleanup, cache_, handle_);
       }
     }
     ResetFields();
@@ -115,16 +110,6 @@ class CacheHandleGuard {
     value_ = nullptr;
   }
 
-  static void ReleaseCacheHandle(void* arg1, void* arg2) {
-    Cache* const cache = static_cast<Cache*>(arg1);
-    assert(cache);
-
-    Cache::Handle* const cache_handle = static_cast<Cache::Handle*>(arg2);
-    assert(cache_handle);
-
-    cache->Release(cache_handle);
-  }
-
  private:
   Cache* cache_ = nullptr;
   Cache::Handle* handle_ = nullptr;
@@ -139,7 +124,16 @@ template <typename T>
 std::shared_ptr<T> MakeSharedCacheHandleGuard(Cache* cache,
                                               Cache::Handle* handle) {
   auto wrapper = std::make_shared<CacheHandleGuard<T>>(cache, handle);
-  return std::shared_ptr<T>(wrapper, static_cast<T*>(cache->Value(handle)));
+  return std::shared_ptr<T>(wrapper, GetFromCacheHandle<T>(cache, handle));
 }
 
+// Given the persistable data (saved) for a block cache entry, parse that
+// into a cache entry object and insert it into the given cache. The charge
+// of the new entry can be returned to the caller through `out_charge`.
+Status WarmInCache(Cache* cache, const Slice& key, const Slice& saved,
+                   Cache::CreateContext* create_context,
+                   const Cache::CacheItemHelper* helper,
+                   Cache::Priority priority = Cache::Priority::LOW,
+                   size_t* out_charge = nullptr);
+
 }  // namespace ROCKSDB_NAMESPACE
cache/cache_reservation_manager.cc

@@ -13,7 +13,6 @@
 #include <cstring>
 #include <memory>
 
-#include "cache/cache_entry_roles.h"
 #include "rocksdb/cache.h"
 #include "rocksdb/slice.h"
 #include "rocksdb/status.h"
@@ -41,17 +40,17 @@ CacheReservationManagerImpl<
 template <CacheEntryRole R>
 CacheReservationManagerImpl<R>::CacheReservationManagerImpl(
     std::shared_ptr<Cache> cache, bool delayed_decrease)
-    : delayed_decrease_(delayed_decrease),
+    : cache_(cache),
+      delayed_decrease_(delayed_decrease),
       cache_allocated_size_(0),
       memory_used_(0) {
   assert(cache != nullptr);
-  cache_ = cache;
 }
 
 template <CacheEntryRole R>
 CacheReservationManagerImpl<R>::~CacheReservationManagerImpl() {
   for (auto* handle : dummy_handles_) {
-    cache_->Release(handle, true);
+    cache_.ReleaseAndEraseIfLastRef(handle);
   }
 }
 
@@ -115,8 +114,7 @@ Status CacheReservationManagerImpl<R>::IncreaseCacheReservation(
   Status return_status = Status::OK();
   while (new_mem_used > cache_allocated_size_.load(std::memory_order_relaxed)) {
     Cache::Handle* handle = nullptr;
-    return_status = cache_->Insert(GetNextCacheKey(), nullptr, kSizeDummyEntry,
-                                   GetNoopDeleterForRole<R>(), &handle);
+    return_status = cache_.Insert(GetNextCacheKey(), kSizeDummyEntry, &handle);
 
     if (return_status != Status::OK()) {
       return return_status;
@@ -141,7 +139,7 @@ Status CacheReservationManagerImpl<R>::DecreaseCacheReservation(
          cache_allocated_size_.load(std::memory_order_relaxed)) {
     assert(!dummy_handles_.empty());
     auto* handle = dummy_handles_.back();
-    cache_->Release(handle, true);
+    cache_.ReleaseAndEraseIfLastRef(handle);
     dummy_handles_.pop_back();
     cache_allocated_size_ -= kSizeDummyEntry;
   }
@@ -169,8 +167,9 @@ Slice CacheReservationManagerImpl<R>::GetNextCacheKey() {
 }
 
 template <CacheEntryRole R>
-Cache::DeleterFn CacheReservationManagerImpl<R>::TEST_GetNoopDeleterForRole() {
-  return GetNoopDeleterForRole<R>();
+const Cache::CacheItemHelper*
+CacheReservationManagerImpl<R>::TEST_GetCacheItemHelperForRole() {
+  return &CacheInterface::kHelper;
 }
 
 template class CacheReservationManagerImpl<

cache/cache_reservation_manager.h

@@ -18,7 +18,7 @@
 
 #include "cache/cache_entry_roles.h"
 #include "cache/cache_key.h"
-#include "rocksdb/cache.h"
+#include "cache/typed_cache.h"
 #include "rocksdb/slice.h"
 #include "rocksdb/status.h"
 #include "util/coding.h"
@@ -197,10 +197,10 @@ class CacheReservationManagerImpl
 
   static constexpr std::size_t GetDummyEntrySize() { return kSizeDummyEntry; }
 
-  // For testing only - it is to help ensure the NoopDeleterForRole<R>
+  // For testing only - it is to help ensure the CacheItemHelperForRole<R>
   // accessed from CacheReservationManagerImpl and the one accessed from the
   // test are from the same translation units
-  static Cache::DeleterFn TEST_GetNoopDeleterForRole();
+  static const Cache::CacheItemHelper *TEST_GetCacheItemHelperForRole();
 
  private:
  static constexpr std::size_t kSizeDummyEntry = 256 * 1024;
@@ -211,7 +211,8 @@ class CacheReservationManagerImpl
   Status IncreaseCacheReservation(std::size_t new_mem_used);
   Status DecreaseCacheReservation(std::size_t new_mem_used);
 
-  std::shared_ptr<Cache> cache_;
+  using CacheInterface = PlaceholderSharedCacheInterface<R>;
+  CacheInterface cache_;
   bool delayed_decrease_;
   std::atomic<std::size_t> cache_allocated_size_;
   std::size_t memory_used_;
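The effect of the placeholder interface is visible above: a reservation entry needs neither a value nor an explicit helper at the call site. A condensed sketch of the pattern, under the assumption that the interface is constructed from a shared_ptr<Cache> as in the member initializer above (the role and surrounding variables are illustrative):

```cpp
// Condensed from the cache_reservation_manager changes in this diff:
// charge-only dummy entries through the placeholder interface.
PlaceholderSharedCacheInterface<CacheEntryRole::kMisc> cache{shared_cache};
Cache::Handle* handle = nullptr;
Status s = cache.Insert(GetNextCacheKey(), kSizeDummyEntry, &handle);
// ... later, drop the reservation and erase the dummy entry:
cache.ReleaseAndEraseIfLastRef(handle);
```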
@ -16,6 +16,7 @@
|
||||||
#include <vector>
|
#include <vector>
|
||||||
|
|
||||||
#include "cache/lru_cache.h"
|
#include "cache/lru_cache.h"
|
||||||
|
#include "cache/typed_cache.h"
|
||||||
#include "port/stack_trace.h"
|
#include "port/stack_trace.h"
|
||||||
#include "test_util/testharness.h"
|
#include "test_util/testharness.h"
|
||||||
#include "util/coding.h"
|
#include "util/coding.h"
|
||||||
|
@ -55,23 +56,31 @@ int DecodeKey32Bits(const Slice& k) {
|
||||||
return DecodeFixed32(k.data());
|
return DecodeFixed32(k.data());
|
||||||
}
|
}
|
||||||
|
|
||||||
void* EncodeValue(uintptr_t v) { return reinterpret_cast<void*>(v); }
|
Cache::ObjectPtr EncodeValue(uintptr_t v) {
|
||||||
|
return reinterpret_cast<Cache::ObjectPtr>(v);
|
||||||
|
}
|
||||||
|
|
||||||
int DecodeValue(void* v) {
|
int DecodeValue(void* v) {
|
||||||
return static_cast<int>(reinterpret_cast<uintptr_t>(v));
|
return static_cast<int>(reinterpret_cast<uintptr_t>(v));
|
||||||
}
|
}
|
||||||
|
|
||||||
void DumbDeleter(const Slice& /*key*/, void* /*value*/) {}
|
const Cache::CacheItemHelper kDumbHelper{
|
||||||
|
CacheEntryRole::kMisc,
|
||||||
|
[](Cache::ObjectPtr /*value*/, MemoryAllocator* /*alloc*/) {}};
|
||||||
|
|
||||||
void EraseDeleter1(const Slice& /*key*/, void* value) {
|
const Cache::CacheItemHelper kEraseOnDeleteHelper1{
|
||||||
Cache* cache = reinterpret_cast<Cache*>(value);
|
CacheEntryRole::kMisc,
|
||||||
cache->Erase("foo");
|
[](Cache::ObjectPtr value, MemoryAllocator* /*alloc*/) {
|
||||||
}
|
Cache* cache = static_cast<Cache*>(value);
|
||||||
|
cache->Erase("foo");
|
||||||
|
}};
|
||||||
|
|
||||||
void EraseDeleter2(const Slice& /*key*/, void* value) {
|
const Cache::CacheItemHelper kEraseOnDeleteHelper2{
|
||||||
Cache* cache = reinterpret_cast<Cache*>(value);
|
CacheEntryRole::kMisc,
|
||||||
cache->Erase(EncodeKey16Bytes(1234));
|
[](Cache::ObjectPtr value, MemoryAllocator* /*alloc*/) {
|
||||||
}
|
Cache* cache = static_cast<Cache*>(value);
|
||||||
|
cache->Erase(EncodeKey16Bytes(1234));
|
||||||
|
}};
|
||||||
|
|
||||||
const std::string kLRU = "lru";
|
const std::string kLRU = "lru";
|
||||||
const std::string kHyperClock = "hyper_clock";
|
const std::string kHyperClock = "hyper_clock";
|
||||||
|
@ -83,14 +92,11 @@ class CacheTest : public testing::TestWithParam<std::string> {
|
||||||
static CacheTest* current_;
|
static CacheTest* current_;
|
||||||
static std::string type_;
|
static std::string type_;
|
||||||
|
|
||||||
static void Deleter(const Slice& key, void* v) {
|
static void Deleter(Cache::ObjectPtr v, MemoryAllocator*) {
|
||||||
if (type_ == kHyperClock) {
|
|
||||||
current_->deleted_keys_.push_back(DecodeKey16Bytes(key));
|
|
||||||
} else {
|
|
||||||
current_->deleted_keys_.push_back(DecodeKey32Bits(key));
|
|
||||||
}
|
|
||||||
current_->deleted_values_.push_back(DecodeValue(v));
|
current_->deleted_values_.push_back(DecodeValue(v));
|
||||||
}
|
}
|
||||||
|
static constexpr Cache::CacheItemHelper kHelper{CacheEntryRole::kMisc,
|
||||||
|
&Deleter};
|
||||||
|
|
||||||
static const int kCacheSize = 1000;
|
static const int kCacheSize = 1000;
|
||||||
static const int kNumShardBits = 4;
|
static const int kNumShardBits = 4;
|
||||||
|
@ -98,7 +104,6 @@ class CacheTest : public testing::TestWithParam<std::string> {
|
||||||
static const int kCacheSize2 = 100;
|
static const int kCacheSize2 = 100;
|
||||||
static const int kNumShardBits2 = 2;
|
static const int kNumShardBits2 = 2;
|
||||||
|
|
||||||
std::vector<int> deleted_keys_;
|
|
||||||
std::vector<int> deleted_values_;
|
std::vector<int> deleted_values_;
|
||||||
std::shared_ptr<Cache> cache_;
|
std::shared_ptr<Cache> cache_;
|
||||||
std::shared_ptr<Cache> cache2_;
|
std::shared_ptr<Cache> cache2_;
|
||||||
|
@ -182,8 +187,8 @@ class CacheTest : public testing::TestWithParam<std::string> {
|
||||||
|
|
||||||
void Insert(std::shared_ptr<Cache> cache, int key, int value,
|
void Insert(std::shared_ptr<Cache> cache, int key, int value,
|
||||||
int charge = 1) {
|
int charge = 1) {
|
||||||
EXPECT_OK(cache->Insert(EncodeKey(key), EncodeValue(value), charge,
|
EXPECT_OK(
|
||||||
&CacheTest::Deleter));
|
cache->Insert(EncodeKey(key), EncodeValue(value), &kHelper, charge));
|
||||||
}
|
}
|
||||||
|
|
||||||
void Erase(std::shared_ptr<Cache> cache, int key) {
|
void Erase(std::shared_ptr<Cache> cache, int key) {
|
||||||
|
@ -236,10 +241,8 @@ TEST_P(CacheTest, UsageTest) {
|
||||||
key = EncodeKey(i);
|
key = EncodeKey(i);
|
||||||
}
|
}
|
||||||
auto kv_size = key.size() + 5;
|
auto kv_size = key.size() + 5;
|
||||||
ASSERT_OK(cache->Insert(key, reinterpret_cast<void*>(value), kv_size,
|
ASSERT_OK(cache->Insert(key, value, &kDumbHelper, kv_size));
|
||||||
DumbDeleter));
|
ASSERT_OK(precise_cache->Insert(key, value, &kDumbHelper, kv_size));
|
||||||
ASSERT_OK(precise_cache->Insert(key, reinterpret_cast<void*>(value),
|
|
||||||
kv_size, DumbDeleter));
|
|
||||||
usage += kv_size;
|
usage += kv_size;
|
||||||
ASSERT_EQ(usage, cache->GetUsage());
|
ASSERT_EQ(usage, cache->GetUsage());
|
||||||
if (type == kHyperClock) {
|
if (type == kHyperClock) {
|
||||||
|
@ -262,10 +265,8 @@ TEST_P(CacheTest, UsageTest) {
|
||||||
} else {
|
} else {
|
||||||
key = EncodeKey(static_cast<int>(1000 + i));
|
key = EncodeKey(static_cast<int>(1000 + i));
|
||||||
}
|
}
|
||||||
ASSERT_OK(cache->Insert(key, reinterpret_cast<void*>(value), key.size() + 5,
|
ASSERT_OK(cache->Insert(key, value, &kDumbHelper, key.size() + 5));
|
||||||
DumbDeleter));
|
ASSERT_OK(precise_cache->Insert(key, value, &kDumbHelper, key.size() + 5));
|
||||||
ASSERT_OK(precise_cache->Insert(key, reinterpret_cast<void*>(value),
|
|
||||||
key.size() + 5, DumbDeleter));
|
|
||||||
}
|
}
|
||||||
|
|
||||||
// the usage should be close to the capacity
|
// the usage should be close to the capacity
|
||||||
|
@ -320,11 +321,9 @@ TEST_P(CacheTest, PinnedUsageTest) {
|
||||||
auto kv_size = key.size() + 5;
|
auto kv_size = key.size() + 5;
|
||||||
Cache::Handle* handle;
|
Cache::Handle* handle;
|
||||||
Cache::Handle* handle_in_precise_cache;
|
Cache::Handle* handle_in_precise_cache;
|
||||||
ASSERT_OK(cache->Insert(key, reinterpret_cast<void*>(value), kv_size,
|
ASSERT_OK(cache->Insert(key, value, &kDumbHelper, kv_size, &handle));
|
||||||
DumbDeleter, &handle));
|
|
||||||
assert(handle);
|
assert(handle);
|
||||||
ASSERT_OK(precise_cache->Insert(key, reinterpret_cast<void*>(value),
|
ASSERT_OK(precise_cache->Insert(key, value, &kDumbHelper, kv_size,
|
||||||
kv_size, DumbDeleter,
|
|
||||||
&handle_in_precise_cache));
|
&handle_in_precise_cache));
|
||||||
assert(handle_in_precise_cache);
|
assert(handle_in_precise_cache);
|
||||||
pinned_usage += kv_size;
|
pinned_usage += kv_size;
|
||||||
|
@@ -365,10 +364,8 @@ TEST_P(CacheTest, PinnedUsageTest) {
     } else {
       key = EncodeKey(static_cast<int>(1000 + i));
     }
-    ASSERT_OK(cache->Insert(key, reinterpret_cast<void*>(value), key.size() + 5,
-                            DumbDeleter));
-    ASSERT_OK(precise_cache->Insert(key, reinterpret_cast<void*>(value),
-                                    key.size() + 5, DumbDeleter));
+    ASSERT_OK(cache->Insert(key, value, &kDumbHelper, key.size() + 5));
+    ASSERT_OK(precise_cache->Insert(key, value, &kDumbHelper, key.size() + 5));
   }
   ASSERT_EQ(pinned_usage, cache->GetPinnedUsage());
   ASSERT_EQ(precise_cache_pinned_usage, precise_cache->GetPinnedUsage());
@@ -416,8 +413,7 @@ TEST_P(CacheTest, HitAndMiss) {
   ASSERT_EQ(201, Lookup(200));
   ASSERT_EQ(-1, Lookup(300));
 
-  ASSERT_EQ(1U, deleted_keys_.size());
-  ASSERT_EQ(100, deleted_keys_[0]);
+  ASSERT_EQ(1U, deleted_values_.size());
   if (GetParam() == kHyperClock) {
     ASSERT_EQ(102, deleted_values_[0]);
   } else {
@@ -438,21 +434,20 @@ TEST_P(CacheTest, InsertSameKey) {
 
 TEST_P(CacheTest, Erase) {
   Erase(200);
-  ASSERT_EQ(0U, deleted_keys_.size());
+  ASSERT_EQ(0U, deleted_values_.size());
 
   Insert(100, 101);
   Insert(200, 201);
   Erase(100);
   ASSERT_EQ(-1, Lookup(100));
   ASSERT_EQ(201, Lookup(200));
-  ASSERT_EQ(1U, deleted_keys_.size());
-  ASSERT_EQ(100, deleted_keys_[0]);
+  ASSERT_EQ(1U, deleted_values_.size());
   ASSERT_EQ(101, deleted_values_[0]);
 
   Erase(100);
   ASSERT_EQ(-1, Lookup(100));
   ASSERT_EQ(201, Lookup(200));
-  ASSERT_EQ(1U, deleted_keys_.size());
+  ASSERT_EQ(1U, deleted_values_.size());
 }
 
 TEST_P(CacheTest, EntriesArePinned) {
@@ -469,23 +464,21 @@ TEST_P(CacheTest, EntriesArePinned) {
   Insert(100, 102);
   Cache::Handle* h2 = cache_->Lookup(EncodeKey(100));
   ASSERT_EQ(102, DecodeValue(cache_->Value(h2)));
-  ASSERT_EQ(0U, deleted_keys_.size());
+  ASSERT_EQ(0U, deleted_values_.size());
   ASSERT_EQ(2U, cache_->GetUsage());
 
   cache_->Release(h1);
-  ASSERT_EQ(1U, deleted_keys_.size());
-  ASSERT_EQ(100, deleted_keys_[0]);
+  ASSERT_EQ(1U, deleted_values_.size());
   ASSERT_EQ(101, deleted_values_[0]);
   ASSERT_EQ(1U, cache_->GetUsage());
 
   Erase(100);
   ASSERT_EQ(-1, Lookup(100));
-  ASSERT_EQ(1U, deleted_keys_.size());
+  ASSERT_EQ(1U, deleted_values_.size());
   ASSERT_EQ(1U, cache_->GetUsage());
 
   cache_->Release(h2);
-  ASSERT_EQ(2U, deleted_keys_.size());
-  ASSERT_EQ(100, deleted_keys_[1]);
+  ASSERT_EQ(2U, deleted_values_.size());
   ASSERT_EQ(102, deleted_values_[1]);
   ASSERT_EQ(0U, cache_->GetUsage());
 }
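Reading these test hunks: with deleters folded into `CacheItemHelper`, every insert names a helper constant instead of a bare deleter function, and deletion tracking keys off values rather than keys. A minimal sketch of a no-op helper and insert under the new API; `kNoopHelper` and `InsertPlaceholder` are illustrative names, and the two-argument helper construction is inferred from the `GetHelper()` hunks later in this diff:

```cpp
#include "rocksdb/cache.h"

using namespace ROCKSDB_NAMESPACE;

// A no-op helper in the spirit of the tests' kDumbHelper. The
// (CacheEntryRole, del_cb) construction shape is taken from the
// GetHelper() hunks later in this diff; kNoopHelper itself is made up.
static const Cache::CacheItemHelper kNoopHelper{
    CacheEntryRole::kMisc,
    [](Cache::ObjectPtr /*obj*/, MemoryAllocator* /*alloc*/) {}};

Status InsertPlaceholder(Cache& cache, const Slice& key, size_t charge) {
  // One Insert path for everything: object pointer, helper, charge.
  // Inserting a null object is legitimate (the secondary cache uses it
  // for dummy entries elsewhere in this diff).
  return cache.Insert(key, /*obj=*/nullptr, &kNoopHelper, charge);
}
```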
@@ -588,9 +581,9 @@ TEST_P(CacheTest, EvictEmptyCache) {
   // Insert item large than capacity to trigger eviction on empty cache.
   auto cache = NewCache(1, 0, false);
   if (type == kLRU) {
-    ASSERT_OK(cache->Insert("foo", nullptr, 10, DumbDeleter));
+    ASSERT_OK(cache->Insert("foo", nullptr, &kDumbHelper, 10));
   } else {
-    ASSERT_OK(cache->Insert(EncodeKey(1000), nullptr, 10, DumbDeleter));
+    ASSERT_OK(cache->Insert(EncodeKey(1000), nullptr, &kDumbHelper, 10));
   }
 }
 
@@ -601,19 +594,19 @@ TEST_P(CacheTest, EraseFromDeleter) {
   // the cache at that point.
   std::shared_ptr<Cache> cache = NewCache(10, 0, false);
   std::string foo, bar;
-  Cache::DeleterFn erase_deleter;
+  const Cache::CacheItemHelper* erase_helper;
   if (type == kLRU) {
     foo = "foo";
     bar = "bar";
-    erase_deleter = EraseDeleter1;
+    erase_helper = &kEraseOnDeleteHelper1;
   } else {
     foo = EncodeKey(1234);
     bar = EncodeKey(5678);
-    erase_deleter = EraseDeleter2;
+    erase_helper = &kEraseOnDeleteHelper2;
   }
 
-  ASSERT_OK(cache->Insert(foo, nullptr, 1, DumbDeleter));
-  ASSERT_OK(cache->Insert(bar, cache.get(), 1, erase_deleter));
+  ASSERT_OK(cache->Insert(foo, nullptr, &kDumbHelper, 1));
+  ASSERT_OK(cache->Insert(bar, cache.get(), erase_helper, 1));
 
   cache->Erase(bar);
   ASSERT_EQ(nullptr, cache->Lookup(foo));
@@ -675,50 +668,51 @@ TEST_P(CacheTest, NewId) {
   ASSERT_NE(a, b);
 }
 
-class Value {
- public:
-  explicit Value(int v) : v_(v) {}
-
-  int v_;
-};
-
-namespace {
-void deleter(const Slice& /*key*/, void* value) {
-  delete static_cast<Value*>(value);
-}
-}  // namespace
-
 TEST_P(CacheTest, ReleaseAndErase) {
   std::shared_ptr<Cache> cache = NewCache(5, 0, false);
   Cache::Handle* handle;
-  Status s = cache->Insert(EncodeKey(100), EncodeValue(100), 1,
-                           &CacheTest::Deleter, &handle);
+  Status s =
+      cache->Insert(EncodeKey(100), EncodeValue(100), &kHelper, 1, &handle);
   ASSERT_TRUE(s.ok());
   ASSERT_EQ(5U, cache->GetCapacity());
   ASSERT_EQ(1U, cache->GetUsage());
-  ASSERT_EQ(0U, deleted_keys_.size());
+  ASSERT_EQ(0U, deleted_values_.size());
   auto erased = cache->Release(handle, true);
   ASSERT_TRUE(erased);
   // This tests that deleter has been called
-  ASSERT_EQ(1U, deleted_keys_.size());
+  ASSERT_EQ(1U, deleted_values_.size());
 }
 
 TEST_P(CacheTest, ReleaseWithoutErase) {
   std::shared_ptr<Cache> cache = NewCache(5, 0, false);
   Cache::Handle* handle;
-  Status s = cache->Insert(EncodeKey(100), EncodeValue(100), 1,
-                           &CacheTest::Deleter, &handle);
+  Status s =
+      cache->Insert(EncodeKey(100), EncodeValue(100), &kHelper, 1, &handle);
   ASSERT_TRUE(s.ok());
   ASSERT_EQ(5U, cache->GetCapacity());
   ASSERT_EQ(1U, cache->GetUsage());
-  ASSERT_EQ(0U, deleted_keys_.size());
+  ASSERT_EQ(0U, deleted_values_.size());
   auto erased = cache->Release(handle);
   ASSERT_FALSE(erased);
   // This tests that deleter is not called. When cache has free capacity it is
   // not expected to immediately erase the released items.
-  ASSERT_EQ(0U, deleted_keys_.size());
+  ASSERT_EQ(0U, deleted_values_.size());
 }
 
+namespace {
+class Value {
+ public:
+  explicit Value(int v) : v_(v) {}
+
+  int v_;
+
+  static constexpr auto kCacheEntryRole = CacheEntryRole::kMisc;
+};
+
+using SharedCache = BasicTypedSharedCacheInterface<Value>;
+using TypedHandle = SharedCache::TypedHandle;
+}  // namespace
+
 TEST_P(CacheTest, SetCapacity) {
   auto type = GetParam();
   if (type == kHyperClock) {
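The added block above introduces the `typed_cache.h` pattern the rewritten tests use: the value type advertises its `CacheEntryRole`, and the typed wrapper synthesizes the `CacheItemHelper` so call sites never mention deleters. A hedged sketch; the `Widget` type and the include path are assumptions, while `BasicTypedSharedCacheInterface`, `TypedHandle`, and the `Insert` shape are taken directly from this diff:

```cpp
#include <memory>

#include "cache/typed_cache.h"  // assumed location of the new wrappers
#include "rocksdb/cache.h"

using namespace ROCKSDB_NAMESPACE;

// `Widget` is hypothetical; the kCacheEntryRole member is what lets the
// typed wrapper build the CacheItemHelper on the caller's behalf.
struct Widget {
  int payload = 0;
  static constexpr auto kCacheEntryRole = CacheEntryRole::kMisc;
};

using WidgetCache = BasicTypedSharedCacheInterface<Widget>;

void Demo(const std::shared_ptr<Cache>& base, const Slice& key) {
  WidgetCache cache{base};
  WidgetCache::TypedHandle* h = nullptr;
  // Typed Insert: no deleter argument; the wrapper supplies the helper.
  if (cache.Insert(key, new Widget{42}, /*charge=*/1, &h).ok() && h != nullptr) {
    cache.Release(h);
  }
}
```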
@@ -731,19 +725,19 @@ TEST_P(CacheTest, SetCapacity) {
   // lets create a cache with capacity 5,
   // then, insert 5 elements, then increase capacity
   // to 10, returned capacity should be 10, usage=5
-  std::shared_ptr<Cache> cache = NewCache(5, 0, false);
-  std::vector<Cache::Handle*> handles(10);
+  SharedCache cache{NewCache(5, 0, false)};
+  std::vector<TypedHandle*> handles(10);
   // Insert 5 entries, but not releasing.
   for (int i = 0; i < 5; i++) {
     std::string key = EncodeKey(i + 1);
-    Status s = cache->Insert(key, new Value(i + 1), 1, &deleter, &handles[i]);
+    Status s = cache.Insert(key, new Value(i + 1), 1, &handles[i]);
     ASSERT_TRUE(s.ok());
   }
-  ASSERT_EQ(5U, cache->GetCapacity());
-  ASSERT_EQ(5U, cache->GetUsage());
-  cache->SetCapacity(10);
-  ASSERT_EQ(10U, cache->GetCapacity());
-  ASSERT_EQ(5U, cache->GetUsage());
+  ASSERT_EQ(5U, cache.get()->GetCapacity());
+  ASSERT_EQ(5U, cache.get()->GetUsage());
+  cache.get()->SetCapacity(10);
+  ASSERT_EQ(10U, cache.get()->GetCapacity());
+  ASSERT_EQ(5U, cache.get()->GetUsage());
 
   // test2: decrease capacity
   // insert 5 more elements to cache, then release 5,
@@ -751,77 +745,77 @@ TEST_P(CacheTest, SetCapacity) {
   // and usage should be 7
   for (int i = 5; i < 10; i++) {
     std::string key = EncodeKey(i + 1);
-    Status s = cache->Insert(key, new Value(i + 1), 1, &deleter, &handles[i]);
+    Status s = cache.Insert(key, new Value(i + 1), 1, &handles[i]);
     ASSERT_TRUE(s.ok());
   }
-  ASSERT_EQ(10U, cache->GetCapacity());
-  ASSERT_EQ(10U, cache->GetUsage());
+  ASSERT_EQ(10U, cache.get()->GetCapacity());
+  ASSERT_EQ(10U, cache.get()->GetUsage());
   for (int i = 0; i < 5; i++) {
-    cache->Release(handles[i]);
+    cache.Release(handles[i]);
   }
-  ASSERT_EQ(10U, cache->GetCapacity());
-  ASSERT_EQ(10U, cache->GetUsage());
-  cache->SetCapacity(7);
-  ASSERT_EQ(7, cache->GetCapacity());
-  ASSERT_EQ(7, cache->GetUsage());
+  ASSERT_EQ(10U, cache.get()->GetCapacity());
+  ASSERT_EQ(10U, cache.get()->GetUsage());
+  cache.get()->SetCapacity(7);
+  ASSERT_EQ(7, cache.get()->GetCapacity());
+  ASSERT_EQ(7, cache.get()->GetUsage());
 
   // release remaining 5 to keep valgrind happy
   for (int i = 5; i < 10; i++) {
-    cache->Release(handles[i]);
+    cache.Release(handles[i]);
   }
 
   // Make sure this doesn't crash or upset ASAN/valgrind
-  cache->DisownData();
+  cache.get()->DisownData();
 }
 
 TEST_P(LRUCacheTest, SetStrictCapacityLimit) {
   // test1: set the flag to false. Insert more keys than capacity. See if they
   // all go through.
-  std::shared_ptr<Cache> cache = NewCache(5, 0, false);
-  std::vector<Cache::Handle*> handles(10);
+  SharedCache cache{NewCache(5, 0, false)};
+  std::vector<TypedHandle*> handles(10);
   Status s;
   for (int i = 0; i < 10; i++) {
     std::string key = EncodeKey(i + 1);
-    s = cache->Insert(key, new Value(i + 1), 1, &deleter, &handles[i]);
+    s = cache.Insert(key, new Value(i + 1), 1, &handles[i]);
     ASSERT_OK(s);
     ASSERT_NE(nullptr, handles[i]);
   }
-  ASSERT_EQ(10, cache->GetUsage());
+  ASSERT_EQ(10, cache.get()->GetUsage());
 
   // test2: set the flag to true. Insert and check if it fails.
   std::string extra_key = EncodeKey(100);
   Value* extra_value = new Value(0);
-  cache->SetStrictCapacityLimit(true);
-  Cache::Handle* handle;
-  s = cache->Insert(extra_key, extra_value, 1, &deleter, &handle);
+  cache.get()->SetStrictCapacityLimit(true);
+  TypedHandle* handle;
+  s = cache.Insert(extra_key, extra_value, 1, &handle);
   ASSERT_TRUE(s.IsMemoryLimit());
   ASSERT_EQ(nullptr, handle);
-  ASSERT_EQ(10, cache->GetUsage());
+  ASSERT_EQ(10, cache.get()->GetUsage());
 
   for (int i = 0; i < 10; i++) {
-    cache->Release(handles[i]);
+    cache.Release(handles[i]);
   }
 
   // test3: init with flag being true.
-  std::shared_ptr<Cache> cache2 = NewCache(5, 0, true);
+  SharedCache cache2{NewCache(5, 0, true)};
   for (int i = 0; i < 5; i++) {
     std::string key = EncodeKey(i + 1);
-    s = cache2->Insert(key, new Value(i + 1), 1, &deleter, &handles[i]);
+    s = cache2.Insert(key, new Value(i + 1), 1, &handles[i]);
     ASSERT_OK(s);
     ASSERT_NE(nullptr, handles[i]);
   }
-  s = cache2->Insert(extra_key, extra_value, 1, &deleter, &handle);
+  s = cache2.Insert(extra_key, extra_value, 1, &handle);
   ASSERT_TRUE(s.IsMemoryLimit());
   ASSERT_EQ(nullptr, handle);
   // test insert without handle
-  s = cache2->Insert(extra_key, extra_value, 1, &deleter);
+  s = cache2.Insert(extra_key, extra_value, 1);
   // AS if the key have been inserted into cache but get evicted immediately.
   ASSERT_OK(s);
-  ASSERT_EQ(5, cache2->GetUsage());
-  ASSERT_EQ(nullptr, cache2->Lookup(extra_key));
+  ASSERT_EQ(5, cache2.get()->GetUsage());
+  ASSERT_EQ(nullptr, cache2.Lookup(extra_key));
 
   for (int i = 0; i < 5; i++) {
-    cache2->Release(handles[i]);
+    cache2.Release(handles[i]);
   }
 }
 
@@ -829,55 +823,54 @@ TEST_P(CacheTest, OverCapacity) {
   size_t n = 10;
 
   // a LRUCache with n entries and one shard only
-  std::shared_ptr<Cache> cache = NewCache(n, 0, false);
-
-  std::vector<Cache::Handle*> handles(n + 1);
+  SharedCache cache{NewCache(n, 0, false)};
+  std::vector<TypedHandle*> handles(n + 1);
 
   // Insert n+1 entries, but not releasing.
   for (int i = 0; i < static_cast<int>(n + 1); i++) {
     std::string key = EncodeKey(i + 1);
-    Status s = cache->Insert(key, new Value(i + 1), 1, &deleter, &handles[i]);
+    Status s = cache.Insert(key, new Value(i + 1), 1, &handles[i]);
     ASSERT_TRUE(s.ok());
   }
 
   // Guess what's in the cache now?
   for (int i = 0; i < static_cast<int>(n + 1); i++) {
     std::string key = EncodeKey(i + 1);
-    auto h = cache->Lookup(key);
+    auto h = cache.Lookup(key);
     ASSERT_TRUE(h != nullptr);
-    if (h) cache->Release(h);
+    if (h) cache.Release(h);
   }
 
   // the cache is over capacity since nothing could be evicted
-  ASSERT_EQ(n + 1U, cache->GetUsage());
+  ASSERT_EQ(n + 1U, cache.get()->GetUsage());
   for (int i = 0; i < static_cast<int>(n + 1); i++) {
-    cache->Release(handles[i]);
+    cache.Release(handles[i]);
   }
 
   if (GetParam() == kHyperClock) {
     // Make sure eviction is triggered.
-    ASSERT_OK(cache->Insert(EncodeKey(-1), nullptr, 1, &deleter, &handles[0]));
+    ASSERT_OK(cache.Insert(EncodeKey(-1), nullptr, 1, &handles[0]));
 
     // cache is under capacity now since elements were released
-    ASSERT_GE(n, cache->GetUsage());
+    ASSERT_GE(n, cache.get()->GetUsage());
 
     // clean up
-    cache->Release(handles[0]);
+    cache.Release(handles[0]);
   } else {
     // LRUCache checks for over-capacity in Release.
 
     // cache is exactly at capacity now with minimal eviction
-    ASSERT_EQ(n, cache->GetUsage());
+    ASSERT_EQ(n, cache.get()->GetUsage());
 
     // element 0 is evicted and the rest is there
     // This is consistent with the LRU policy since the element 0
     // was released first
     for (int i = 0; i < static_cast<int>(n + 1); i++) {
       std::string key = EncodeKey(i + 1);
-      auto h = cache->Lookup(key);
+      auto h = cache.Lookup(key);
       if (h) {
         ASSERT_NE(static_cast<size_t>(i), 0U);
-        cache->Release(h);
+        cache.Release(h);
       } else {
         ASSERT_EQ(static_cast<size_t>(i), 0U);
       }
@@ -885,40 +878,15 @@ TEST_P(CacheTest, OverCapacity) {
   }
 }
 
-namespace {
-std::vector<std::pair<int, int>> legacy_callback_state;
-void legacy_callback(void* value, size_t charge) {
-  legacy_callback_state.push_back(
-      {DecodeValue(value), static_cast<int>(charge)});
-}
-};  // namespace
-
-TEST_P(CacheTest, ApplyToAllCacheEntriesTest) {
-  std::vector<std::pair<int, int>> inserted;
-  legacy_callback_state.clear();
-
-  for (int i = 0; i < 10; ++i) {
-    Insert(i, i * 2, i + 1);
-    inserted.push_back({i * 2, i + 1});
-  }
-  cache_->ApplyToAllCacheEntries(legacy_callback, true);
-
-  std::sort(inserted.begin(), inserted.end());
-  std::sort(legacy_callback_state.begin(), legacy_callback_state.end());
-  ASSERT_EQ(inserted.size(), legacy_callback_state.size());
-  for (int i = 0; i < static_cast<int>(inserted.size()); ++i) {
-    EXPECT_EQ(inserted[i], legacy_callback_state[i]);
-  }
-}
-
 TEST_P(CacheTest, ApplyToAllEntriesTest) {
   std::vector<std::string> callback_state;
-  const auto callback = [&](const Slice& key, void* value, size_t charge,
-                            Cache::DeleterFn deleter) {
+  const auto callback = [&](const Slice& key, Cache::ObjectPtr value,
                            size_t charge,
+                            const Cache::CacheItemHelper* helper) {
     callback_state.push_back(std::to_string(DecodeKey(key)) + "," +
                              std::to_string(DecodeValue(value)) + "," +
                              std::to_string(charge));
-    assert(deleter == &CacheTest::Deleter);
+    assert(helper == &CacheTest::kHelper);
   };
 
   std::vector<std::string> inserted;
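As the rewritten `ApplyToAllEntriesTest` shows, the traversal callback now receives the entry's `CacheItemHelper*` in place of a `DeleterFn`, so a scan can classify entries without a deleter registry. A sketch under the assumption that `CacheItemHelper` exposes its `CacheEntryRole` as a public `role` member (consistent with HEAD's description of the helper carrying a role):

```cpp
#include "rocksdb/cache.h"

using namespace ROCKSDB_NAMESPACE;

// Count entries of one role during a full scan. The helper pointer
// identifies the entry kind; `helper->role` is assumed as noted above.
size_t CountMiscEntries(Cache& cache) {
  size_t count = 0;
  cache.ApplyToAllEntries(
      [&count](const Slice& /*key*/, Cache::ObjectPtr /*value*/,
               size_t /*charge*/, const Cache::CacheItemHelper* helper) {
        if (helper != nullptr && helper->role == CacheEntryRole::kMisc) {
          ++count;
        }
      },
      Cache::ApplyToAllEntriesOptions{});
  return count;
}
```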
@@ -957,8 +925,8 @@ TEST_P(CacheTest, ApplyToAllEntriesDuringResize) {
 
   // For callback
   int special_count = 0;
-  const auto callback = [&](const Slice&, void*, size_t charge,
-                            Cache::DeleterFn) {
+  const auto callback = [&](const Slice&, Cache::ObjectPtr, size_t charge,
+                            const Cache::CacheItemHelper*) {
     if (charge == static_cast<size_t>(kSpecialCharge)) {
       ++special_count;
     }
@@ -1020,7 +988,7 @@ TEST_P(CacheTest, GetChargeAndDeleter) {
   Cache::Handle* h1 = cache_->Lookup(EncodeKey(1));
   ASSERT_EQ(2, DecodeValue(cache_->Value(h1)));
   ASSERT_EQ(1, cache_->GetCharge(h1));
-  ASSERT_EQ(&CacheTest::Deleter, cache_->GetDeleter(h1));
+  ASSERT_EQ(&CacheTest::kHelper, cache_->GetCacheItemHelper(h1));
   cache_->Release(h1);
 }
 
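`GetDeleter` becoming `GetCacheItemHelper` means per-handle classification is a pointer read rather than a lookup in a table of known deleters. A small sketch, with the same `role`-member assumption as above:

```cpp
#include "rocksdb/cache.h"

using namespace ROCKSDB_NAMESPACE;

// Classify an entry from its handle. Null helper is treated as kMisc
// here; that fallback is a choice of this sketch, not of the library.
CacheEntryRole RoleOfEntry(const Cache& cache, Cache::Handle* handle) {
  const Cache::CacheItemHelper* helper = cache.GetCacheItemHelper(handle);
  return helper != nullptr ? helper->role : CacheEntryRole::kMisc;
}
```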
--- a/cache/charged_cache.cc
+++ b/cache/charged_cache.cc
@@ -17,25 +17,10 @@ ChargedCache::ChargedCache(std::shared_ptr<Cache> cache,
           CacheReservationManagerImpl<CacheEntryRole::kBlobCache>>(
           block_cache))) {}
 
-Status ChargedCache::Insert(const Slice& key, void* value, size_t charge,
-                            DeleterFn deleter, Handle** handle,
-                            Priority priority) {
-  Status s = cache_->Insert(key, value, charge, deleter, handle, priority);
-  if (s.ok()) {
-    // Insert may cause the cache entry eviction if the cache is full. So we
-    // directly call the reservation manager to update the total memory used
-    // in the cache.
-    assert(cache_res_mgr_);
-    cache_res_mgr_->UpdateCacheReservation(cache_->GetUsage())
-        .PermitUncheckedError();
-  }
-  return s;
-}
-
-Status ChargedCache::Insert(const Slice& key, void* value,
+Status ChargedCache::Insert(const Slice& key, ObjectPtr obj,
                             const CacheItemHelper* helper, size_t charge,
                             Handle** handle, Priority priority) {
-  Status s = cache_->Insert(key, value, helper, charge, handle, priority);
+  Status s = cache_->Insert(key, obj, helper, charge, handle, priority);
   if (s.ok()) {
     // Insert may cause the cache entry eviction if the cache is full. So we
     // directly call the reservation manager to update the total memory used
@@ -47,22 +32,21 @@ Status ChargedCache::Insert(const Slice& key, void* value,
   return s;
 }
 
-Cache::Handle* ChargedCache::Lookup(const Slice& key, Statistics* stats) {
-  return cache_->Lookup(key, stats);
-}
-
 Cache::Handle* ChargedCache::Lookup(const Slice& key,
                                     const CacheItemHelper* helper,
-                                    const CreateCallback& create_cb,
+                                    CreateContext* create_context,
                                     Priority priority, bool wait,
                                     Statistics* stats) {
-  auto handle = cache_->Lookup(key, helper, create_cb, priority, wait, stats);
+  auto handle =
+      cache_->Lookup(key, helper, create_context, priority, wait, stats);
   // Lookup may promote the KV pair from the secondary cache to the primary
   // cache. So we directly call the reservation manager to update the total
   // memory used in the cache.
-  assert(cache_res_mgr_);
-  cache_res_mgr_->UpdateCacheReservation(cache_->GetUsage())
-      .PermitUncheckedError();
+  if (helper && helper->create_cb) {
+    assert(cache_res_mgr_);
+    cache_res_mgr_->UpdateCacheReservation(cache_->GetUsage())
+        .PermitUncheckedError();
+  }
   return handle;
 }
 
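These hunks replace the `std::function`-based `CreateCallback` with a caller-owned `CreateContext*`, which is what removes the per-lookup heap allocation HEAD credits for the performance win. A sketch of the calling pattern; `BlockCreateContext` and its field are invented for illustration, while the `Lookup` signature matches the one declared in this diff:

```cpp
#include "rocksdb/cache.h"

using namespace ROCKSDB_NAMESPACE;

// Lookup state lives in a caller-owned object instead of a heap-backed
// closure. Note the ownership rule from HEAD: the context must outlive
// any secondary-cache operation it is passed to.
struct BlockCreateContext : public Cache::CreateContext {
  bool using_zstd = false;  // example of parsing state a reader might own
};

Cache::Handle* LookupWithContext(Cache& cache, const Slice& key,
                                 const Cache::CacheItemHelper* helper,
                                 BlockCreateContext* ctx) {
  return cache.Lookup(key, helper, ctx, Cache::Priority::LOW,
                      /*wait=*/true, /*stats=*/nullptr);
}
```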
--- a/cache/charged_cache.h
+++ b/cache/charged_cache.h
@@ -23,16 +23,14 @@ class ChargedCache : public Cache {
                std::shared_ptr<Cache> block_cache);
   ~ChargedCache() override = default;
 
-  Status Insert(const Slice& key, void* value, size_t charge, DeleterFn deleter,
-                Handle** handle, Priority priority) override;
-  Status Insert(const Slice& key, void* value, const CacheItemHelper* helper,
+  Status Insert(const Slice& key, ObjectPtr obj, const CacheItemHelper* helper,
                 size_t charge, Handle** handle = nullptr,
                 Priority priority = Priority::LOW) override;
 
-  Cache::Handle* Lookup(const Slice& key, Statistics* stats) override;
   Cache::Handle* Lookup(const Slice& key, const CacheItemHelper* helper,
-                        const CreateCallback& create_cb, Priority priority,
-                        bool wait, Statistics* stats = nullptr) override;
+                        CreateContext* create_context,
+                        Priority priority = Priority::LOW, bool wait = true,
+                        Statistics* stats = nullptr) override;
 
   bool Release(Cache::Handle* handle, bool useful,
                bool erase_if_last_ref = false) override;
@@ -56,7 +54,9 @@ class ChargedCache : public Cache {
     return cache_->HasStrictCapacityLimit();
   }
 
-  void* Value(Cache::Handle* handle) override { return cache_->Value(handle); }
+  ObjectPtr Value(Cache::Handle* handle) override {
+    return cache_->Value(handle);
+  }
 
   bool IsReady(Cache::Handle* handle) override {
     return cache_->IsReady(handle);
@@ -84,22 +84,17 @@ class ChargedCache : public Cache {
     return cache_->GetCharge(handle);
   }
 
-  Cache::DeleterFn GetDeleter(Cache::Handle* handle) const override {
-    return cache_->GetDeleter(handle);
+  const CacheItemHelper* GetCacheItemHelper(Handle* handle) const override {
+    return cache_->GetCacheItemHelper(handle);
   }
 
   void ApplyToAllEntries(
-      const std::function<void(const Slice& key, void* value, size_t charge,
-                               Cache::DeleterFn deleter)>& callback,
+      const std::function<void(const Slice& key, ObjectPtr value, size_t charge,
+                               const CacheItemHelper* helper)>& callback,
      const Cache::ApplyToAllEntriesOptions& opts) override {
     cache_->ApplyToAllEntries(callback, opts);
   }
 
-  void ApplyToAllCacheEntries(void (*callback)(void* value, size_t charge),
-                              bool thread_safe) override {
-    cache_->ApplyToAllCacheEntries(callback, thread_safe);
-  }
-
   std::string GetPrintableOptions() const override {
     return cache_->GetPrintableOptions();
   }
--- a/cache/clock_cache.cc
+++ b/cache/clock_cache.cc
@@ -50,12 +50,12 @@ inline uint64_t GetInitialCountdown(Cache::Priority priority) {
   }
 }
 
-inline void FreeDataMarkEmpty(ClockHandle& h) {
+inline void FreeDataMarkEmpty(ClockHandle& h, MemoryAllocator* allocator) {
   // NOTE: in theory there's more room for parallelism if we copy the handle
   // data and delay actions like this until after marking the entry as empty,
   // but performance tests only show a regression by copying the few words
   // of data.
-  h.FreeData();
+  h.FreeData(allocator);
 
 #ifndef NDEBUG
   // Mark slot as empty, with assertion
@@ -115,24 +115,23 @@ inline bool ClockUpdate(ClockHandle& h) {
 
 }  // namespace
 
-void ClockHandleBasicData::FreeData() const {
-  if (deleter) {
-    UniqueId64x2 unhashed;
-    (*deleter)(
-        ClockCacheShard<HyperClockTable>::ReverseHash(hashed_key, &unhashed),
-        value);
+void ClockHandleBasicData::FreeData(MemoryAllocator* allocator) const {
+  if (helper->del_cb) {
+    helper->del_cb(value, allocator);
   }
 }
 
 HyperClockTable::HyperClockTable(
     size_t capacity, bool /*strict_capacity_limit*/,
-    CacheMetadataChargePolicy metadata_charge_policy, const Opts& opts)
+    CacheMetadataChargePolicy metadata_charge_policy,
+    MemoryAllocator* allocator, const Opts& opts)
     : length_bits_(CalcHashBits(capacity, opts.estimated_value_size,
                                 metadata_charge_policy)),
       length_bits_mask_((size_t{1} << length_bits_) - 1),
       occupancy_limit_(static_cast<size_t>((uint64_t{1} << length_bits_) *
                                            kStrictLoadFactor)),
-      array_(new HandleImpl[size_t{1} << length_bits_]) {
+      array_(new HandleImpl[size_t{1} << length_bits_]),
+      allocator_(allocator) {
   if (metadata_charge_policy ==
       CacheMetadataChargePolicy::kFullChargeCacheMetadata) {
     usage_ += size_t{GetTableSize()} * sizeof(HandleImpl);
@@ -154,7 +153,7 @@ HyperClockTable::~HyperClockTable() {
       case ClockHandle::kStateInvisible:  // rare but possible
      case ClockHandle::kStateVisible:
         assert(GetRefcount(h.meta) == 0);
-        h.FreeData();
+        h.FreeData(allocator_);
 #ifndef NDEBUG
         Rollback(h.hashed_key, &h);
         ReclaimEntryUsage(h.GetTotalCharge());
@@ -415,7 +414,7 @@ Status HyperClockTable::Insert(const ClockHandleBasicData& proto,
     if (handle == nullptr) {
       // Don't insert the entry but still return ok, as if the entry
       // inserted into cache and evicted immediately.
-      proto.FreeData();
+      proto.FreeData(allocator_);
       return Status::OK();
     } else {
       // Need to track usage of fallback detached insert
@@ -556,7 +555,7 @@ Status HyperClockTable::Insert(const ClockHandleBasicData& proto,
   if (handle == nullptr) {
     revert_usage_fn();
     // As if unrefed entry immdiately evicted
-    proto.FreeData();
+    proto.FreeData(allocator_);
     return Status::OK();
   }
 }
@@ -698,14 +697,14 @@ bool HyperClockTable::Release(HandleImpl* h, bool useful,
     // Took ownership
     size_t total_charge = h->GetTotalCharge();
     if (UNLIKELY(h->IsDetached())) {
-      h->FreeData();
+      h->FreeData(allocator_);
       // Delete detached handle
       delete h;
       detached_usage_.fetch_sub(total_charge, std::memory_order_relaxed);
       usage_.fetch_sub(total_charge, std::memory_order_relaxed);
     } else {
       Rollback(h->hashed_key, h);
-      FreeDataMarkEmpty(*h);
+      FreeDataMarkEmpty(*h, allocator_);
       ReclaimEntryUsage(total_charge);
     }
     return true;
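The deleter callback's new shape, `(ObjectPtr, MemoryAllocator*)`, lets a value allocated from the cache's allocator be freed through it, while plain heap values simply ignore the argument. A sketch with a made-up `OwnedBlob` value type; the helper construction mirrors the `GetHelper()` hunks below:

```cpp
#include <string>

#include "rocksdb/cache.h"

using namespace ROCKSDB_NAMESPACE;

// Hypothetical entry type owning its bytes via plain new/delete.
struct OwnedBlob {
  std::string data;
};

// del_cb receives the allocator but this value type has no use for it.
static const Cache::CacheItemHelper kBlobHelper{
    CacheEntryRole::kMisc,
    [](Cache::ObjectPtr obj, MemoryAllocator* /*allocator*/) {
      delete static_cast<OwnedBlob*>(obj);
    }};
```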
@@ -790,7 +789,7 @@ void HyperClockTable::Erase(const UniqueId64x2& hashed_key) {
           // Took ownership
           assert(hashed_key == h->hashed_key);
           size_t total_charge = h->GetTotalCharge();
-          FreeDataMarkEmpty(*h);
+          FreeDataMarkEmpty(*h, allocator_);
           ReclaimEntryUsage(total_charge);
           // We already have a copy of hashed_key in this case, so OK to
           // delay Rollback until after releasing the entry
@@ -878,7 +877,7 @@ void HyperClockTable::EraseUnRefEntries() {
       // Took ownership
       size_t total_charge = h.GetTotalCharge();
       Rollback(h.hashed_key, &h);
-      FreeDataMarkEmpty(h);
+      FreeDataMarkEmpty(h, allocator_);
       ReclaimEntryUsage(total_charge);
     }
   }
@@ -968,7 +967,7 @@ inline void HyperClockTable::Evict(size_t requested_charge,
         Rollback(h.hashed_key, &h);
         *freed_charge += h.GetTotalCharge();
         *freed_count += 1;
-        FreeDataMarkEmpty(h);
+        FreeDataMarkEmpty(h, allocator_);
       }
     }
 
@@ -990,9 +989,10 @@ template <class Table>
 ClockCacheShard<Table>::ClockCacheShard(
     size_t capacity, bool strict_capacity_limit,
     CacheMetadataChargePolicy metadata_charge_policy,
-    const typename Table::Opts& opts)
+    MemoryAllocator* allocator, const typename Table::Opts& opts)
     : CacheShardBase(metadata_charge_policy),
-      table_(capacity, strict_capacity_limit, metadata_charge_policy, opts),
+      table_(capacity, strict_capacity_limit, metadata_charge_policy, allocator,
+             opts),
       capacity_(capacity),
       strict_capacity_limit_(strict_capacity_limit) {
   // Initial charge metadata should not exceed capacity
@@ -1006,8 +1006,9 @@ void ClockCacheShard<Table>::EraseUnRefEntries() {
 
 template <class Table>
 void ClockCacheShard<Table>::ApplyToSomeEntries(
-    const std::function<void(const Slice& key, void* value, size_t charge,
-                             DeleterFn deleter)>& callback,
+    const std::function<void(const Slice& key, Cache::ObjectPtr value,
+                             size_t charge,
+                             const Cache::CacheItemHelper* helper)>& callback,
     size_t average_entries_per_lock, size_t* state) {
   // The state is essentially going to be the starting hash, which works
   // nicely even if we resize between calls because we use upper-most
@@ -1034,7 +1035,7 @@ void ClockCacheShard<Table>::ApplyToSomeEntries(
       [callback](const HandleImpl& h) {
         UniqueId64x2 unhashed;
         callback(ReverseHash(h.hashed_key, &unhashed), h.value,
-                 h.GetTotalCharge(), h.deleter);
+                 h.GetTotalCharge(), h.helper);
       },
      index_begin, index_end, false);
 }
@@ -1078,9 +1079,9 @@ void ClockCacheShard<Table>::SetStrictCapacityLimit(
 template <class Table>
 Status ClockCacheShard<Table>::Insert(const Slice& key,
                                       const UniqueId64x2& hashed_key,
-                                      void* value, size_t charge,
-                                      Cache::DeleterFn deleter,
-                                      HandleImpl** handle,
+                                      Cache::ObjectPtr value,
+                                      const Cache::CacheItemHelper* helper,
+                                      size_t charge, HandleImpl** handle,
                                       Cache::Priority priority) {
   if (UNLIKELY(key.size() != kCacheKeySize)) {
     return Status::NotSupported("ClockCache only supports key size " +
@@ -1089,7 +1090,7 @@ Status ClockCacheShard<Table>::Insert(const Slice& key,
   ClockHandleBasicData proto;
   proto.hashed_key = hashed_key;
   proto.value = value;
-  proto.deleter = deleter;
+  proto.helper = helper;
   proto.total_charge = charge;
   Status s = table_.Insert(
       proto, handle, priority, capacity_.load(std::memory_order_relaxed),
@@ -1223,15 +1224,16 @@ HyperClockCache::HyperClockCache(
   // TODO: should not need to go through two levels of pointer indirection to
   // get to table entries
   size_t per_shard = GetPerShardCapacity();
+  MemoryAllocator* alloc = this->memory_allocator();
   InitShards([=](Shard* cs) {
     HyperClockTable::Opts opts;
     opts.estimated_value_size = estimated_value_size;
-    new (cs)
-        Shard(per_shard, strict_capacity_limit, metadata_charge_policy, opts);
+    new (cs) Shard(per_shard, strict_capacity_limit, metadata_charge_policy,
+                   alloc, opts);
   });
 }
 
-void* HyperClockCache::Value(Handle* handle) {
+Cache::ObjectPtr HyperClockCache::Value(Handle* handle) {
   return reinterpret_cast<const HandleImpl*>(handle)->value;
 }
 
@@ -1239,9 +1241,10 @@ size_t HyperClockCache::GetCharge(Handle* handle) const {
   return reinterpret_cast<const HandleImpl*>(handle)->GetTotalCharge();
 }
 
-Cache::DeleterFn HyperClockCache::GetDeleter(Handle* handle) const {
+const Cache::CacheItemHelper* HyperClockCache::GetCacheItemHelper(
+    Handle* handle) const {
   auto h = reinterpret_cast<const HandleImpl*>(handle);
-  return h->deleter;
+  return h->helper;
 }
 
 namespace {
--- a/cache/clock_cache.h
+++ b/cache/clock_cache.h
@@ -305,8 +305,8 @@ constexpr double kLoadFactor = 0.7;
 constexpr double kStrictLoadFactor = 0.84;
 
 struct ClockHandleBasicData {
-  void* value = nullptr;
-  Cache::DeleterFn deleter = nullptr;
+  Cache::ObjectPtr value = nullptr;
+  const Cache::CacheItemHelper* helper = nullptr;
   // A lossless, reversible hash of the fixed-size (16 byte) cache key. This
   // eliminates the need to store a hash separately.
   UniqueId64x2 hashed_key = kNullUniqueId64x2;
@@ -321,7 +321,7 @@ struct ClockHandleBasicData {
   inline size_t GetTotalCharge() const { return total_charge; }
 
   // Calls deleter (if non-null) on cache key and value
-  void FreeData() const;
+  void FreeData(MemoryAllocator* allocator) const;
 
   // Required by concept HandleImpl
   const UniqueId64x2& GetHash() const { return hashed_key; }
@@ -411,7 +411,7 @@ class HyperClockTable {
 
   HyperClockTable(size_t capacity, bool strict_capacity_limit,
                   CacheMetadataChargePolicy metadata_charge_policy,
-                  const Opts& opts);
+                  MemoryAllocator* allocator, const Opts& opts);
   ~HyperClockTable();
 
   Status Insert(const ClockHandleBasicData& proto, HandleImpl** handle,
@@ -519,6 +519,8 @@ class HyperClockTable {
   // Updates `detached_usage_` but not `usage_` nor `occupancy_`.
   inline HandleImpl* DetachedInsert(const ClockHandleBasicData& proto);
 
+  MemoryAllocator* GetAllocator() const { return allocator_; }
+
   // Returns the number of bits used to hash an element in the hash
   // table.
   static int CalcHashBits(size_t capacity, size_t estimated_value_size,
@@ -538,6 +540,9 @@ class HyperClockTable {
   // Array of slots comprising the hash table.
   const std::unique_ptr<HandleImpl[]> array_;
 
+  // From Cache, for deleter
+  MemoryAllocator* const allocator_;
+
   // We partition the following members into different cache lines
   // to avoid false sharing among Lookup, Release, Erase and Insert
   // operations in ClockCacheShard.
@@ -563,7 +568,7 @@ class ALIGN_AS(CACHE_LINE_SIZE) ClockCacheShard final : public CacheShardBase {
  public:
   ClockCacheShard(size_t capacity, bool strict_capacity_limit,
                   CacheMetadataChargePolicy metadata_charge_policy,
-                  const typename Table::Opts& opts);
+                  MemoryAllocator* allocator, const typename Table::Opts& opts);
 
   // For CacheShard concept
   using HandleImpl = typename Table::HandleImpl;
@@ -600,9 +605,9 @@ class ALIGN_AS(CACHE_LINE_SIZE) ClockCacheShard final : public CacheShardBase {
 
   void SetStrictCapacityLimit(bool strict_capacity_limit);
 
-  Status Insert(const Slice& key, const UniqueId64x2& hashed_key, void* value,
-                size_t charge, Cache::DeleterFn deleter, HandleImpl** handle,
-                Cache::Priority priority);
+  Status Insert(const Slice& key, const UniqueId64x2& hashed_key,
+                Cache::ObjectPtr value, const Cache::CacheItemHelper* helper,
+                size_t charge, HandleImpl** handle, Cache::Priority priority);
 
   HandleImpl* Lookup(const Slice& key, const UniqueId64x2& hashed_key);
 
@@ -629,25 +634,18 @@ class ALIGN_AS(CACHE_LINE_SIZE) ClockCacheShard final : public CacheShardBase {
   size_t GetTableAddressCount() const;
 
   void ApplyToSomeEntries(
-      const std::function<void(const Slice& key, void* value, size_t charge,
-                               DeleterFn deleter)>& callback,
+      const std::function<void(const Slice& key, Cache::ObjectPtr obj,
+                               size_t charge,
+                               const Cache::CacheItemHelper* helper)>& callback,
       size_t average_entries_per_lock, size_t* state);
 
   void EraseUnRefEntries();
 
   std::string GetPrintableOptions() const { return std::string{}; }
 
-  // SecondaryCache not yet supported
-  Status Insert(const Slice& key, const UniqueId64x2& hashed_key, void* value,
-                const Cache::CacheItemHelper* helper, size_t charge,
-                HandleImpl** handle, Cache::Priority priority) {
-    return Insert(key, hashed_key, value, charge, helper->del_cb, handle,
-                  priority);
-  }
-
   HandleImpl* Lookup(const Slice& key, const UniqueId64x2& hashed_key,
                      const Cache::CacheItemHelper* /*helper*/,
-                     const Cache::CreateCallback& /*create_cb*/,
+                     Cache::CreateContext* /*create_context*/,
                      Cache::Priority /*priority*/, bool /*wait*/,
                      Statistics* /*stats*/) {
     return Lookup(key, hashed_key);
@@ -686,11 +684,11 @@ class HyperClockCache
 
   const char* Name() const override { return "HyperClockCache"; }
 
-  void* Value(Handle* handle) override;
+  Cache::ObjectPtr Value(Handle* handle) override;
 
   size_t GetCharge(Handle* handle) const override;
 
-  DeleterFn GetDeleter(Handle* handle) const override;
+  const CacheItemHelper* GetCacheItemHelper(Handle* handle) const override;
 
   void ReportProblems(
       const std::shared_ptr<Logger>& /*info_log*/) const override;
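Net effect of the `FreeData` rework in these clock-cache hunks, condensed: with the helper stored on the handle, cleanup is two loads and an indirect call, and no longer reverse-hashes the key just to satisfy the old `(key, value)` deleter signature:

```cpp
#include "rocksdb/cache.h"

using namespace ROCKSDB_NAMESPACE;

// Standalone restatement of ClockHandleBasicData::FreeData from this
// diff; helper is assumed non-null, matching the cache's invariant.
void FreeEntry(const Cache::CacheItemHelper* helper, Cache::ObjectPtr value,
               MemoryAllocator* allocator) {
  if (helper->del_cb != nullptr) {
    helper->del_cb(value, allocator);
  }
}
```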
--- a/cache/compressed_secondary_cache.cc
+++ b/cache/compressed_secondary_cache.cc
@@ -37,8 +37,10 @@ CompressedSecondaryCache::CompressedSecondaryCache(
 CompressedSecondaryCache::~CompressedSecondaryCache() { cache_.reset(); }
 
 std::unique_ptr<SecondaryCacheResultHandle> CompressedSecondaryCache::Lookup(
-    const Slice& key, const Cache::CreateCallback& create_cb, bool /*wait*/,
-    bool advise_erase, bool& is_in_sec_cache) {
+    const Slice& key, const Cache::CacheItemHelper* helper,
+    Cache::CreateContext* create_context, bool /*wait*/, bool advise_erase,
+    bool& is_in_sec_cache) {
+  assert(helper);
   std::unique_ptr<SecondaryCacheResultHandle> handle;
   is_in_sec_cache = false;
   Cache::Handle* lru_handle = cache_->Lookup(key);
@@ -64,12 +66,14 @@ std::unique_ptr<SecondaryCacheResultHandle> CompressedSecondaryCache::Lookup(
     ptr = reinterpret_cast<CacheAllocationPtr*>(handle_value);
     handle_value_charge = cache_->GetCharge(lru_handle);
   }
+  MemoryAllocator* allocator = cache_options_.memory_allocator.get();
 
   Status s;
-  void* value{nullptr};
+  Cache::ObjectPtr value{nullptr};
   size_t charge{0};
   if (cache_options_.compression_type == kNoCompression) {
-    s = create_cb(ptr->get(), handle_value_charge, &value, &charge);
+    s = helper->create_cb(Slice(ptr->get(), handle_value_charge),
+                          create_context, allocator, &value, &charge);
   } else {
     UncompressionContext uncompression_context(cache_options_.compression_type);
     UncompressionInfo uncompression_info(uncompression_context,
@@ -79,14 +83,14 @@ std::unique_ptr<SecondaryCacheResultHandle> CompressedSecondaryCache::Lookup(
     size_t uncompressed_size{0};
     CacheAllocationPtr uncompressed = UncompressData(
         uncompression_info, (char*)ptr->get(), handle_value_charge,
-        &uncompressed_size, cache_options_.compress_format_version,
-        cache_options_.memory_allocator.get());
+        &uncompressed_size, cache_options_.compress_format_version, allocator);
 
     if (!uncompressed) {
       cache_->Release(lru_handle, /*erase_if_last_ref=*/true);
       return nullptr;
     }
-    s = create_cb(uncompressed.get(), uncompressed_size, &value, &charge);
+    s = helper->create_cb(Slice(uncompressed.get(), uncompressed_size),
+                          create_context, allocator, &value, &charge);
   }
 
   if (!s.ok()) {
@@ -98,8 +102,9 @@ std::unique_ptr<SecondaryCacheResultHandle> CompressedSecondaryCache::Lookup(
       cache_->Release(lru_handle, /*erase_if_last_ref=*/true);
       // Insert a dummy handle.
       cache_
-          ->Insert(key, /*value=*/nullptr, /*charge=*/0,
-                   GetDeletionCallback(cache_options_.enable_custom_split_merge))
+          ->Insert(key, /*obj=*/nullptr,
+                   GetHelper(cache_options_.enable_custom_split_merge),
+                   /*charge=*/0)
           .PermitUncheckedError();
     } else {
       is_in_sec_cache = true;
@@ -109,19 +114,20 @@ std::unique_ptr<SecondaryCacheResultHandle> CompressedSecondaryCache::Lookup(
   return handle;
 }
 
-Status CompressedSecondaryCache::Insert(const Slice& key, void* value,
+Status CompressedSecondaryCache::Insert(const Slice& key,
+                                        Cache::ObjectPtr value,
                                         const Cache::CacheItemHelper* helper) {
   if (value == nullptr) {
     return Status::InvalidArgument();
   }
 
   Cache::Handle* lru_handle = cache_->Lookup(key);
-  Cache::DeleterFn del_cb =
-      GetDeletionCallback(cache_options_.enable_custom_split_merge);
+  auto internal_helper = GetHelper(cache_options_.enable_custom_split_merge);
   if (lru_handle == nullptr) {
     PERF_COUNTER_ADD(compressed_sec_cache_insert_dummy_count, 1);
     // Insert a dummy handle if the handle is evicted for the first time.
-    return cache_->Insert(key, /*value=*/nullptr, /*charge=*/0, del_cb);
+    return cache_->Insert(key, /*obj=*/nullptr, internal_helper,
+                          /*charge=*/0);
   } else {
     cache_->Release(lru_handle, /*erase_if_last_ref=*/false);
   }
@@ -169,10 +175,10 @@ Status CompressedSecondaryCache::Insert(const Slice& key, void* value,
     size_t charge{0};
     CacheValueChunk* value_chunks_head =
         SplitValueIntoChunks(val, cache_options_.compression_type, charge);
-    return cache_->Insert(key, value_chunks_head, charge, del_cb);
+    return cache_->Insert(key, value_chunks_head, internal_helper, charge);
   } else {
     CacheAllocationPtr* buf = new CacheAllocationPtr(std::move(ptr));
-    return cache_->Insert(key, buf, size, del_cb);
+    return cache_->Insert(key, buf, internal_helper, size);
   }
 }
 
@@ -276,23 +282,29 @@ CacheAllocationPtr CompressedSecondaryCache::MergeChunksIntoValue(
   return ptr;
 }
 
-Cache::DeleterFn CompressedSecondaryCache::GetDeletionCallback(
-    bool enable_custom_split_merge) {
+const Cache::CacheItemHelper* CompressedSecondaryCache::GetHelper(
+    bool enable_custom_split_merge) const {
   if (enable_custom_split_merge) {
-    return [](const Slice& /*key*/, void* obj) {
-      CacheValueChunk* chunks_head = reinterpret_cast<CacheValueChunk*>(obj);
-      while (chunks_head != nullptr) {
-        CacheValueChunk* tmp_chunk = chunks_head;
-        chunks_head = chunks_head->next;
-        tmp_chunk->Free();
-        obj = nullptr;
-      };
-    };
+    static const Cache::CacheItemHelper kHelper{
+        CacheEntryRole::kMisc,
+        [](Cache::ObjectPtr obj, MemoryAllocator* /*alloc*/) {
+          CacheValueChunk* chunks_head = static_cast<CacheValueChunk*>(obj);
+          while (chunks_head != nullptr) {
+            CacheValueChunk* tmp_chunk = chunks_head;
+            chunks_head = chunks_head->next;
+            tmp_chunk->Free();
+            obj = nullptr;
+          };
+        }};
+    return &kHelper;
  } else {
-    return [](const Slice& /*key*/, void* obj) {
-      delete reinterpret_cast<CacheAllocationPtr*>(obj);
-      obj = nullptr;
-    };
+    static const Cache::CacheItemHelper kHelper{
+        CacheEntryRole::kMisc,
+        [](Cache::ObjectPtr obj, MemoryAllocator* /*alloc*/) {
+          delete static_cast<CacheAllocationPtr*>(obj);
+          obj = nullptr;
+        }};
+    return &kHelper;
   }
 }
 
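Sketch of a `create_cb` compatible with the `helper->create_cb(Slice(...), create_context, allocator, ...)` calls above: parse from a `Slice`, allocate the in-memory object, and report the object and its charge through out-parameters. The `std::string` payload is illustrative:

```cpp
#include <string>

#include "rocksdb/cache.h"

using namespace ROCKSDB_NAMESPACE;

// Signature taken from this diff's call sites; body is a placeholder
// that copies the serialized bytes into a freshly allocated object.
static Status CreateFromSlice(const Slice& data,
                              Cache::CreateContext* /*ctx*/,
                              MemoryAllocator* /*allocator*/,
                              Cache::ObjectPtr* out_obj, size_t* out_charge) {
  auto* obj = new std::string(data.data(), data.size());
  *out_obj = obj;
  *out_charge = obj->size();
  return Status::OK();
}
```

Note the lifetime rule the `GetHelper()` rewrite illustrates: the cache stores a raw `CacheItemHelper*` with each entry, so the helper must outlive every entry that references it, hence the function-local statics rather than per-call objects.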
--- a/cache/compressed_secondary_cache.h
+++ b/cache/compressed_secondary_cache.h
@@ -21,7 +21,7 @@ namespace ROCKSDB_NAMESPACE {
 
 class CompressedSecondaryCacheResultHandle : public SecondaryCacheResultHandle {
  public:
-  CompressedSecondaryCacheResultHandle(void* value, size_t size)
+  CompressedSecondaryCacheResultHandle(Cache::ObjectPtr value, size_t size)
      : value_(value), size_(size) {}
   ~CompressedSecondaryCacheResultHandle() override = default;
 
@@ -34,12 +34,12 @@ class CompressedSecondaryCacheResultHandle : public SecondaryCacheResultHandle {
 
   void Wait() override {}
 
-  void* Value() override { return value_; }
+  Cache::ObjectPtr Value() override { return value_; }
 
   size_t Size() override { return size_; }
 
  private:
-  void* value_;
+  Cache::ObjectPtr value_;
   size_t size_;
 };
 
@@ -83,12 +83,13 @@ class CompressedSecondaryCache : public SecondaryCache {
 
   const char* Name() const override { return "CompressedSecondaryCache"; }
 
-  Status Insert(const Slice& key, void* value,
+  Status Insert(const Slice& key, Cache::ObjectPtr value,
                 const Cache::CacheItemHelper* helper) override;
 
   std::unique_ptr<SecondaryCacheResultHandle> Lookup(
-      const Slice& key, const Cache::CreateCallback& create_cb, bool /*wait*/,
-      bool advise_erase, bool& is_in_sec_cache) override;
+      const Slice& key, const Cache::CacheItemHelper* helper,
+      Cache::CreateContext* create_context, bool /*wait*/, bool advise_erase,
+      bool& is_in_sec_cache) override;
 
   bool SupportForceErase() const override { return true; }
 
@@ -129,8 +130,8 @@ class CompressedSecondaryCache : public SecondaryCache {
   CacheAllocationPtr MergeChunksIntoValue(const void* chunks_head,
                                           size_t& charge);
 
-  // An implementation of Cache::DeleterFn.
-  static Cache::DeleterFn GetDeletionCallback(bool enable_custom_split_merge);
+  // TODO: clean up to use cleaner interfaces in typed_cache.h
+  const Cache::CacheItemHelper* GetHelper(bool enable_custom_split_merge) const;
   std::shared_ptr<Cache> cache_;
   CompressedSecondaryCacheOptions cache_options_;
   mutable port::Mutex capacity_mutex_;
|
|
|
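For reference, the call shape implied by the new `Lookup()` declaration above, as a compile-only sketch against the post-refactoring API (the wrapper function and its name are mine, not part of the patch):

```cpp
#include <memory>

#include "rocksdb/cache.h"
#include "rocksdb/secondary_cache.h"
#include "rocksdb/slice.h"

using namespace ROCKSDB_NAMESPACE;

// The helper now travels with the lookup and carries the create callback;
// `create_context` is opaque state for that callback (e.g. a reader object).
std::unique_ptr<SecondaryCacheResultHandle> LookupBlock(
    SecondaryCache& sec_cache, const Slice& key,
    const Cache::CacheItemHelper* helper, Cache::CreateContext* create_context,
    bool& is_in_sec_cache) {
  return sec_cache.Lookup(key, helper, create_context, /*wait=*/true,
                          /*advise_erase=*/false, is_in_sec_cache);
}
```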
--- a/cache/compressed_secondary_cache_test.cc
+++ b/cache/compressed_secondary_cache_test.cc
@@ -16,7 +16,8 @@
 
 namespace ROCKSDB_NAMESPACE {
 
-class CompressedSecondaryCacheTest : public testing::Test {
+class CompressedSecondaryCacheTest : public testing::Test,
+                                     public Cache::CreateContext {
  public:
   CompressedSecondaryCacheTest() : fail_create_(false) {}
   ~CompressedSecondaryCacheTest() override = default;
@@ -37,13 +38,13 @@ class CompressedSecondaryCacheTest : public testing::Test {
     size_t size_;
   };
 
-  static size_t SizeCallback(void* obj) {
-    return reinterpret_cast<TestItem*>(obj)->Size();
+  static size_t SizeCallback(Cache::ObjectPtr obj) {
+    return static_cast<TestItem*>(obj)->Size();
   }
 
-  static Status SaveToCallback(void* from_obj, size_t from_offset,
-                               size_t length, void* out) {
-    auto item = reinterpret_cast<TestItem*>(from_obj);
+  static Status SaveToCallback(Cache::ObjectPtr from_obj, size_t from_offset,
+                               size_t length, char* out) {
+    auto item = static_cast<TestItem*>(from_obj);
     const char* buf = item->Buf();
     EXPECT_EQ(length, item->Size());
     EXPECT_EQ(from_offset, 0);
@@ -51,30 +52,36 @@ class CompressedSecondaryCacheTest : public testing::Test {
     return Status::OK();
   }
 
-  static void DeletionCallback(const Slice& /*key*/, void* obj) {
-    delete reinterpret_cast<TestItem*>(obj);
+  static void DeletionCallback(Cache::ObjectPtr obj,
+                               MemoryAllocator* /*alloc*/) {
+    delete static_cast<TestItem*>(obj);
     obj = nullptr;
   }
 
-  static Cache::CacheItemHelper helper_;
-
-  static Status SaveToCallbackFail(void* /*obj*/, size_t /*offset*/,
-                                   size_t /*size*/, void* /*out*/) {
+  static Status SaveToCallbackFail(Cache::ObjectPtr /*obj*/, size_t /*offset*/,
+                                   size_t /*size*/, char* /*out*/) {
     return Status::NotSupported();
   }
 
-  static Cache::CacheItemHelper helper_fail_;
-
-  Cache::CreateCallback test_item_creator = [&](const void* buf, size_t size,
-                                                void** out_obj,
-                                                size_t* charge) -> Status {
-    if (fail_create_) {
+  static Status CreateCallback(const Slice& data, Cache::CreateContext* context,
+                               MemoryAllocator* /*allocator*/,
+                               Cache::ObjectPtr* out_obj, size_t* out_charge) {
+    auto t = static_cast<CompressedSecondaryCacheTest*>(context);
+    if (t->fail_create_) {
       return Status::NotSupported();
     }
-    *out_obj = reinterpret_cast<void*>(new TestItem((char*)buf, size));
-    *charge = size;
+    *out_obj = new TestItem(data.data(), data.size());
+    *out_charge = data.size();
     return Status::OK();
-  };
+  }
+
+  static constexpr Cache::CacheItemHelper kHelper{
+      CacheEntryRole::kMisc, &DeletionCallback, &SizeCallback, &SaveToCallback,
+      &CreateCallback};
+
+  static constexpr Cache::CacheItemHelper kHelperFail{
+      CacheEntryRole::kMisc, &DeletionCallback, &SizeCallback,
+      &SaveToCallbackFail, &CreateCallback};
 
   void SetFailCreate(bool fail) { fail_create_ = fail; }
 
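The test refactoring above is a compact example of the broader pattern: a per-lookup capturing `std::function` becomes one static callback plus an opaque context pointer (here the test fixture itself, via `public Cache::CreateContext`). A self-contained sketch of the idea with deliberately simplified types:

```cpp
#include <cstddef>
#include <functional>
#include <string>

struct CreateContext {};  // plays the role of Cache::CreateContext

struct TestFixture : public CreateContext {
  bool fail_create = false;
};

// New shape: a single static function; state travels as a plain pointer.
static bool CreateCallback(const std::string& data, CreateContext* context,
                           size_t* out_charge) {
  auto* t = static_cast<TestFixture*>(context);  // downcast, like the test
  if (t->fail_create) {
    return false;
  }
  *out_charge = data.size();
  return true;
}

int main() {
  TestFixture fixture;

  // Old shape: the callback owns its state via capture; std::function may
  // heap-allocate once captures outgrow the small-buffer optimization.
  std::function<bool(const std::string&, size_t*)> old_cb =
      [&fixture](const std::string& data, size_t* out_charge) {
        if (fixture.fail_create) return false;
        *out_charge = data.size();
        return true;
      };

  size_t charge = 0;
  bool ok_old = old_cb("payload", &charge);
  bool ok_new = CreateCallback("payload", &fixture, &charge);  // no allocation
  return (ok_old && ok_new) ? 0 : 1;
}
```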
@@ -84,7 +91,7 @@ class CompressedSecondaryCacheTest : public testing::Test {
     bool is_in_sec_cache{true};
     // Lookup an non-existent key.
     std::unique_ptr<SecondaryCacheResultHandle> handle0 = sec_cache->Lookup(
-        "k0", test_item_creator, true, /*advise_erase=*/true, is_in_sec_cache);
+        "k0", &kHelper, this, true, /*advise_erase=*/true, is_in_sec_cache);
     ASSERT_EQ(handle0, nullptr);
 
     Random rnd(301);
@@ -92,23 +99,21 @@ class CompressedSecondaryCacheTest : public testing::Test {
     std::string str1(rnd.RandomString(1000));
     TestItem item1(str1.data(), str1.length());
     // A dummy handle is inserted if the item is inserted for the first time.
-    ASSERT_OK(sec_cache->Insert("k1", &item1,
-                                &CompressedSecondaryCacheTest::helper_));
+    ASSERT_OK(sec_cache->Insert("k1", &item1, &kHelper));
     ASSERT_EQ(get_perf_context()->compressed_sec_cache_insert_dummy_count, 1);
     ASSERT_EQ(get_perf_context()->compressed_sec_cache_uncompressed_bytes, 0);
     ASSERT_EQ(get_perf_context()->compressed_sec_cache_compressed_bytes, 0);
 
     std::unique_ptr<SecondaryCacheResultHandle> handle1_1 = sec_cache->Lookup(
-        "k1", test_item_creator, true, /*advise_erase=*/false, is_in_sec_cache);
+        "k1", &kHelper, this, true, /*advise_erase=*/false, is_in_sec_cache);
     ASSERT_EQ(handle1_1, nullptr);
 
     // Insert and Lookup the item k1 for the second time and advise erasing it.
-    ASSERT_OK(sec_cache->Insert("k1", &item1,
-                                &CompressedSecondaryCacheTest::helper_));
+    ASSERT_OK(sec_cache->Insert("k1", &item1, &kHelper));
     ASSERT_EQ(get_perf_context()->compressed_sec_cache_insert_real_count, 1);
 
     std::unique_ptr<SecondaryCacheResultHandle> handle1_2 = sec_cache->Lookup(
-        "k1", test_item_creator, true, /*advise_erase=*/true, is_in_sec_cache);
+        "k1", &kHelper, this, true, /*advise_erase=*/true, is_in_sec_cache);
     ASSERT_NE(handle1_2, nullptr);
     ASSERT_FALSE(is_in_sec_cache);
     if (sec_cache_is_compressed) {
@@ -128,21 +133,19 @@ class CompressedSecondaryCacheTest : public testing::Test {
 
     // Lookup the item k1 again.
     std::unique_ptr<SecondaryCacheResultHandle> handle1_3 = sec_cache->Lookup(
-        "k1", test_item_creator, true, /*advise_erase=*/true, is_in_sec_cache);
+        "k1", &kHelper, this, true, /*advise_erase=*/true, is_in_sec_cache);
     ASSERT_EQ(handle1_3, nullptr);
 
     // Insert and Lookup the item k2.
     std::string str2(rnd.RandomString(1000));
     TestItem item2(str2.data(), str2.length());
-    ASSERT_OK(sec_cache->Insert("k2", &item2,
-                                &CompressedSecondaryCacheTest::helper_));
+    ASSERT_OK(sec_cache->Insert("k2", &item2, &kHelper));
     ASSERT_EQ(get_perf_context()->compressed_sec_cache_insert_dummy_count, 2);
     std::unique_ptr<SecondaryCacheResultHandle> handle2_1 = sec_cache->Lookup(
-        "k2", test_item_creator, true, /*advise_erase=*/false, is_in_sec_cache);
+        "k2", &kHelper, this, true, /*advise_erase=*/false, is_in_sec_cache);
     ASSERT_EQ(handle2_1, nullptr);
 
-    ASSERT_OK(sec_cache->Insert("k2", &item2,
-                                &CompressedSecondaryCacheTest::helper_));
+    ASSERT_OK(sec_cache->Insert("k2", &item2, &kHelper));
     ASSERT_EQ(get_perf_context()->compressed_sec_cache_insert_real_count, 2);
     if (sec_cache_is_compressed) {
       ASSERT_EQ(get_perf_context()->compressed_sec_cache_uncompressed_bytes,
@@ -154,7 +157,7 @@ class CompressedSecondaryCacheTest : public testing::Test {
       ASSERT_EQ(get_perf_context()->compressed_sec_cache_compressed_bytes, 0);
     }
     std::unique_ptr<SecondaryCacheResultHandle> handle2_2 = sec_cache->Lookup(
-        "k2", test_item_creator, true, /*advise_erase=*/false, is_in_sec_cache);
+        "k2", &kHelper, this, true, /*advise_erase=*/false, is_in_sec_cache);
     ASSERT_NE(handle2_2, nullptr);
     std::unique_ptr<TestItem> val2 =
         std::unique_ptr<TestItem>(static_cast<TestItem*>(handle2_2->Value()));
@@ -223,28 +226,24 @@ class CompressedSecondaryCacheTest : public testing::Test {
     std::string str1(rnd.RandomString(1000));
     TestItem item1(str1.data(), str1.length());
     // Insert a dummy handle.
-    ASSERT_OK(sec_cache->Insert("k1", &item1,
-                                &CompressedSecondaryCacheTest::helper_));
+    ASSERT_OK(sec_cache->Insert("k1", &item1, &kHelper));
     // Insert k1.
-    ASSERT_OK(sec_cache->Insert("k1", &item1,
-                                &CompressedSecondaryCacheTest::helper_));
+    ASSERT_OK(sec_cache->Insert("k1", &item1, &kHelper));
 
     // Insert and Lookup the second item.
     std::string str2(rnd.RandomString(200));
     TestItem item2(str2.data(), str2.length());
     // Insert a dummy handle, k1 is not evicted.
-    ASSERT_OK(sec_cache->Insert("k2", &item2,
-                                &CompressedSecondaryCacheTest::helper_));
+    ASSERT_OK(sec_cache->Insert("k2", &item2, &kHelper));
     bool is_in_sec_cache{false};
     std::unique_ptr<SecondaryCacheResultHandle> handle1 = sec_cache->Lookup(
-        "k1", test_item_creator, true, /*advise_erase=*/false, is_in_sec_cache);
+        "k1", &kHelper, this, true, /*advise_erase=*/false, is_in_sec_cache);
     ASSERT_EQ(handle1, nullptr);
 
     // Insert k2 and k1 is evicted.
-    ASSERT_OK(sec_cache->Insert("k2", &item2,
-                                &CompressedSecondaryCacheTest::helper_));
+    ASSERT_OK(sec_cache->Insert("k2", &item2, &kHelper));
     std::unique_ptr<SecondaryCacheResultHandle> handle2 = sec_cache->Lookup(
-        "k2", test_item_creator, true, /*advise_erase=*/false, is_in_sec_cache);
+        "k2", &kHelper, this, true, /*advise_erase=*/false, is_in_sec_cache);
     ASSERT_NE(handle2, nullptr);
     std::unique_ptr<TestItem> val2 =
         std::unique_ptr<TestItem>(static_cast<TestItem*>(handle2->Value()));
@@ -252,27 +251,24 @@ class CompressedSecondaryCacheTest : public testing::Test {
     ASSERT_EQ(memcmp(val2->Buf(), item2.Buf(), item2.Size()), 0);
 
     // Insert k1 again and a dummy handle is inserted.
-    ASSERT_OK(sec_cache->Insert("k1", &item1,
-                                &CompressedSecondaryCacheTest::helper_));
+    ASSERT_OK(sec_cache->Insert("k1", &item1, &kHelper));
 
     std::unique_ptr<SecondaryCacheResultHandle> handle1_1 = sec_cache->Lookup(
-        "k1", test_item_creator, true, /*advise_erase=*/false, is_in_sec_cache);
+        "k1", &kHelper, this, true, /*advise_erase=*/false, is_in_sec_cache);
     ASSERT_EQ(handle1_1, nullptr);
 
     // Create Fails.
     SetFailCreate(true);
     std::unique_ptr<SecondaryCacheResultHandle> handle2_1 = sec_cache->Lookup(
-        "k2", test_item_creator, true, /*advise_erase=*/true, is_in_sec_cache);
+        "k2", &kHelper, this, true, /*advise_erase=*/true, is_in_sec_cache);
     ASSERT_EQ(handle2_1, nullptr);
 
     // Save Fails.
     std::string str3 = rnd.RandomString(10);
     TestItem item3(str3.data(), str3.length());
     // The Status is OK because a dummy handle is inserted.
-    ASSERT_OK(sec_cache->Insert("k3", &item3,
-                                &CompressedSecondaryCacheTest::helper_fail_));
-    ASSERT_NOK(sec_cache->Insert("k3", &item3,
-                                 &CompressedSecondaryCacheTest::helper_fail_));
+    ASSERT_OK(sec_cache->Insert("k3", &item3, &kHelperFail));
+    ASSERT_NOK(sec_cache->Insert("k3", &item3, &kHelperFail));
 
     sec_cache.reset();
   }
@@ -309,15 +305,13 @@ class CompressedSecondaryCacheTest : public testing::Test {
     Random rnd(301);
     std::string str1 = rnd.RandomString(1001);
     auto item1_1 = new TestItem(str1.data(), str1.length());
-    ASSERT_OK(cache->Insert(
-        "k1", item1_1, &CompressedSecondaryCacheTest::helper_, str1.length()));
+    ASSERT_OK(cache->Insert("k1", item1_1, &kHelper, str1.length()));
 
     std::string str2 = rnd.RandomString(1012);
     auto item2_1 = new TestItem(str2.data(), str2.length());
     // After this Insert, primary cache contains k2 and secondary cache contains
     // k1's dummy item.
-    ASSERT_OK(cache->Insert(
-        "k2", item2_1, &CompressedSecondaryCacheTest::helper_, str2.length()));
+    ASSERT_OK(cache->Insert("k2", item2_1, &kHelper, str2.length()));
     ASSERT_EQ(get_perf_context()->compressed_sec_cache_insert_dummy_count, 1);
     ASSERT_EQ(get_perf_context()->compressed_sec_cache_uncompressed_bytes, 0);
     ASSERT_EQ(get_perf_context()->compressed_sec_cache_compressed_bytes, 0);
@@ -326,22 +320,19 @@ class CompressedSecondaryCacheTest : public testing::Test {
     auto item3_1 = new TestItem(str3.data(), str3.length());
     // After this Insert, primary cache contains k3 and secondary cache contains
     // k1's dummy item and k2's dummy item.
-    ASSERT_OK(cache->Insert(
-        "k3", item3_1, &CompressedSecondaryCacheTest::helper_, str3.length()));
+    ASSERT_OK(cache->Insert("k3", item3_1, &kHelper, str3.length()));
     ASSERT_EQ(get_perf_context()->compressed_sec_cache_insert_dummy_count, 2);
 
     // After this Insert, primary cache contains k1 and secondary cache contains
     // k1's dummy item, k2's dummy item, and k3's dummy item.
     auto item1_2 = new TestItem(str1.data(), str1.length());
-    ASSERT_OK(cache->Insert(
-        "k1", item1_2, &CompressedSecondaryCacheTest::helper_, str1.length()));
+    ASSERT_OK(cache->Insert("k1", item1_2, &kHelper, str1.length()));
     ASSERT_EQ(get_perf_context()->compressed_sec_cache_insert_dummy_count, 3);
 
     // After this Insert, primary cache contains k2 and secondary cache contains
     // k1's item, k2's dummy item, and k3's dummy item.
     auto item2_2 = new TestItem(str2.data(), str2.length());
-    ASSERT_OK(cache->Insert(
-        "k2", item2_2, &CompressedSecondaryCacheTest::helper_, str2.length()));
+    ASSERT_OK(cache->Insert("k2", item2_2, &kHelper, str2.length()));
     ASSERT_EQ(get_perf_context()->compressed_sec_cache_insert_real_count, 1);
     if (sec_cache_is_compressed) {
       ASSERT_EQ(get_perf_context()->compressed_sec_cache_uncompressed_bytes,
@@ -356,8 +347,7 @@ class CompressedSecondaryCacheTest : public testing::Test {
     // After this Insert, primary cache contains k3 and secondary cache contains
     // k1's item and k2's item.
     auto item3_2 = new TestItem(str3.data(), str3.length());
-    ASSERT_OK(cache->Insert(
-        "k3", item3_2, &CompressedSecondaryCacheTest::helper_, str3.length()));
+    ASSERT_OK(cache->Insert("k3", item3_2, &kHelper, str3.length()));
     ASSERT_EQ(get_perf_context()->compressed_sec_cache_insert_real_count, 2);
     if (sec_cache_is_compressed) {
       ASSERT_EQ(get_perf_context()->compressed_sec_cache_uncompressed_bytes,
@@ -370,8 +360,7 @@ class CompressedSecondaryCacheTest : public testing::Test {
     }
 
     Cache::Handle* handle;
-    handle = cache->Lookup("k3", &CompressedSecondaryCacheTest::helper_,
-                           test_item_creator, Cache::Priority::LOW, true,
+    handle = cache->Lookup("k3", &kHelper, this, Cache::Priority::LOW, true,
                            stats.get());
     ASSERT_NE(handle, nullptr);
     auto val3 = static_cast<TestItem*>(cache->Value(handle));
@@ -380,15 +369,13 @@ class CompressedSecondaryCacheTest : public testing::Test {
     cache->Release(handle);
 
     // Lookup an non-existent key.
-    handle = cache->Lookup("k0", &CompressedSecondaryCacheTest::helper_,
-                           test_item_creator, Cache::Priority::LOW, true,
+    handle = cache->Lookup("k0", &kHelper, this, Cache::Priority::LOW, true,
                            stats.get());
     ASSERT_EQ(handle, nullptr);
 
     // This Lookup should just insert a dummy handle in the primary cache
     // and the k1 is still in the secondary cache.
-    handle = cache->Lookup("k1", &CompressedSecondaryCacheTest::helper_,
-                           test_item_creator, Cache::Priority::LOW, true,
+    handle = cache->Lookup("k1", &kHelper, this, Cache::Priority::LOW, true,
                            stats.get());
     ASSERT_NE(handle, nullptr);
     ASSERT_EQ(get_perf_context()->block_cache_standalone_handle_count, 1);
@@ -400,8 +387,7 @@ class CompressedSecondaryCacheTest : public testing::Test {
     // This Lookup should erase k1 from the secondary cache and insert
     // it into primary cache; then k3 is demoted.
     // k2 and k3 are in secondary cache.
-    handle = cache->Lookup("k1", &CompressedSecondaryCacheTest::helper_,
-                           test_item_creator, Cache::Priority::LOW, true,
+    handle = cache->Lookup("k1", &kHelper, this, Cache::Priority::LOW, true,
                            stats.get());
     ASSERT_NE(handle, nullptr);
     ASSERT_EQ(get_perf_context()->block_cache_standalone_handle_count, 1);
@@ -409,8 +395,7 @@ class CompressedSecondaryCacheTest : public testing::Test {
     cache->Release(handle);
 
     // k2 is still in secondary cache.
-    handle = cache->Lookup("k2", &CompressedSecondaryCacheTest::helper_,
-                           test_item_creator, Cache::Priority::LOW, true,
+    handle = cache->Lookup("k2", &kHelper, this, Cache::Priority::LOW, true,
                            stats.get());
     ASSERT_NE(handle, nullptr);
     ASSERT_EQ(get_perf_context()->block_cache_standalone_handle_count, 2);
@@ -418,8 +403,7 @@ class CompressedSecondaryCacheTest : public testing::Test {
 
     // Testing SetCapacity().
     ASSERT_OK(secondary_cache->SetCapacity(0));
-    handle = cache->Lookup("k3", &CompressedSecondaryCacheTest::helper_,
-                           test_item_creator, Cache::Priority::LOW, true,
+    handle = cache->Lookup("k3", &kHelper, this, Cache::Priority::LOW, true,
                            stats.get());
     ASSERT_EQ(handle, nullptr);
 
@@ -429,35 +413,30 @@ class CompressedSecondaryCacheTest : public testing::Test {
     ASSERT_EQ(capacity, 7000);
     auto item1_3 = new TestItem(str1.data(), str1.length());
     // After this Insert, primary cache contains k1.
-    ASSERT_OK(cache->Insert(
-        "k1", item1_3, &CompressedSecondaryCacheTest::helper_, str2.length()));
+    ASSERT_OK(cache->Insert("k1", item1_3, &kHelper, str2.length()));
     ASSERT_EQ(get_perf_context()->compressed_sec_cache_insert_dummy_count, 3);
     ASSERT_EQ(get_perf_context()->compressed_sec_cache_insert_real_count, 4);
 
     auto item2_3 = new TestItem(str2.data(), str2.length());
     // After this Insert, primary cache contains k2 and secondary cache contains
     // k1's dummy item.
-    ASSERT_OK(cache->Insert(
-        "k2", item2_3, &CompressedSecondaryCacheTest::helper_, str1.length()));
+    ASSERT_OK(cache->Insert("k2", item2_3, &kHelper, str1.length()));
     ASSERT_EQ(get_perf_context()->compressed_sec_cache_insert_dummy_count, 4);
 
     auto item1_4 = new TestItem(str1.data(), str1.length());
     // After this Insert, primary cache contains k1 and secondary cache contains
     // k1's dummy item and k2's dummy item.
-    ASSERT_OK(cache->Insert(
-        "k1", item1_4, &CompressedSecondaryCacheTest::helper_, str2.length()));
+    ASSERT_OK(cache->Insert("k1", item1_4, &kHelper, str2.length()));
     ASSERT_EQ(get_perf_context()->compressed_sec_cache_insert_dummy_count, 5);
 
     auto item2_4 = new TestItem(str2.data(), str2.length());
     // After this Insert, primary cache contains k2 and secondary cache contains
     // k1's real item and k2's dummy item.
-    ASSERT_OK(cache->Insert(
-        "k2", item2_4, &CompressedSecondaryCacheTest::helper_, str2.length()));
+    ASSERT_OK(cache->Insert("k2", item2_4, &kHelper, str2.length()));
     ASSERT_EQ(get_perf_context()->compressed_sec_cache_insert_real_count, 5);
     // This Lookup should just insert a dummy handle in the primary cache
     // and the k1 is still in the secondary cache.
-    handle = cache->Lookup("k1", &CompressedSecondaryCacheTest::helper_,
-                           test_item_creator, Cache::Priority::LOW, true,
+    handle = cache->Lookup("k1", &kHelper, this, Cache::Priority::LOW, true,
                            stats.get());
 
     ASSERT_NE(handle, nullptr);
@@ -496,18 +475,13 @@ class CompressedSecondaryCacheTest : public testing::Test {
     Random rnd(301);
     std::string str1 = rnd.RandomString(1001);
     auto item1 = std::make_unique<TestItem>(str1.data(), str1.length());
-    ASSERT_NOK(cache->Insert("k1", item1.get(), nullptr, str1.length()));
-    ASSERT_OK(cache->Insert("k1", item1.get(),
-                            &CompressedSecondaryCacheTest::helper_,
-                            str1.length()));
+    ASSERT_OK(cache->Insert("k1", item1.get(), &kHelper, str1.length()));
     item1.release();  // Appease clang-analyze "potential memory leak"
 
     Cache::Handle* handle;
-    handle = cache->Lookup("k2", nullptr, test_item_creator,
-                           Cache::Priority::LOW, true);
+    handle = cache->Lookup("k2", nullptr, this, Cache::Priority::LOW, true);
     ASSERT_EQ(handle, nullptr);
-    handle = cache->Lookup("k2", &CompressedSecondaryCacheTest::helper_,
-                           test_item_creator, Cache::Priority::LOW, false);
+    handle = cache->Lookup("k2", &kHelper, this, Cache::Priority::LOW, false);
     ASSERT_EQ(handle, nullptr);
 
     cache.reset();
@@ -543,29 +517,25 @@ class CompressedSecondaryCacheTest : public testing::Test {
     Random rnd(301);
     std::string str1 = rnd.RandomString(1001);
     auto item1 = new TestItem(str1.data(), str1.length());
-    ASSERT_OK(cache->Insert("k1", item1,
-                            &CompressedSecondaryCacheTest::helper_fail_,
-                            str1.length()));
+    ASSERT_OK(cache->Insert("k1", item1, &kHelperFail, str1.length()));
 
     std::string str2 = rnd.RandomString(1002);
     auto item2 = new TestItem(str2.data(), str2.length());
     // k1 should be demoted to the secondary cache.
-    ASSERT_OK(cache->Insert("k2", item2,
-                            &CompressedSecondaryCacheTest::helper_fail_,
-                            str2.length()));
+    ASSERT_OK(cache->Insert("k2", item2, &kHelperFail, str2.length()));
 
     Cache::Handle* handle;
-    handle = cache->Lookup("k2", &CompressedSecondaryCacheTest::helper_fail_,
-                           test_item_creator, Cache::Priority::LOW, true);
+    handle =
+        cache->Lookup("k2", &kHelperFail, this, Cache::Priority::LOW, true);
     ASSERT_NE(handle, nullptr);
     cache->Release(handle);
     // This lookup should fail, since k1 demotion would have failed.
-    handle = cache->Lookup("k1", &CompressedSecondaryCacheTest::helper_fail_,
-                           test_item_creator, Cache::Priority::LOW, true);
+    handle =
+        cache->Lookup("k1", &kHelperFail, this, Cache::Priority::LOW, true);
     ASSERT_EQ(handle, nullptr);
     // Since k1 was not promoted, k2 should still be in cache.
-    handle = cache->Lookup("k2", &CompressedSecondaryCacheTest::helper_fail_,
-                           test_item_creator, Cache::Priority::LOW, true);
+    handle =
+        cache->Lookup("k2", &kHelperFail, this, Cache::Priority::LOW, true);
     ASSERT_NE(handle, nullptr);
     cache->Release(handle);
 
@@ -602,28 +572,23 @@ class CompressedSecondaryCacheTest : public testing::Test {
     Random rnd(301);
     std::string str1 = rnd.RandomString(1001);
     auto item1 = new TestItem(str1.data(), str1.length());
-    ASSERT_OK(cache->Insert("k1", item1, &CompressedSecondaryCacheTest::helper_,
-                            str1.length()));
+    ASSERT_OK(cache->Insert("k1", item1, &kHelper, str1.length()));
 
     std::string str2 = rnd.RandomString(1002);
     auto item2 = new TestItem(str2.data(), str2.length());
     // k1 should be demoted to the secondary cache.
-    ASSERT_OK(cache->Insert("k2", item2, &CompressedSecondaryCacheTest::helper_,
-                            str2.length()));
+    ASSERT_OK(cache->Insert("k2", item2, &kHelper, str2.length()));
 
     Cache::Handle* handle;
     SetFailCreate(true);
-    handle = cache->Lookup("k2", &CompressedSecondaryCacheTest::helper_,
-                           test_item_creator, Cache::Priority::LOW, true);
+    handle = cache->Lookup("k2", &kHelper, this, Cache::Priority::LOW, true);
     ASSERT_NE(handle, nullptr);
     cache->Release(handle);
     // This lookup should fail, since k1 creation would have failed
-    handle = cache->Lookup("k1", &CompressedSecondaryCacheTest::helper_,
-                           test_item_creator, Cache::Priority::LOW, true);
+    handle = cache->Lookup("k1", &kHelper, this, Cache::Priority::LOW, true);
     ASSERT_EQ(handle, nullptr);
     // Since k1 didn't get promoted, k2 should still be in cache
-    handle = cache->Lookup("k2", &CompressedSecondaryCacheTest::helper_,
-                           test_item_creator, Cache::Priority::LOW, true);
+    handle = cache->Lookup("k2", &kHelper, this, Cache::Priority::LOW, true);
     ASSERT_NE(handle, nullptr);
     cache->Release(handle);
 
@@ -660,32 +625,27 @@ class CompressedSecondaryCacheTest : public testing::Test {
     Random rnd(301);
     std::string str1 = rnd.RandomString(1001);
     auto item1_1 = new TestItem(str1.data(), str1.length());
-    ASSERT_OK(cache->Insert(
-        "k1", item1_1, &CompressedSecondaryCacheTest::helper_, str1.length()));
+    ASSERT_OK(cache->Insert("k1", item1_1, &kHelper, str1.length()));
 
     std::string str2 = rnd.RandomString(1002);
     std::string str2_clone{str2};
     auto item2 = new TestItem(str2.data(), str2.length());
     // After this Insert, primary cache contains k2 and secondary cache contains
     // k1's dummy item.
-    ASSERT_OK(cache->Insert("k2", item2, &CompressedSecondaryCacheTest::helper_,
-                            str2.length()));
+    ASSERT_OK(cache->Insert("k2", item2, &kHelper, str2.length()));
 
     // After this Insert, primary cache contains k1 and secondary cache contains
     // k1's dummy item and k2's dummy item.
     auto item1_2 = new TestItem(str1.data(), str1.length());
-    ASSERT_OK(cache->Insert(
-        "k1", item1_2, &CompressedSecondaryCacheTest::helper_, str1.length()));
+    ASSERT_OK(cache->Insert("k1", item1_2, &kHelper, str1.length()));
 
     auto item2_2 = new TestItem(str2.data(), str2.length());
     // After this Insert, primary cache contains k2 and secondary cache contains
     // k1's item and k2's dummy item.
-    ASSERT_OK(cache->Insert(
-        "k2", item2_2, &CompressedSecondaryCacheTest::helper_, str2.length()));
+    ASSERT_OK(cache->Insert("k2", item2_2, &kHelper, str2.length()));
 
     Cache::Handle* handle2;
-    handle2 = cache->Lookup("k2", &CompressedSecondaryCacheTest::helper_,
-                            test_item_creator, Cache::Priority::LOW, true);
+    handle2 = cache->Lookup("k2", &kHelper, this, Cache::Priority::LOW, true);
     ASSERT_NE(handle2, nullptr);
     cache->Release(handle2);
 
@@ -693,14 +653,12 @@ class CompressedSecondaryCacheTest : public testing::Test {
     // strict_capacity_limit is true, but the lookup should still succeed.
     // A k1's dummy item is inserted into primary cache.
     Cache::Handle* handle1;
-    handle1 = cache->Lookup("k1", &CompressedSecondaryCacheTest::helper_,
-                            test_item_creator, Cache::Priority::LOW, true);
+    handle1 = cache->Lookup("k1", &kHelper, this, Cache::Priority::LOW, true);
     ASSERT_NE(handle1, nullptr);
     cache->Release(handle1);
 
     // Since k1 didn't get inserted, k2 should still be in cache
-    handle2 = cache->Lookup("k2", &CompressedSecondaryCacheTest::helper_,
-                            test_item_creator, Cache::Priority::LOW, true);
+    handle2 = cache->Lookup("k2", &kHelper, this, Cache::Priority::LOW, true);
     ASSERT_NE(handle2, nullptr);
     cache->Release(handle2);
 
@@ -741,7 +699,7 @@ class CompressedSecondaryCacheTest : public testing::Test {
     current_chunk = current_chunk->next;
     ASSERT_EQ(current_chunk->size, 98);
 
-    sec_cache->GetDeletionCallback(true)("dummy", chunks_head);
+    sec_cache->GetHelper(true)->del_cb(chunks_head, /*alloc*/ nullptr);
   }
 
   void MergeChunksIntoValueTest() {
@@ -822,23 +780,13 @@ class CompressedSecondaryCacheTest : public testing::Test {
     std::string value_str{value.get(), charge};
     ASSERT_EQ(strcmp(value_str.data(), str.data()), 0);
 
-    sec_cache->GetDeletionCallback(true)("dummy", chunks_head);
+    sec_cache->GetHelper(true)->del_cb(chunks_head, /*alloc*/ nullptr);
   }
 
  private:
   bool fail_create_;
 };
 
-Cache::CacheItemHelper CompressedSecondaryCacheTest::helper_(
-    CompressedSecondaryCacheTest::SizeCallback,
-    CompressedSecondaryCacheTest::SaveToCallback,
-    CompressedSecondaryCacheTest::DeletionCallback);
-
-Cache::CacheItemHelper CompressedSecondaryCacheTest::helper_fail_(
-    CompressedSecondaryCacheTest::SizeCallback,
-    CompressedSecondaryCacheTest::SaveToCallbackFail,
-    CompressedSecondaryCacheTest::DeletionCallback);
-
 class CompressedSecCacheTestWithCompressAndAllocatorParam
     : public CompressedSecondaryCacheTest,
       public ::testing::WithParamInterface<std::tuple<bool, bool>> {
--- a/cache/lru_cache.cc
+++ b/cache/lru_cache.cc
@@ -22,20 +22,28 @@
 namespace ROCKSDB_NAMESPACE {
 namespace lru_cache {
 
+namespace {
 // A distinct pointer value for marking "dummy" cache entries
-void* const kDummyValueMarker = const_cast<char*>("kDummyValueMarker");
+struct DummyValue {
+  char val[12] = "kDummyValue";
+};
+DummyValue kDummyValue{};
+}  // namespace
 
-LRUHandleTable::LRUHandleTable(int max_upper_hash_bits)
+LRUHandleTable::LRUHandleTable(int max_upper_hash_bits,
+                               MemoryAllocator* allocator)
     : length_bits_(/* historical starting size*/ 4),
       list_(new LRUHandle* [size_t{1} << length_bits_] {}),
       elems_(0),
-      max_length_bits_(max_upper_hash_bits) {}
+      max_length_bits_(max_upper_hash_bits),
+      allocator_(allocator) {}
 
 LRUHandleTable::~LRUHandleTable() {
+  auto alloc = allocator_;
   ApplyToEntriesRange(
-      [](LRUHandle* h) {
+      [alloc](LRUHandle* h) {
         if (!h->HasRefs()) {
-          h->Free();
+          h->Free(alloc);
         }
       },
       0, size_t{1} << length_bits_);
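The `DummyValue` change above swaps a string literal cast to `void*` for the address of a dedicated static object. A small sketch of why the sentinel-object form is preferable (illustrative only; the payload is irrelevant, only the address matters):

```cpp
#include <cassert>

namespace {
// A dedicated object cannot collide with any real value pointer and needs
// no const_cast, unlike a string literal forced into void*.
struct DummyValue {
  char val[12] = "kDummyValue";
};
DummyValue kDummyValue{};
}  // namespace

int main() {
  void* value = &kDummyValue;     // store the sentinel as an entry's value
  assert(value == &kDummyValue);  // identity check: no deref, no strcmp
  value = nullptr;                // any other pointer means "real entry"
  assert(value != &kDummyValue);
  return 0;
}
```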
@@ -118,6 +126,7 @@ LRUCacheShard::LRUCacheShard(size_t capacity, bool strict_capacity_limit,
                              double low_pri_pool_ratio, bool use_adaptive_mutex,
                              CacheMetadataChargePolicy metadata_charge_policy,
                              int max_upper_hash_bits,
+                             MemoryAllocator* allocator,
                              SecondaryCache* secondary_cache)
     : CacheShardBase(metadata_charge_policy),
       capacity_(0),
@@ -128,7 +137,7 @@ LRUCacheShard::LRUCacheShard(size_t capacity, bool strict_capacity_limit,
       high_pri_pool_capacity_(0),
       low_pri_pool_ratio_(low_pri_pool_ratio),
       low_pri_pool_capacity_(0),
-      table_(max_upper_hash_bits),
+      table_(max_upper_hash_bits, allocator),
       usage_(0),
       lru_usage_(0),
       mutex_(use_adaptive_mutex),
@@ -159,13 +168,14 @@ void LRUCacheShard::EraseUnRefEntries() {
   }
 
   for (auto entry : last_reference_list) {
-    entry->Free();
+    entry->Free(table_.GetAllocator());
   }
 }
 
 void LRUCacheShard::ApplyToSomeEntries(
-    const std::function<void(const Slice& key, void* value, size_t charge,
-                             DeleterFn deleter)>& callback,
+    const std::function<void(const Slice& key, Cache::ObjectPtr value,
                             size_t charge,
+                             const Cache::CacheItemHelper* helper)>& callback,
     size_t average_entries_per_lock, size_t* state) {
   // The state is essentially going to be the starting hash, which works
   // nicely even if we resize between calls because we use upper-most
@@ -192,11 +202,8 @@ void LRUCacheShard::ApplyToSomeEntries(
   table_.ApplyToEntriesRange(
       [callback,
        metadata_charge_policy = metadata_charge_policy_](LRUHandle* h) {
-        DeleterFn deleter = h->IsSecondaryCacheCompatible()
-                                ? h->info_.helper->del_cb
-                                : h->info_.deleter;
         callback(h->key(), h->value, h->GetCharge(metadata_charge_policy),
-                 deleter);
+                 h->helper);
       },
       index_begin, index_end);
 }
@@ -339,11 +346,11 @@ void LRUCacheShard::TryInsertIntoSecondaryCache(
   for (auto entry : evicted_handles) {
     if (secondary_cache_ && entry->IsSecondaryCacheCompatible() &&
         !entry->IsInSecondaryCache()) {
-      secondary_cache_->Insert(entry->key(), entry->value, entry->info_.helper)
+      secondary_cache_->Insert(entry->key(), entry->value, entry->helper)
           .PermitUncheckedError();
     }
     // Free the entries here outside of mutex for performance reasons.
-    entry->Free();
+    entry->Free(table_.GetAllocator());
   }
 }
 
@@ -464,7 +471,7 @@ void LRUCacheShard::Promote(LRUHandle* e) {
   TryInsertIntoSecondaryCache(last_reference_list);
   if (free_standalone_handle) {
     e->Unref();
-    e->Free();
+    e->Free(table_.GetAllocator());
     e = nullptr;
   } else {
     PERF_COUNTER_ADD(block_cache_standalone_handle_count, 1);
@@ -476,9 +483,9 @@ void LRUCacheShard::Promote(LRUHandle* e) {
     // rare case that one exists
     Cache::Priority priority =
         e->IsHighPri() ? Cache::Priority::HIGH : Cache::Priority::LOW;
-    s = Insert(e->key(), e->hash, kDummyValueMarker, /*charge=*/0,
-               /*deleter=*/nullptr, /*helper=*/nullptr, /*handle=*/nullptr,
-               priority);
+    s = Insert(e->key(), e->hash, &kDummyValue, &kNoopCacheItemHelper,
+               /*charge=*/0,
+               /*handle=*/nullptr, priority);
   } else {
     e->SetInCache(true);
     LRUHandle* handle = e;
@@ -508,7 +515,7 @@ void LRUCacheShard::Promote(LRUHandle* e) {
 
 LRUHandle* LRUCacheShard::Lookup(const Slice& key, uint32_t hash,
                                  const Cache::CacheItemHelper* helper,
-                                 const Cache::CreateCallback& create_cb,
+                                 Cache::CreateContext* create_context,
                                  Cache::Priority priority, bool wait,
                                  Statistics* stats) {
   LRUHandle* e = nullptr;
@@ -518,7 +525,7 @@ LRUHandle* LRUCacheShard::Lookup(const Slice& key, uint32_t hash,
   e = table_.Lookup(key, hash);
   if (e != nullptr) {
     assert(e->InCache());
-    if (e->value == kDummyValueMarker) {
+    if (e->value == &kDummyValue) {
       // For a dummy handle, if it was retrieved from secondary cache,
       // it may still exist in secondary cache.
      // If the handle exists in secondary cache, the value should be
@@ -547,24 +554,17 @@ LRUHandle* LRUCacheShard::Lookup(const Slice& key, uint32_t hash,
   // standalone handle is returned to the caller. Only if the block is hit
   // again, we erase it from CompressedSecondaryCache and add it into the
   // primary cache.
-  if (!e && secondary_cache_ && helper && helper->saveto_cb) {
-    // For objects from the secondary cache, we expect the caller to provide
-    // a way to create/delete the primary cache object. The only case where
-    // a deleter would not be required is for dummy entries inserted for
-    // accounting purposes, which we won't demote to the secondary cache
-    // anyway.
-    assert(create_cb && helper->del_cb);
+  if (!e && secondary_cache_ && helper && helper->create_cb) {
     bool is_in_sec_cache{false};
     std::unique_ptr<SecondaryCacheResultHandle> secondary_handle =
-        secondary_cache_->Lookup(key, create_cb, wait, found_dummy_entry,
-                                 is_in_sec_cache);
+        secondary_cache_->Lookup(key, helper, create_context, wait,
+                                 found_dummy_entry, is_in_sec_cache);
     if (secondary_handle != nullptr) {
       e = static_cast<LRUHandle*>(malloc(sizeof(LRUHandle) - 1 + key.size()));
 
       e->m_flags = 0;
       e->im_flags = 0;
-      e->SetSecondaryCacheCompatible(true);
-      e->info_.helper = helper;
+      e->helper = helper;
       e->key_length = key.size();
       e->hash = hash;
       e->refs = 0;
@@ -585,7 +585,7 @@ LRUHandle* LRUCacheShard::Lookup(const Slice& key, uint32_t hash,
       if (!e->value) {
         // The secondary cache returned a handle, but the lookup failed.
         e->Unref();
-        e->Free();
+        e->Free(table_.GetAllocator());
         e = nullptr;
       } else {
         PERF_COUNTER_ADD(secondary_cache_hit_count, 1);
@@ -669,16 +669,18 @@ bool LRUCacheShard::Release(LRUHandle* e, bool /*useful*/,
 
   // Free the entry here outside of mutex for performance reasons.
   if (last_reference) {
-    e->Free();
+    e->Free(table_.GetAllocator());
   }
   return last_reference;
 }
 
-Status LRUCacheShard::Insert(const Slice& key, uint32_t hash, void* value,
-                             size_t charge,
-                             void (*deleter)(const Slice& key, void* value),
+Status LRUCacheShard::Insert(const Slice& key, uint32_t hash,
+                             Cache::ObjectPtr value,
                              const Cache::CacheItemHelper* helper,
-                             LRUHandle** handle, Cache::Priority priority) {
+                             size_t charge, LRUHandle** handle,
+                             Cache::Priority priority) {
+  assert(helper);
+
   // Allocate the memory here outside of the mutex.
   // If the cache is full, we'll have to release it.
   // It shouldn't happen very often though.
@@ -688,17 +690,7 @@ Status LRUCacheShard::Insert(const Slice& key, uint32_t hash, void* value,
   e->value = value;
   e->m_flags = 0;
   e->im_flags = 0;
-  if (helper) {
-    // Use only one of the two parameters
-    assert(deleter == nullptr);
-    // value == nullptr is reserved for indicating failure for when secondary
-    // cache compatible
-    assert(value != nullptr);
-    e->SetSecondaryCacheCompatible(true);
-    e->info_.helper = helper;
-  } else {
-    e->info_.deleter = deleter;
-  }
+  e->helper = helper;
   e->key_length = key.size();
   e->hash = hash;
   e->refs = 0;
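With the deleter path gone, `Insert()` above enforces a single contract: every entry carries a helper, and a no-op helper (`kNoopCacheItemHelper` in the hunks above) stands in when there is nothing to free. A sketch of that contract with stand-in types (names are illustrative, not the RocksDB API):

```cpp
#include <cassert>

struct ItemHelper {
  void (*del_cb)(void* obj) = nullptr;  // nullptr: nothing to free
};

const ItemHelper kNoopHelper{};  // plays the role of kNoopCacheItemHelper

struct Handle {
  void* value;
  const ItemHelper* helper;
  void Free() {
    if (helper->del_cb != nullptr) {
      helper->del_cb(value);
    }
  }
};

Handle MakeHandle(void* value, const ItemHelper* helper) {
  assert(helper != nullptr);  // the new contract: no helper-vs-deleter branch
  return Handle{value, helper};
}

int main() {
  Handle dummy = MakeHandle(nullptr, &kNoopHelper);
  dummy.Free();  // safe no-op; previously this was the "no deleter" branch
  static const ItemHelper kIntHelper{
      [](void* obj) { delete static_cast<int*>(obj); }};
  Handle real = MakeHandle(new int(1), &kIntHelper);
  real.Free();
  return 0;
}
```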
@ -708,6 +700,10 @@ Status LRUCacheShard::Insert(const Slice& key, uint32_t hash, void* value,
|
||||||
memcpy(e->key_data, key.data(), key.size());
|
memcpy(e->key_data, key.data(), key.size());
|
||||||
e->CalcTotalCharge(charge, metadata_charge_policy_);
|
e->CalcTotalCharge(charge, metadata_charge_policy_);
|
||||||
|
|
||||||
|
// value == nullptr is reserved for indicating failure for when secondary
|
||||||
|
// cache compatible
|
||||||
|
assert(!(e->IsSecondaryCacheCompatible() && value == nullptr));
|
||||||
|
|
||||||
return InsertItem(e, handle, /* free_handle_on_fail */ true);
|
return InsertItem(e, handle, /* free_handle_on_fail */ true);
|
||||||
}
|
}
|
||||||
|
|
||||||
@@ -733,7 +729,7 @@ void LRUCacheShard::Erase(const Slice& key, uint32_t hash) {
   // Free the entry here outside of mutex for performance reasons.
   // last_reference will only be true if e != nullptr.
   if (last_reference) {
-    e->Free();
+    e->Free(table_.GetAllocator());
   }
 }
 
@@ -793,18 +789,19 @@ LRUCache::LRUCache(size_t capacity, int num_shard_bits,
       secondary_cache_(std::move(_secondary_cache)) {
   size_t per_shard = GetPerShardCapacity();
   SecondaryCache* secondary_cache = secondary_cache_.get();
+  MemoryAllocator* alloc = memory_allocator();
   InitShards([=](LRUCacheShard* cs) {
     new (cs) LRUCacheShard(
         per_shard, strict_capacity_limit, high_pri_pool_ratio,
         low_pri_pool_ratio, use_adaptive_mutex, metadata_charge_policy,
-        /* max_upper_hash_bits */ 32 - num_shard_bits, secondary_cache);
+        /* max_upper_hash_bits */ 32 - num_shard_bits, alloc, secondary_cache);
   });
 }
 
-void* LRUCache::Value(Handle* handle) {
+Cache::ObjectPtr LRUCache::Value(Handle* handle) {
   auto h = reinterpret_cast<const LRUHandle*>(handle);
   assert(!h->IsPending() || h->value == nullptr);
-  assert(h->value != kDummyValueMarker);
+  assert(h->value != &kDummyValue);
   return h->value;
 }
 
@@ -813,13 +810,10 @@ size_t LRUCache::GetCharge(Handle* handle) const {
                                GetShard(0).metadata_charge_policy_);
 }
 
-Cache::DeleterFn LRUCache::GetDeleter(Handle* handle) const {
+const Cache::CacheItemHelper* LRUCache::GetCacheItemHelper(
+    Handle* handle) const {
   auto h = reinterpret_cast<const LRUHandle*>(handle);
-  if (h->IsSecondaryCacheCompatible()) {
-    return h->info_.helper->del_cb;
-  } else {
-    return h->info_.deleter;
-  }
+  return h->helper;
 }
 
 size_t LRUCache::TEST_GetLRUSize() {
@@ -13,6 +13,7 @@
 
 #include "cache/sharded_cache.h"
 #include "port/lang.h"
+#include "port/likely.h"
 #include "port/malloc.h"
 #include "port/port.h"
 #include "rocksdb/secondary_cache.h"
@@ -48,13 +49,8 @@ namespace lru_cache {
 // While refs > 0, public properties like value and deleter must not change.
 
 struct LRUHandle {
-  void* value;
-  union Info {
-    Info() {}
-    ~Info() {}
-    Cache::DeleterFn deleter;
-    const Cache::CacheItemHelper* helper;
-  } info_;
+  Cache::ObjectPtr value;
+  const Cache::CacheItemHelper* helper;
   // An entry is not added to the LRUHandleTable until the secondary cache
   // lookup is complete, so its safe to have this union.
   union {
@@ -93,14 +89,12 @@ struct LRUHandle {
     IM_IS_HIGH_PRI = (1 << 0),
     // Whether this entry is low priority entry.
     IM_IS_LOW_PRI = (1 << 1),
-    // Can this be inserted into the secondary cache.
-    IM_IS_SECONDARY_CACHE_COMPATIBLE = (1 << 2),
     // Is the handle still being read from a lower tier.
-    IM_IS_PENDING = (1 << 3),
+    IM_IS_PENDING = (1 << 2),
     // Whether this handle is still in a lower tier
-    IM_IS_IN_SECONDARY_CACHE = (1 << 4),
+    IM_IS_IN_SECONDARY_CACHE = (1 << 3),
     // Marks result handles that should not be inserted into cache
-    IM_IS_STANDALONE = (1 << 5),
+    IM_IS_STANDALONE = (1 << 4),
   };
 
   // Beginning of the key (MUST BE THE LAST FIELD IN THIS STRUCT!)
@@ -130,9 +124,7 @@ struct LRUHandle {
   bool IsLowPri() const { return im_flags & IM_IS_LOW_PRI; }
   bool InLowPriPool() const { return m_flags & M_IN_LOW_PRI_POOL; }
   bool HasHit() const { return m_flags & M_HAS_HIT; }
-  bool IsSecondaryCacheCompatible() const {
-    return im_flags & IM_IS_SECONDARY_CACHE_COMPATIBLE;
-  }
+  bool IsSecondaryCacheCompatible() const { return helper->size_cb != nullptr; }
   bool IsPending() const { return im_flags & IM_IS_PENDING; }
   bool IsInSecondaryCache() const {
     return im_flags & IM_IS_IN_SECONDARY_CACHE;
@@ -178,14 +170,6 @@ struct LRUHandle {
 
   void SetHit() { m_flags |= M_HAS_HIT; }
 
-  void SetSecondaryCacheCompatible(bool compat) {
-    if (compat) {
-      im_flags |= IM_IS_SECONDARY_CACHE_COMPATIBLE;
-    } else {
-      im_flags &= ~IM_IS_SECONDARY_CACHE_COMPATIBLE;
-    }
-  }
-
   void SetIsPending(bool pending) {
     if (pending) {
       im_flags |= IM_IS_PENDING;
@@ -210,22 +194,19 @@ struct LRUHandle {
     }
   }
 
-  void Free() {
+  void Free(MemoryAllocator* allocator) {
     assert(refs == 0);
 
-    if (!IsSecondaryCacheCompatible() && info_.deleter) {
-      (*info_.deleter)(key(), value);
-    } else if (IsSecondaryCacheCompatible()) {
-      if (IsPending()) {
-        assert(sec_handle != nullptr);
-        SecondaryCacheResultHandle* tmp_sec_handle = sec_handle;
-        tmp_sec_handle->Wait();
-        value = tmp_sec_handle->Value();
-        delete tmp_sec_handle;
-      }
-      if (value) {
-        (*info_.helper->del_cb)(key(), value);
-      }
-    }
+    if (UNLIKELY(IsPending())) {
+      assert(sec_handle != nullptr);
+      SecondaryCacheResultHandle* tmp_sec_handle = sec_handle;
+      tmp_sec_handle->Wait();
+      value = tmp_sec_handle->Value();
+      delete tmp_sec_handle;
+    }
+    assert(helper);
+    if (helper->del_cb) {
+      helper->del_cb(value, allocator);
+    }
 
     free(this);
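`Free()` now receives the table's `MemoryAllocator`, so a delete callback can hand custom-allocated memory back to its source. A sketch of an allocator-aware `del_cb`, assuming an illustrative `MyBlock` type and the single-argument `MemoryAllocator::Deallocate` (the latter is my recollection of that interface, not something shown in this diff):

```cpp
struct MyBlock {
  char* data = nullptr;  // possibly obtained from the cache's MemoryAllocator
};

void MyBlockDelete(Cache::ObjectPtr obj, MemoryAllocator* alloc) {
  auto* block = static_cast<MyBlock*>(obj);
  if (alloc != nullptr) {
    // Return the buffer to the allocator that produced it.
    alloc->Deallocate(block->data);
  } else {
    delete[] block->data;  // buffer came from plain new[]
  }
  delete block;
}
```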
@@ -267,7 +248,7 @@ struct LRUHandle {
 // 4.4.3's builtin hashtable.
 class LRUHandleTable {
  public:
-  explicit LRUHandleTable(int max_upper_hash_bits);
+  explicit LRUHandleTable(int max_upper_hash_bits, MemoryAllocator* allocator);
   ~LRUHandleTable();
 
   LRUHandle* Lookup(const Slice& key, uint32_t hash);
@@ -291,6 +272,8 @@ class LRUHandleTable {
 
   size_t GetOccupancyCount() const { return elems_; }
 
+  MemoryAllocator* GetAllocator() const { return allocator_; }
+
  private:
   // Return a pointer to slot that points to a cache entry that
   // matches key/hash. If there is no such cache entry, return a
@@ -312,6 +295,9 @@ class LRUHandleTable {
 
   // Set from max_upper_hash_bits (see constructor).
   const int max_length_bits_;
+
+  // From Cache, needed for delete
+  MemoryAllocator* const allocator_;
 };
 
 // A single shard of sharded cache.
@@ -321,7 +307,8 @@ class ALIGN_AS(CACHE_LINE_SIZE) LRUCacheShard final : public CacheShardBase {
                 double high_pri_pool_ratio, double low_pri_pool_ratio,
                 bool use_adaptive_mutex,
                 CacheMetadataChargePolicy metadata_charge_policy,
-                int max_upper_hash_bits, SecondaryCache* secondary_cache);
+                int max_upper_hash_bits, MemoryAllocator* allocator,
+                SecondaryCache* secondary_cache);
 
  public:  // Type definitions expected as parameter to ShardedCache
   using HandleImpl = LRUHandle;
@@ -348,26 +335,15 @@ class ALIGN_AS(CACHE_LINE_SIZE) LRUCacheShard final : public CacheShardBase {
   void SetLowPriorityPoolRatio(double low_pri_pool_ratio);
 
   // Like Cache methods, but with an extra "hash" parameter.
-  inline Status Insert(const Slice& key, uint32_t hash, void* value,
-                       size_t charge, Cache::DeleterFn deleter,
-                       LRUHandle** handle, Cache::Priority priority) {
-    return Insert(key, hash, value, charge, deleter, nullptr, handle, priority);
-  }
-  inline Status Insert(const Slice& key, uint32_t hash, void* value,
-                       const Cache::CacheItemHelper* helper, size_t charge,
-                       LRUHandle** handle, Cache::Priority priority) {
-    assert(helper);
-    return Insert(key, hash, value, charge, nullptr, helper, handle, priority);
-  }
-  // If helper_cb is null, the values of the following arguments don't matter.
+  Status Insert(const Slice& key, uint32_t hash, Cache::ObjectPtr value,
+                const Cache::CacheItemHelper* helper, size_t charge,
+                LRUHandle** handle, Cache::Priority priority);
+
   LRUHandle* Lookup(const Slice& key, uint32_t hash,
                     const Cache::CacheItemHelper* helper,
-                    const Cache::CreateCallback& create_cb,
+                    Cache::CreateContext* create_context,
                     Cache::Priority priority, bool wait, Statistics* stats);
-  inline LRUHandle* Lookup(const Slice& key, uint32_t hash) {
-    return Lookup(key, hash, nullptr, nullptr, Cache::Priority::LOW, true,
-                  nullptr);
-  }
+
   bool Release(LRUHandle* handle, bool useful, bool erase_if_last_ref);
   bool IsReady(LRUHandle* /*handle*/);
   void Wait(LRUHandle* /*handle*/) {}
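The `Lookup` declaration above shows the `CreateContext*` model from the summary: instead of capturing state in a `std::function` (a heap allocation per lookup), the caller passes a raw context pointer and the helper's `create_cb` downcasts it. A sketch modeled on the test code later in this change (`MyCreateContext` and `MyCreate` are illustrative names):

```cpp
#include <string>

struct MyCreateContext : public Cache::CreateContext {
  bool fail_create = false;
};

static Status MyCreate(const Slice& data, Cache::CreateContext* context,
                       MemoryAllocator* /*allocator*/,
                       Cache::ObjectPtr* out_obj, size_t* out_charge) {
  auto* ctx = static_cast<MyCreateContext*>(context);
  if (ctx->fail_create) {
    return Status::NotSupported();
  }
  // Parse the serialized bytes into an in-memory object.
  *out_obj = new std::string(data.data(), data.size());
  *out_charge = data.size();
  return Status::OK();
}
```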
@@ -384,8 +360,9 @@ class ALIGN_AS(CACHE_LINE_SIZE) LRUCacheShard final : public CacheShardBase {
   size_t GetTableAddressCount() const;
 
   void ApplyToSomeEntries(
-      const std::function<void(const Slice& key, void* value, size_t charge,
-                               DeleterFn deleter)>& callback,
+      const std::function<void(const Slice& key, Cache::ObjectPtr value,
+                               size_t charge,
+                               const Cache::CacheItemHelper* helper)>& callback,
       size_t average_entries_per_lock, size_t* state);
 
   void EraseUnRefEntries();
@@ -414,9 +391,6 @@ class ALIGN_AS(CACHE_LINE_SIZE) LRUCacheShard final : public CacheShardBase {
   // nullptr.
   Status InsertItem(LRUHandle* item, LRUHandle** handle,
                     bool free_handle_on_fail);
-  Status Insert(const Slice& key, uint32_t hash, void* value, size_t charge,
-                DeleterFn deleter, const Cache::CacheItemHelper* helper,
-                LRUHandle** handle, Cache::Priority priority);
   // Promote an item looked up from the secondary cache to the LRU cache.
   // The item may be still in the secondary cache.
   // It is only inserted into the hash table and not the LRU list, and only
@@ -521,9 +495,9 @@ class LRUCache
                     kDontChargeCacheMetadata,
                 std::shared_ptr<SecondaryCache> secondary_cache = nullptr);
   const char* Name() const override { return "LRUCache"; }
-  void* Value(Handle* handle) override;
+  ObjectPtr Value(Handle* handle) override;
   size_t GetCharge(Handle* handle) const override;
-  DeleterFn GetDeleter(Handle* handle) const override;
+  const CacheItemHelper* GetCacheItemHelper(Handle* handle) const override;
   void WaitAll(std::vector<Handle*>& handles) override;
 
   // Retrieves number of elements in LRU, for unit test purpose only.
@@ -10,6 +10,7 @@
 
 #include "cache/cache_key.h"
 #include "cache/clock_cache.h"
+#include "cache_helpers.h"
 #include "db/db_test_util.h"
 #include "file/sst_file_manager_impl.h"
 #include "port/port.h"
@@ -19,6 +20,7 @@
 #include "rocksdb/sst_file_manager.h"
 #include "rocksdb/utilities/cache_dump_load.h"
 #include "test_util/testharness.h"
+#include "typed_cache.h"
 #include "util/coding.h"
 #include "util/random.h"
 #include "utilities/cache_dump_load_impl.h"
@@ -49,14 +51,15 @@ class LRUCacheTest : public testing::Test {
                                high_pri_pool_ratio, low_pri_pool_ratio,
                                use_adaptive_mutex, kDontChargeCacheMetadata,
                                /*max_upper_hash_bits=*/24,
+                               /*allocator*/ nullptr,
                                /*secondary_cache=*/nullptr);
   }
 
   void Insert(const std::string& key,
              Cache::Priority priority = Cache::Priority::LOW) {
-    EXPECT_OK(cache_->Insert(key, 0 /*hash*/, nullptr /*value*/, 1 /*charge*/,
-                             nullptr /*deleter*/, nullptr /*handle*/,
-                             priority));
+    EXPECT_OK(cache_->Insert(key, 0 /*hash*/, nullptr /*value*/,
+                             &kNoopCacheItemHelper, 1 /*charge*/,
+                             nullptr /*handle*/, priority));
   }
 
   void Insert(char key, Cache::Priority priority = Cache::Priority::LOW) {
@@ -64,7 +67,8 @@ class LRUCacheTest : public testing::Test {
   }
 
   bool Lookup(const std::string& key) {
-    auto handle = cache_->Lookup(key, 0 /*hash*/);
+    auto handle = cache_->Lookup(key, 0 /*hash*/, nullptr, nullptr,
+                                 Cache::Priority::LOW, true, nullptr);
     if (handle) {
       cache_->Release(handle, true /*useful*/, false /*erase*/);
       return true;
@@ -389,15 +393,15 @@ class ClockCacheTest : public testing::Test {
 
     Table::Opts opts;
     opts.estimated_value_size = 1;
-    new (shard_)
-        Shard(capacity, strict_capacity_limit, kDontChargeCacheMetadata, opts);
+    new (shard_) Shard(capacity, strict_capacity_limit,
+                       kDontChargeCacheMetadata, /*allocator*/ nullptr, opts);
   }
 
   Status Insert(const UniqueId64x2& hashed_key,
                 Cache::Priority priority = Cache::Priority::LOW) {
     return shard_->Insert(TestKey(hashed_key), hashed_key, nullptr /*value*/,
-                          1 /*charge*/, nullptr /*deleter*/, nullptr /*handle*/,
-                          priority);
+                          &kNoopCacheItemHelper, 1 /*charge*/,
+                          nullptr /*handle*/, priority);
   }
 
   Status Insert(char key, Cache::Priority priority = Cache::Priority::LOW) {
@@ -407,8 +411,8 @@ class ClockCacheTest : public testing::Test {
   Status InsertWithLen(char key, size_t len) {
     std::string skey(len, key);
     return shard_->Insert(skey, TestHashedKey(key), nullptr /*value*/,
-                          1 /*charge*/, nullptr /*deleter*/, nullptr /*handle*/,
-                          Cache::Priority::LOW);
+                          &kNoopCacheItemHelper, 1 /*charge*/,
+                          nullptr /*handle*/, Cache::Priority::LOW);
   }
 
   bool Lookup(const Slice& key, const UniqueId64x2& hashed_key,
@@ -482,7 +486,7 @@ TEST_F(ClockCacheTest, Limits) {
   // Single entry charge beyond capacity
   {
     Status s = shard_->Insert(TestKey(hkey), hkey, nullptr /*value*/,
-                              5 /*charge*/, nullptr /*deleter*/,
+                              &kNoopCacheItemHelper, 5 /*charge*/,
                               nullptr /*handle*/, Cache::Priority::LOW);
     if (strict_capacity_limit) {
       EXPECT_TRUE(s.IsMemoryLimit());
@@ -495,7 +499,7 @@ TEST_F(ClockCacheTest, Limits) {
   {
     HandleImpl* h;
     ASSERT_OK(shard_->Insert(TestKey(hkey), hkey, nullptr /*value*/,
-                             3 /*charge*/, nullptr /*deleter*/, &h,
+                             &kNoopCacheItemHelper, 3 /*charge*/, &h,
                              Cache::Priority::LOW));
     // Try to insert more
     Status s = Insert('a');
@@ -519,8 +523,9 @@ TEST_F(ClockCacheTest, Limits) {
   for (size_t i = 0; i < n && s.ok(); ++i) {
     hkey[1] = i;
     s = shard_->Insert(TestKey(hkey), hkey, nullptr /*value*/,
-                       (i + kCapacity < n) ? 0 : 1 /*charge*/,
-                       nullptr /*deleter*/, &ha[i], Cache::Priority::LOW);
+                       &kNoopCacheItemHelper,
+                       (i + kCapacity < n) ? 0 : 1 /*charge*/, &ha[i],
+                       Cache::Priority::LOW);
     if (i == 0) {
       EXPECT_OK(s);
     }
@@ -658,18 +663,25 @@ TEST_F(ClockCacheTest, ClockEvictionTest) {
   }
 }
 
-void IncrementIntDeleter(const Slice& /*key*/, void* value) {
-  *reinterpret_cast<int*>(value) += 1;
-}
+namespace {
+struct DeleteCounter {
+  int deleted = 0;
+};
+const Cache::CacheItemHelper kDeleteCounterHelper{
+    CacheEntryRole::kMisc,
+    [](Cache::ObjectPtr value, MemoryAllocator* /*alloc*/) {
+      static_cast<DeleteCounter*>(value)->deleted += 1;
+    }};
+}  // namespace
 
 // Testing calls to CorrectNearOverflow in Release
 TEST_F(ClockCacheTest, ClockCounterOverflowTest) {
   NewShard(6, /*strict_capacity_limit*/ false);
   HandleImpl* h;
-  int deleted = 0;
+  DeleteCounter val;
   UniqueId64x2 hkey = TestHashedKey('x');
-  ASSERT_OK(shard_->Insert(TestKey(hkey), hkey, &deleted, 1,
-                           IncrementIntDeleter, &h, Cache::Priority::HIGH));
+  ASSERT_OK(shard_->Insert(TestKey(hkey), hkey, &val, &kDeleteCounterHelper, 1,
+                           &h, Cache::Priority::HIGH));
 
   // Some large number outstanding
   shard_->TEST_RefN(h, 123456789);
@@ -689,18 +701,18 @@ TEST_F(ClockCacheTest, ClockCounterOverflowTest) {
   // Free all but last 1
   shard_->TEST_ReleaseN(h, 123456789);
   // Still alive
-  ASSERT_EQ(deleted, 0);
+  ASSERT_EQ(val.deleted, 0);
   // Free last ref, which will finalize erasure
   shard_->Release(h);
   // Deleted
-  ASSERT_EQ(deleted, 1);
+  ASSERT_EQ(val.deleted, 1);
 }
 
 // This test is mostly to exercise some corner case logic, by forcing two
 // keys to have the same hash, and more
 TEST_F(ClockCacheTest, CollidingInsertEraseTest) {
   NewShard(6, /*strict_capacity_limit*/ false);
-  int deleted = 0;
+  DeleteCounter val;
   UniqueId64x2 hkey1 = TestHashedKey('x');
   Slice key1 = TestKey(hkey1);
   UniqueId64x2 hkey2 = TestHashedKey('y');
@@ -708,13 +720,13 @@ TEST_F(ClockCacheTest, CollidingInsertEraseTest) {
   UniqueId64x2 hkey3 = TestHashedKey('z');
   Slice key3 = TestKey(hkey3);
   HandleImpl* h1;
-  ASSERT_OK(shard_->Insert(key1, hkey1, &deleted, 1, IncrementIntDeleter, &h1,
+  ASSERT_OK(shard_->Insert(key1, hkey1, &val, &kDeleteCounterHelper, 1, &h1,
                            Cache::Priority::HIGH));
   HandleImpl* h2;
-  ASSERT_OK(shard_->Insert(key2, hkey2, &deleted, 1, IncrementIntDeleter, &h2,
+  ASSERT_OK(shard_->Insert(key2, hkey2, &val, &kDeleteCounterHelper, 1, &h2,
                            Cache::Priority::HIGH));
   HandleImpl* h3;
-  ASSERT_OK(shard_->Insert(key3, hkey3, &deleted, 1, IncrementIntDeleter, &h3,
+  ASSERT_OK(shard_->Insert(key3, hkey3, &val, &kDeleteCounterHelper, 1, &h3,
                            Cache::Priority::HIGH));
 
   // Can repeatedly lookup+release despite the hash collision
@@ -739,7 +751,7 @@ TEST_F(ClockCacheTest, CollidingInsertEraseTest) {
   shard_->Erase(key1, hkey1);
 
   // All still alive
-  ASSERT_EQ(deleted, 0);
+  ASSERT_EQ(val.deleted, 0);
 
   // Invisible to Lookup
   tmp_h = shard_->Lookup(key1, hkey1);
@@ -757,8 +769,8 @@ TEST_F(ClockCacheTest, CollidingInsertEraseTest) {
   }
 
   // Also Insert with invisible entry there
-  ASSERT_OK(shard_->Insert(key1, hkey1, &deleted, 1, IncrementIntDeleter,
-                           nullptr, Cache::Priority::HIGH));
+  ASSERT_OK(shard_->Insert(key1, hkey1, &val, &kDeleteCounterHelper, 1, nullptr,
+                           Cache::Priority::HIGH));
   tmp_h = shard_->Lookup(key1, hkey1);
   // Found but distinct handle
   ASSERT_NE(nullptr, tmp_h);
@@ -766,13 +778,13 @@ TEST_F(ClockCacheTest, CollidingInsertEraseTest) {
   ASSERT_TRUE(shard_->Release(tmp_h, /*erase_if_last_ref*/ true));
 
   // tmp_h deleted
-  ASSERT_EQ(deleted--, 1);
+  ASSERT_EQ(val.deleted--, 1);
 
   // Release last ref on h1 (already invisible)
   ASSERT_TRUE(shard_->Release(h1, /*erase_if_last_ref*/ false));
 
   // h1 deleted
-  ASSERT_EQ(deleted--, 1);
+  ASSERT_EQ(val.deleted--, 1);
   h1 = nullptr;
 
   // Can still find h2, h3
@@ -790,7 +802,7 @@ TEST_F(ClockCacheTest, CollidingInsertEraseTest) {
   ASSERT_FALSE(shard_->Release(h2, /*erase_if_last_ref*/ false));
 
   // h2 still not deleted (unreferenced in cache)
-  ASSERT_EQ(deleted, 0);
+  ASSERT_EQ(val.deleted, 0);
 
   // Can still find it
   tmp_h = shard_->Lookup(key2, hkey2);
@@ -800,7 +812,7 @@ TEST_F(ClockCacheTest, CollidingInsertEraseTest) {
   ASSERT_TRUE(shard_->Release(h2, /*erase_if_last_ref*/ true));
 
   // h2 deleted
-  ASSERT_EQ(deleted--, 1);
+  ASSERT_EQ(val.deleted--, 1);
   tmp_h = shard_->Lookup(key2, hkey2);
   ASSERT_EQ(nullptr, tmp_h);
 
@@ -815,13 +827,13 @@ TEST_F(ClockCacheTest, CollidingInsertEraseTest) {
   ASSERT_FALSE(shard_->Release(h3, /*erase_if_last_ref*/ false));
 
   // h3 still not deleted (unreferenced in cache)
-  ASSERT_EQ(deleted, 0);
+  ASSERT_EQ(val.deleted, 0);
 
   // Explicit erase
   shard_->Erase(key3, hkey3);
 
   // h3 deleted
-  ASSERT_EQ(deleted--, 1);
+  ASSERT_EQ(val.deleted--, 1);
   tmp_h = shard_->Lookup(key3, hkey3);
   ASSERT_EQ(nullptr, tmp_h);
 }
@@ -884,12 +896,12 @@ class TestSecondaryCache : public SecondaryCache {
   using ResultMap = std::unordered_map<std::string, ResultType>;
 
   explicit TestSecondaryCache(size_t capacity)
-      : num_inserts_(0), num_lookups_(0), inject_failure_(false) {
-    cache_ =
-        NewLRUCache(capacity, 0, false, 0.5 /* high_pri_pool_ratio */, nullptr,
-                    kDefaultToAdaptiveMutex, kDontChargeCacheMetadata);
-  }
-  ~TestSecondaryCache() override { cache_.reset(); }
+      : cache_(NewLRUCache(capacity, 0, false, 0.5 /* high_pri_pool_ratio */,
+                           nullptr, kDefaultToAdaptiveMutex,
+                           kDontChargeCacheMetadata)),
+        num_inserts_(0),
+        num_lookups_(0),
+        inject_failure_(false) {}
 
   const char* Name() const override { return "TestSecondaryCache"; }
 
@@ -897,7 +909,7 @@ class TestSecondaryCache : public SecondaryCache {
 
   void ResetInjectFailure() { inject_failure_ = false; }
 
-  Status Insert(const Slice& key, void* value,
+  Status Insert(const Slice& key, Cache::ObjectPtr value,
                 const Cache::CacheItemHelper* helper) override {
     if (inject_failure_) {
       return Status::Corruption("Insertion Data Corrupted");
@@ -916,14 +928,12 @@ class TestSecondaryCache : public SecondaryCache {
       delete[] buf;
       return s;
     }
-    return cache_->Insert(key, buf, size,
-                          [](const Slice& /*key*/, void* val) -> void {
-                            delete[] static_cast<char*>(val);
-                          });
+    return cache_.Insert(key, buf, size);
   }
 
   std::unique_ptr<SecondaryCacheResultHandle> Lookup(
-      const Slice& key, const Cache::CreateCallback& create_cb, bool /*wait*/,
+      const Slice& key, const Cache::CacheItemHelper* helper,
+      Cache::CreateContext* create_context, bool /*wait*/,
       bool /*advise_erase*/, bool& is_in_sec_cache) override {
     std::string key_str = key.ToString();
     TEST_SYNC_POINT_CALLBACK("TestSecondaryCache::Lookup", &key_str);
@@ -939,24 +949,25 @@ class TestSecondaryCache : public SecondaryCache {
       return secondary_handle;
     }
 
-    Cache::Handle* handle = cache_->Lookup(key);
+    TypedHandle* handle = cache_.Lookup(key);
    num_lookups_++;
    if (handle) {
-      void* value = nullptr;
+      Cache::ObjectPtr value = nullptr;
      size_t charge = 0;
      Status s;
      if (type != ResultType::DEFER_AND_FAIL) {
-        char* ptr = (char*)cache_->Value(handle);
+        char* ptr = cache_.Value(handle);
        size_t size = DecodeFixed64(ptr);
        ptr += sizeof(uint64_t);
-        s = create_cb(ptr, size, &value, &charge);
+        s = helper->create_cb(Slice(ptr, size), create_context,
+                              /*alloc*/ nullptr, &value, &charge);
      }
      if (s.ok()) {
        secondary_handle.reset(new TestSecondaryCacheResultHandle(
            cache_.get(), handle, value, charge, type));
        is_in_sec_cache = true;
      } else {
-        cache_->Release(handle);
+        cache_.Release(handle);
      }
    }
    return secondary_handle;
@@ -995,7 +1006,8 @@ class TestSecondaryCache : public SecondaryCache {
 class TestSecondaryCacheResultHandle : public SecondaryCacheResultHandle {
  public:
   TestSecondaryCacheResultHandle(Cache* cache, Cache::Handle* handle,
-                                 void* value, size_t size, ResultType type)
+                                 Cache::ObjectPtr value, size_t size,
+                                 ResultType type)
       : cache_(cache),
         handle_(handle),
         value_(value),
@@ -1012,7 +1024,7 @@ class TestSecondaryCache : public SecondaryCache {
 
   void Wait() override {}
 
-  void* Value() override {
+  Cache::ObjectPtr Value() override {
     assert(is_ready_);
     return value_;
   }
@@ -1024,12 +1036,15 @@ class TestSecondaryCache : public SecondaryCache {
  private:
   Cache* cache_;
   Cache::Handle* handle_;
-  void* value_;
+  Cache::ObjectPtr value_;
   size_t size_;
   bool is_ready_;
 };
 
-  std::shared_ptr<Cache> cache_;
+  using SharedCache =
+      BasicTypedSharedCacheInterface<char[], CacheEntryRole::kMisc>;
+  using TypedHandle = SharedCache::TypedHandle;
+  SharedCache cache_;
   uint32_t num_inserts_;
   uint32_t num_lookups_;
   bool inject_failure_;
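The raw `std::shared_ptr<Cache>` member becomes one of the new typed_cache.h wrappers, which supplies the helper and performs the casts. A usage sketch matching the calls visible above (the wrapper's exact template API lives in typed_cache.h, only the beginning of which appears at the end of this diff; `Demo` is an illustrative name):

```cpp
using SharedCache =
    BasicTypedSharedCacheInterface<char[], CacheEntryRole::kMisc>;
using TypedHandle = SharedCache::TypedHandle;

void Demo(SharedCache& cache, const Slice& key) {
  char* buf = new char[16]();       // ownership passes to the cache on success
  Status s = cache.Insert(key, buf, /*charge=*/16);
  if (s.ok()) {
    TypedHandle* h = cache.Lookup(key);
    if (h != nullptr) {
      char* data = cache.Value(h);  // typed: no reinterpret_cast at call sites
      (void)data;
      cache.Release(h);
    }
  }
}
```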
@@ -1049,7 +1064,8 @@ class DBSecondaryCacheTest : public DBTestBase {
   std::unique_ptr<Env> fault_env_;
 };
 
-class LRUCacheSecondaryCacheTest : public LRUCacheTest {
+class LRUCacheSecondaryCacheTest : public LRUCacheTest,
+                                   public Cache::CreateContext {
  public:
   LRUCacheSecondaryCacheTest() : fail_create_(false) {}
   ~LRUCacheSecondaryCacheTest() {}
@@ -1071,13 +1087,13 @@ class LRUCacheSecondaryCacheTest : public LRUCacheTest {
     size_t size_;
   };
 
-  static size_t SizeCallback(void* obj) {
-    return reinterpret_cast<TestItem*>(obj)->Size();
+  static size_t SizeCallback(Cache::ObjectPtr obj) {
+    return static_cast<TestItem*>(obj)->Size();
   }
 
-  static Status SaveToCallback(void* from_obj, size_t from_offset,
-                               size_t length, void* out) {
-    TestItem* item = reinterpret_cast<TestItem*>(from_obj);
+  static Status SaveToCallback(Cache::ObjectPtr from_obj, size_t from_offset,
+                               size_t length, char* out) {
+    TestItem* item = static_cast<TestItem*>(from_obj);
     char* buf = item->Buf();
     EXPECT_EQ(length, item->Size());
     EXPECT_EQ(from_offset, 0);
@@ -1085,27 +1101,30 @@ class LRUCacheSecondaryCacheTest : public LRUCacheTest {
     return Status::OK();
   }
 
-  static void DeletionCallback(const Slice& /*key*/, void* obj) {
-    delete reinterpret_cast<TestItem*>(obj);
+  static void DeletionCallback(Cache::ObjectPtr obj,
+                               MemoryAllocator* /*alloc*/) {
+    delete static_cast<TestItem*>(obj);
   }
 
   static Cache::CacheItemHelper helper_;
 
-  static Status SaveToCallbackFail(void* /*obj*/, size_t /*offset*/,
-                                   size_t /*size*/, void* /*out*/) {
+  static Status SaveToCallbackFail(Cache::ObjectPtr /*from_obj*/,
+                                   size_t /*from_offset*/, size_t /*length*/,
+                                   char* /*out*/) {
     return Status::NotSupported();
   }
 
   static Cache::CacheItemHelper helper_fail_;
 
-  Cache::CreateCallback test_item_creator = [&](const void* buf, size_t size,
-                                                void** out_obj,
-                                                size_t* charge) -> Status {
-    if (fail_create_) {
+  static Status CreateCallback(const Slice& data, Cache::CreateContext* context,
+                               MemoryAllocator* /*allocator*/,
+                               Cache::ObjectPtr* out_obj, size_t* out_charge) {
+    auto t = static_cast<LRUCacheSecondaryCacheTest*>(context);
+    if (t->fail_create_) {
       return Status::NotSupported();
     }
-    *out_obj = reinterpret_cast<void*>(new TestItem((char*)buf, size));
-    *charge = size;
+    *out_obj = new TestItem(data.data(), data.size());
+    *out_charge = data.size();
     return Status::OK();
   };
@@ -1115,15 +1134,17 @@ class LRUCacheSecondaryCacheTest : public LRUCacheTest {
   bool fail_create_;
 };
 
-Cache::CacheItemHelper LRUCacheSecondaryCacheTest::helper_(
+Cache::CacheItemHelper LRUCacheSecondaryCacheTest::helper_{
+    CacheEntryRole::kMisc, LRUCacheSecondaryCacheTest::DeletionCallback,
     LRUCacheSecondaryCacheTest::SizeCallback,
     LRUCacheSecondaryCacheTest::SaveToCallback,
-    LRUCacheSecondaryCacheTest::DeletionCallback);
+    LRUCacheSecondaryCacheTest::CreateCallback};
 
-Cache::CacheItemHelper LRUCacheSecondaryCacheTest::helper_fail_(
+Cache::CacheItemHelper LRUCacheSecondaryCacheTest::helper_fail_{
+    CacheEntryRole::kMisc, LRUCacheSecondaryCacheTest::DeletionCallback,
     LRUCacheSecondaryCacheTest::SizeCallback,
     LRUCacheSecondaryCacheTest::SaveToCallbackFail,
-    LRUCacheSecondaryCacheTest::DeletionCallback);
+    LRUCacheSecondaryCacheTest::CreateCallback};
 
 TEST_F(LRUCacheSecondaryCacheTest, BasicTest) {
   LRUCacheOptions opts(1024 /* capacity */, 0 /* num_shard_bits */,
@@ -1159,7 +1180,7 @@ TEST_F(LRUCacheSecondaryCacheTest, BasicTest) {
   Cache::Handle* handle;
   handle =
       cache->Lookup(k2.AsSlice(), &LRUCacheSecondaryCacheTest::helper_,
-                    test_item_creator, Cache::Priority::LOW, true, stats.get());
+                    /*context*/ this, Cache::Priority::LOW, true, stats.get());
   ASSERT_NE(handle, nullptr);
   ASSERT_EQ(static_cast<TestItem*>(cache->Value(handle))->Size(), str2.size());
   cache->Release(handle);
@@ -1167,7 +1188,7 @@ TEST_F(LRUCacheSecondaryCacheTest, BasicTest) {
   // This lookup should promote k1 and demote k2
   handle =
       cache->Lookup(k1.AsSlice(), &LRUCacheSecondaryCacheTest::helper_,
-                    test_item_creator, Cache::Priority::LOW, true, stats.get());
+                    /*context*/ this, Cache::Priority::LOW, true, stats.get());
   ASSERT_NE(handle, nullptr);
   ASSERT_EQ(static_cast<TestItem*>(cache->Value(handle))->Size(), str1.size());
   cache->Release(handle);
@@ -1175,7 +1196,7 @@ TEST_F(LRUCacheSecondaryCacheTest, BasicTest) {
   // This lookup should promote k3 and demote k1
   handle =
       cache->Lookup(k3.AsSlice(), &LRUCacheSecondaryCacheTest::helper_,
-                    test_item_creator, Cache::Priority::LOW, true, stats.get());
+                    /*context*/ this, Cache::Priority::LOW, true, stats.get());
   ASSERT_NE(handle, nullptr);
   ASSERT_EQ(static_cast<TestItem*>(cache->Value(handle))->Size(), str3.size());
   cache->Release(handle);
@@ -1207,18 +1228,19 @@ TEST_F(LRUCacheSecondaryCacheTest, BasicFailTest) {
   Random rnd(301);
   std::string str1 = rnd.RandomString(1020);
   auto item1 = std::make_unique<TestItem>(str1.data(), str1.length());
-  ASSERT_TRUE(cache->Insert(k1.AsSlice(), item1.get(), nullptr, str1.length())
-                  .IsInvalidArgument());
+  // NOTE: changed to assert helper != nullptr for efficiency / code size
+  // ASSERT_TRUE(cache->Insert(k1.AsSlice(), item1.get(), nullptr,
+  //                           str1.length()).IsInvalidArgument());
   ASSERT_OK(cache->Insert(k1.AsSlice(), item1.get(),
                           &LRUCacheSecondaryCacheTest::helper_, str1.length()));
   item1.release();  // Appease clang-analyze "potential memory leak"
 
   Cache::Handle* handle;
-  handle = cache->Lookup(k2.AsSlice(), nullptr, test_item_creator,
+  handle = cache->Lookup(k2.AsSlice(), nullptr, /*context*/ this,
                          Cache::Priority::LOW, true);
   ASSERT_EQ(handle, nullptr);
   handle = cache->Lookup(k2.AsSlice(), &LRUCacheSecondaryCacheTest::helper_,
-                         test_item_creator, Cache::Priority::LOW, false);
+                         /*context*/ this, Cache::Priority::LOW, false);
   ASSERT_EQ(handle, nullptr);
 
   cache.reset();
@@ -1256,18 +1278,18 @@ TEST_F(LRUCacheSecondaryCacheTest, SaveFailTest) {
   Cache::Handle* handle;
   handle =
       cache->Lookup(k2.AsSlice(), &LRUCacheSecondaryCacheTest::helper_fail_,
-                    test_item_creator, Cache::Priority::LOW, true);
+                    /*context*/ this, Cache::Priority::LOW, true);
   ASSERT_NE(handle, nullptr);
   cache->Release(handle);
   // This lookup should fail, since k1 demotion would have failed
   handle =
       cache->Lookup(k1.AsSlice(), &LRUCacheSecondaryCacheTest::helper_fail_,
-                    test_item_creator, Cache::Priority::LOW, true);
+                    /*context*/ this, Cache::Priority::LOW, true);
   ASSERT_EQ(handle, nullptr);
   // Since k1 didn't get promoted, k2 should still be in cache
   handle =
       cache->Lookup(k2.AsSlice(), &LRUCacheSecondaryCacheTest::helper_fail_,
-                    test_item_creator, Cache::Priority::LOW, true);
+                    /*context*/ this, Cache::Priority::LOW, true);
   ASSERT_NE(handle, nullptr);
   cache->Release(handle);
   ASSERT_EQ(secondary_cache->num_inserts(), 1u);
@@ -1304,16 +1326,16 @@ TEST_F(LRUCacheSecondaryCacheTest, CreateFailTest) {
   Cache::Handle* handle;
   SetFailCreate(true);
   handle = cache->Lookup(k2.AsSlice(), &LRUCacheSecondaryCacheTest::helper_,
-                         test_item_creator, Cache::Priority::LOW, true);
+                         /*context*/ this, Cache::Priority::LOW, true);
   ASSERT_NE(handle, nullptr);
   cache->Release(handle);
   // This lookup should fail, since k1 creation would have failed
   handle = cache->Lookup(k1.AsSlice(), &LRUCacheSecondaryCacheTest::helper_,
-                         test_item_creator, Cache::Priority::LOW, true);
+                         /*context*/ this, Cache::Priority::LOW, true);
   ASSERT_EQ(handle, nullptr);
   // Since k1 didn't get promoted, k2 should still be in cache
   handle = cache->Lookup(k2.AsSlice(), &LRUCacheSecondaryCacheTest::helper_,
-                         test_item_creator, Cache::Priority::LOW, true);
+                         /*context*/ this, Cache::Priority::LOW, true);
   ASSERT_NE(handle, nullptr);
   cache->Release(handle);
   ASSERT_EQ(secondary_cache->num_inserts(), 1u);
@@ -1349,19 +1371,19 @@ TEST_F(LRUCacheSecondaryCacheTest, FullCapacityTest) {
 
   Cache::Handle* handle;
   handle = cache->Lookup(k2.AsSlice(), &LRUCacheSecondaryCacheTest::helper_,
-                         test_item_creator, Cache::Priority::LOW, true);
+                         /*context*/ this, Cache::Priority::LOW, true);
   ASSERT_NE(handle, nullptr);
   // k1 promotion should fail due to the block cache being at capacity,
   // but the lookup should still succeed
   Cache::Handle* handle2;
   handle2 = cache->Lookup(k1.AsSlice(), &LRUCacheSecondaryCacheTest::helper_,
-                          test_item_creator, Cache::Priority::LOW, true);
+                          /*context*/ this, Cache::Priority::LOW, true);
   ASSERT_NE(handle2, nullptr);
   // Since k1 didn't get inserted, k2 should still be in cache
   cache->Release(handle);
   cache->Release(handle2);
   handle = cache->Lookup(k2.AsSlice(), &LRUCacheSecondaryCacheTest::helper_,
-                         test_item_creator, Cache::Priority::LOW, true);
+                         /*context*/ this, Cache::Priority::LOW, true);
   ASSERT_NE(handle, nullptr);
   cache->Release(handle);
   ASSERT_EQ(secondary_cache->num_inserts(), 1u);
@@ -1838,7 +1860,7 @@ TEST_F(LRUCacheSecondaryCacheTest, BasicWaitAllTest) {
   for (int i = 0; i < 6; ++i) {
     results.emplace_back(cache->Lookup(
         ock.WithOffset(i).AsSlice(), &LRUCacheSecondaryCacheTest::helper_,
-        test_item_creator, Cache::Priority::LOW, false));
+        /*context*/ this, Cache::Priority::LOW, false));
   }
   cache->WaitAll(results);
   for (int i = 0; i < 6; ++i) {
@@ -1964,26 +1986,18 @@ class LRUCacheWithStat : public LRUCache {
   }
   ~LRUCacheWithStat() {}
 
-  Status Insert(const Slice& key, void* value, size_t charge, DeleterFn deleter,
-                Handle** handle, Priority priority) override {
-    insert_count_++;
-    return LRUCache::Insert(key, value, charge, deleter, handle, priority);
-  }
-  Status Insert(const Slice& key, void* value, const CacheItemHelper* helper,
-                size_t charge, Handle** handle = nullptr,
+  Status Insert(const Slice& key, Cache::ObjectPtr value,
+                const CacheItemHelper* helper, size_t charge,
+                Handle** handle = nullptr,
                 Priority priority = Priority::LOW) override {
     insert_count_++;
     return LRUCache::Insert(key, value, helper, charge, handle, priority);
   }
-  Handle* Lookup(const Slice& key, Statistics* stats) override {
-    lookup_count_++;
-    return LRUCache::Lookup(key, stats);
-  }
   Handle* Lookup(const Slice& key, const CacheItemHelper* helper,
-                 const CreateCallback& create_cb, Priority priority, bool wait,
+                 CreateContext* create_context, Priority priority, bool wait,
                  Statistics* stats = nullptr) override {
     lookup_count_++;
-    return LRUCache::Lookup(key, helper, create_cb, priority, wait, stats);
+    return LRUCache::Lookup(key, helper, create_context, priority, wait, stats);
   }
 
   uint32_t GetInsertCount() { return insert_count_; }
@@ -11,20 +11,29 @@ namespace ROCKSDB_NAMESPACE {
 
 namespace {
 
-size_t SliceSize(void* obj) { return static_cast<Slice*>(obj)->size(); }
+void NoopDelete(Cache::ObjectPtr, MemoryAllocator*) {}
 
-Status SliceSaveTo(void* from_obj, size_t from_offset, size_t length,
-                   void* out) {
+size_t SliceSize(Cache::ObjectPtr obj) {
+  return static_cast<Slice*>(obj)->size();
+}
+
+Status SliceSaveTo(Cache::ObjectPtr from_obj, size_t from_offset, size_t length,
+                   char* out) {
   const Slice& slice = *static_cast<Slice*>(from_obj);
   std::memcpy(out, slice.data() + from_offset, length);
   return Status::OK();
 }
 
+Status FailCreate(const Slice&, Cache::CreateContext*, MemoryAllocator*,
+                  Cache::ObjectPtr*, size_t*) {
+  return Status::NotSupported("Only for dumping data into SecondaryCache");
+}
+
 }  // namespace
 
 Status SecondaryCache::InsertSaved(const Slice& key, const Slice& saved) {
-  static Cache::CacheItemHelper helper{
-      &SliceSize, &SliceSaveTo, GetNoopDeleterForRole<CacheEntryRole::kMisc>()};
+  static Cache::CacheItemHelper helper{CacheEntryRole::kMisc, &NoopDelete,
+                                       &SliceSize, &SliceSaveTo, &FailCreate};
   // NOTE: depends on Insert() being synchronous, not keeping pointer `&saved`
   return Insert(key, const_cast<Slice*>(&saved), &helper);
 }
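`InsertSaved()` writes already-serialized bytes into the secondary cache using a file-local helper whose `create_cb` intentionally fails: these entries are only dumped downward, never re-created into the primary cache. A sketch of a caller, with `DumpOneBlock` as an illustrative name:

```cpp
Status DumpOneBlock(SecondaryCache& secondary, const Slice& key,
                    const Slice& serialized_block) {
  // Safe only because Insert() is synchronous (see the NOTE above):
  // SliceSaveTo copies out of `serialized_block` before this returns.
  return secondary.InsertSaved(key, serialized_block);
}
```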
@@ -49,16 +49,12 @@ class CacheShardBase {
     HashCref GetHash() const;
     ...
   };
-  Status Insert(const Slice& key, HashCref hash, void* value, size_t charge,
-                DeleterFn deleter, HandleImpl** handle,
-                Cache::Priority priority) = 0;
-  Status Insert(const Slice& key, HashCref hash, void* value,
+  Status Insert(const Slice& key, HashCref hash, Cache::ObjectPtr value,
                 const Cache::CacheItemHelper* helper, size_t charge,
                 HandleImpl** handle, Cache::Priority priority) = 0;
-  HandleImpl* Lookup(const Slice& key, HashCref hash) = 0;
   HandleImpl* Lookup(const Slice& key, HashCref hash,
                      const Cache::CacheItemHelper* helper,
-                     const Cache::CreateCallback& create_cb,
+                     Cache::CreateContext* create_context,
                      Cache::Priority priority, bool wait,
                      Statistics* stats) = 0;
   bool Release(HandleImpl* handle, bool useful, bool erase_if_last_ref) = 0;
@@ -77,8 +73,9 @@ class CacheShardBase {
   // *state == 0 and implementation sets *state = SIZE_MAX to indicate
   // completion.
   void ApplyToSomeEntries(
-      const std::function<void(const Slice& key, void* value, size_t charge,
-                               DeleterFn deleter)>& callback,
+      const std::function<void(const Slice& key, ObjectPtr value,
+                               size_t charge,
+                               const Cache::CacheItemHelper* helper)>& callback,
       size_t average_entries_per_lock, size_t* state) = 0;
   void EraseUnRefEntries() = 0;
 */
@@ -172,36 +169,24 @@ class ShardedCache : public ShardedCacheBase {
         [s_c_l](CacheShard* cs) { cs->SetStrictCapacityLimit(s_c_l); });
   }
 
-  Status Insert(const Slice& key, void* value, size_t charge, DeleterFn deleter,
-                Handle** handle, Priority priority) override {
-    HashVal hash = CacheShard::ComputeHash(key);
-    auto h_out = reinterpret_cast<HandleImpl**>(handle);
-    return GetShard(hash).Insert(key, hash, value, charge, deleter, h_out,
-                                 priority);
-  }
-  Status Insert(const Slice& key, void* value, const CacheItemHelper* helper,
-                size_t charge, Handle** handle = nullptr,
+  Status Insert(const Slice& key, ObjectPtr value,
+                const CacheItemHelper* helper, size_t charge,
+                Handle** handle = nullptr,
                 Priority priority = Priority::LOW) override {
-    if (!helper) {
-      return Status::InvalidArgument();
-    }
+    assert(helper);
     HashVal hash = CacheShard::ComputeHash(key);
     auto h_out = reinterpret_cast<HandleImpl**>(handle);
     return GetShard(hash).Insert(key, hash, value, helper, charge, h_out,
                                  priority);
   }
 
-  Handle* Lookup(const Slice& key, Statistics* /*stats*/) override {
-    HashVal hash = CacheShard::ComputeHash(key);
-    HandleImpl* result = GetShard(hash).Lookup(key, hash);
-    return reinterpret_cast<Handle*>(result);
-  }
-  Handle* Lookup(const Slice& key, const CacheItemHelper* helper,
-                 const CreateCallback& create_cb, Priority priority, bool wait,
+  Handle* Lookup(const Slice& key, const CacheItemHelper* helper = nullptr,
+                 CreateContext* create_context = nullptr,
+                 Priority priority = Priority::LOW, bool wait = true,
                  Statistics* stats = nullptr) override {
     HashVal hash = CacheShard::ComputeHash(key);
-    HandleImpl* result = GetShard(hash).Lookup(key, hash, helper, create_cb,
-                                               priority, wait, stats);
+    HandleImpl* result = GetShard(hash).Lookup(
        key, hash, helper, create_context, priority, wait, stats);
     return reinterpret_cast<Handle*>(result);
   }
 
|
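For direct Cache users, the migration implied by the hunk above is mechanical: exactly one Insert and one Lookup overload remain. A minimal sketch, illustrative only and not part of the patch (`UseCache`, `obj`, `charge`, `helper`, and `MyDeleter` are hypothetical caller-owned names):

    // Illustrative migration for a direct Cache user (hypothetical names).
    Status UseCache(Cache* cache, const Slice& key, Cache::ObjectPtr obj,
                    size_t charge, const Cache::CacheItemHelper& helper) {
      // Before: cache->Insert(key, obj, charge, &MyDeleter, &handle, priority);
      Status s = cache->Insert(key, obj, &helper, charge);
      // Before: cache->Lookup(key, stats); plain lookups now rely on the
      // defaulted helper/context/priority/wait arguments.
      if (Cache::Handle* h = cache->Lookup(key)) {
        cache->Release(h);
      }
      return s;
    }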
@@ -244,8 +229,8 @@ class ShardedCache : public ShardedCacheBase {
     return SumOverShards2(&CacheShard::GetTableAddressCount);
   }
   void ApplyToAllEntries(
-      const std::function<void(const Slice& key, void* value, size_t charge,
-                               DeleterFn deleter)>& callback,
+      const std::function<void(const Slice& key, ObjectPtr value, size_t charge,
+                               const CacheItemHelper* helper)>& callback,
       const ApplyToAllEntriesOptions& opts) override {
     uint32_t num_shards = GetNumShards();
     // Iterate over part of each shard, rotating between shards, to
@@ -0,0 +1,339 @@
+// Copyright (c) Meta Platforms, Inc. and affiliates.
+// This source code is licensed under both the GPLv2 (found in the
+// COPYING file in the root directory) and Apache 2.0 License
+// (found in the LICENSE.Apache file in the root directory).
+
+// APIs for accessing Cache in a type-safe and convenient way. Cache is kept
+// at a low, thin level of abstraction so that different implementations can
+// be plugged in, but these wrappers provide clean, convenient access to the
+// most common operations.
+//
+// A number of template classes are needed for sharing common structure. The
+// key classes are these:
+//
+// * PlaceholderCacheInterface - Used for making cache reservations, with
+// entries that have a charge but no value.
+// * BasicTypedCacheInterface<TValue> - Used for primary cache storage of
+// objects of type TValue.
+// * FullTypedCacheInterface<TValue, TCreateContext> - Used for secondary cache
+// compatible storage of objects of type TValue.
+// * For each of these, there's a "Shared" version
+// (e.g. FullTypedSharedCacheInterface) that holds a shared_ptr to the Cache,
+// rather than assuming external ownership by holding only a raw `Cache*`.
+
+#pragma once
+
+#include <algorithm>
+#include <cstdint>
+#include <memory>
+#include <type_traits>
+
+#include "cache/cache_helpers.h"
+#include "rocksdb/advanced_options.h"
+#include "rocksdb/cache.h"
+
+namespace ROCKSDB_NAMESPACE {
+
+// For future consideration:
+// * Pass in value to Insert with std::unique_ptr& to simplify ownership
+// transfer logic in callers
+// * Make key type a template parameter (e.g. useful for table cache)
+// * Closer integration with CacheHandleGuard (opt-in, so not always
+// paying the extra overhead)
+
+#define CACHE_TYPE_DEFS()                     \
+  using Priority = Cache::Priority;           \
+  using Handle = Cache::Handle;               \
+  using ObjectPtr = Cache::ObjectPtr;         \
+  using CreateContext = Cache::CreateContext; \
+  using CacheItemHelper = Cache::CacheItemHelper /* caller ; */
+
+template <typename CachePtr>
+class BaseCacheInterface {
+ public:
+  CACHE_TYPE_DEFS();
+
+  /*implicit*/ BaseCacheInterface(CachePtr cache) : cache_(std::move(cache)) {}
+
+  inline void Release(Handle* handle) { cache_->Release(handle); }
+
+  inline void ReleaseAndEraseIfLastRef(Handle* handle) {
+    cache_->Release(handle, /*erase_if_last_ref*/ true);
+  }
+
+  inline void RegisterReleaseAsCleanup(Handle* handle, Cleanable& cleanable) {
+    cleanable.RegisterCleanup(&ReleaseCacheHandleCleanup, get(), handle);
+  }
+
+  inline Cache* get() const { return &*cache_; }
+
+  explicit inline operator bool() const noexcept { return cache_ != nullptr; }
+
+ protected:
+  CachePtr cache_;
+};
+
+// PlaceholderCacheInterface - Used for making cache reservations, with
+// entries that have a charge but no value. CacheEntryRole is required as
+// a template parameter.
+template <CacheEntryRole kRole, typename CachePtr = Cache*>
+class PlaceholderCacheInterface : public BaseCacheInterface<CachePtr> {
+ public:
+  CACHE_TYPE_DEFS();
+  using BaseCacheInterface<CachePtr>::BaseCacheInterface;
+
+  inline Status Insert(const Slice& key, size_t charge, Handle** handle) {
+    return this->cache_->Insert(key, /*value=*/nullptr, &kHelper, charge,
+                                handle);
+  }
+
+  static constexpr Cache::CacheItemHelper kHelper{kRole};
+};
+
+template <CacheEntryRole kRole>
+using PlaceholderSharedCacheInterface =
+    PlaceholderCacheInterface<kRole, std::shared_ptr<Cache>>;
+
+template <class TValue>
+class BasicTypedCacheHelperFns {
+ public:
+  CACHE_TYPE_DEFS();
+  // E.g. char* for char[]
+  using TValuePtr = std::remove_extent_t<TValue>*;
+
+ protected:
+  inline static ObjectPtr UpCastValue(TValuePtr value) { return value; }
+  inline static TValuePtr DownCastValue(ObjectPtr value) {
+    return static_cast<TValuePtr>(value);
+  }
+
+  static void Delete(ObjectPtr value, MemoryAllocator* allocator) {
+    // FIXME: Currently, no callers actually allocate the ObjectPtr objects
+    // using the custom allocator, just subobjects that keep a reference to
+    // the allocator themselves (with CacheAllocationPtr).
+    if (/*DISABLED*/ false && allocator) {
+      if constexpr (std::is_destructible_v<TValue>) {
+        DownCastValue(value)->~TValue();
+      }
+      allocator->Deallocate(value);
+    } else {
+      // Like delete but properly handles TValue=char[] etc.
+      std::default_delete<TValue>{}(DownCastValue(value));
+    }
+  }
+};
+
+// In its own class to try to minimize the number of distinct CacheItemHelper
+// instances (e.g. don't vary by CachePtr)
+template <class TValue, CacheEntryRole kRole>
+class BasicTypedCacheHelper : public BasicTypedCacheHelperFns<TValue> {
+ public:
+  static constexpr Cache::CacheItemHelper kBasicHelper{
+      kRole, &BasicTypedCacheHelper::Delete};
+};
+
+// BasicTypedCacheInterface - Used for primary cache storage of objects of
+// type TValue, which can be cleaned up with std::default_delete<TValue>. The
+// role is provided by TValue::kCacheEntryRole or given in an optional
+// template parameter.
+template <class TValue, CacheEntryRole kRole = TValue::kCacheEntryRole,
+          typename CachePtr = Cache*>
+class BasicTypedCacheInterface : public BaseCacheInterface<CachePtr>,
+                                 public BasicTypedCacheHelper<TValue, kRole> {
+ public:
+  CACHE_TYPE_DEFS();
+  using typename BasicTypedCacheHelperFns<TValue>::TValuePtr;
+  struct TypedHandle : public Handle {};
+  using BasicTypedCacheHelper<TValue, kRole>::kBasicHelper;
+  // ctor
+  using BaseCacheInterface<CachePtr>::BaseCacheInterface;
+
+  inline Status Insert(const Slice& key, TValuePtr value, size_t charge,
+                       TypedHandle** handle = nullptr,
+                       Priority priority = Priority::LOW) {
+    auto untyped_handle = reinterpret_cast<Handle**>(handle);
+    return this->cache_->Insert(
+        key, BasicTypedCacheHelperFns<TValue>::UpCastValue(value),
+        &kBasicHelper, charge, untyped_handle, priority);
+  }
+
+  inline TypedHandle* Lookup(const Slice& key, Statistics* stats = nullptr) {
+    return reinterpret_cast<TypedHandle*>(
+        this->cache_->BasicLookup(key, stats));
+  }
+
+  inline CacheHandleGuard<TValue> Guard(TypedHandle* handle) {
+    if (handle) {
+      return CacheHandleGuard<TValue>(&*this->cache_, handle);
+    } else {
+      return {};
+    }
+  }
+
+  inline std::shared_ptr<TValue> SharedGuard(TypedHandle* handle) {
+    if (handle) {
+      return MakeSharedCacheHandleGuard<TValue>(&*this->cache_, handle);
+    } else {
+      return {};
+    }
+  }
+
+  inline TValuePtr Value(TypedHandle* handle) {
+    return BasicTypedCacheHelperFns<TValue>::DownCastValue(
+        this->cache_->Value(handle));
+  }
+};
+
+// BasicTypedSharedCacheInterface - Like BasicTypedCacheInterface but with a
+// shared_ptr<Cache> for keeping Cache alive.
+template <class TValue, CacheEntryRole kRole = TValue::kCacheEntryRole>
+using BasicTypedSharedCacheInterface =
+    BasicTypedCacheInterface<TValue, kRole, std::shared_ptr<Cache>>;
+
+// TValue must implement ContentSlice() and ~TValue
+// TCreateContext must implement Create(std::unique_ptr<TValue>*, ...)
+template <class TValue, class TCreateContext>
+class FullTypedCacheHelperFns : public BasicTypedCacheHelperFns<TValue> {
+ public:
+  CACHE_TYPE_DEFS();
+
+ protected:
+  using typename BasicTypedCacheHelperFns<TValue>::TValuePtr;
+  using BasicTypedCacheHelperFns<TValue>::DownCastValue;
+  using BasicTypedCacheHelperFns<TValue>::UpCastValue;
+
+  static size_t Size(ObjectPtr v) {
+    TValuePtr value = DownCastValue(v);
+    auto slice = value->ContentSlice();
+    return slice.size();
+  }
+
+  static Status SaveTo(ObjectPtr v, size_t from_offset, size_t length,
+                       char* out) {
+    TValuePtr value = DownCastValue(v);
+    auto slice = value->ContentSlice();
+    assert(from_offset < slice.size());
+    assert(from_offset + length <= slice.size());
+    std::copy_n(slice.data() + from_offset, length, out);
+    return Status::OK();
+  }
+
+  static Status Create(const Slice& data, CreateContext* context,
+                       MemoryAllocator* allocator, ObjectPtr* out_obj,
+                       size_t* out_charge) {
+    std::unique_ptr<TValue> value = nullptr;
+    if constexpr (sizeof(TCreateContext) > 0) {
+      TCreateContext* tcontext = static_cast<TCreateContext*>(context);
+      tcontext->Create(&value, out_charge, data, allocator);
+    } else {
+      TCreateContext::Create(&value, out_charge, data, allocator);
+    }
+    *out_obj = UpCastValue(value.release());
+    return Status::OK();
+  }
+};
+
+// In its own class to try to minimize the number of distinct CacheItemHelper
+// instances (e.g. don't vary by CachePtr)
+template <class TValue, class TCreateContext, CacheEntryRole kRole>
+class FullTypedCacheHelper
+    : public FullTypedCacheHelperFns<TValue, TCreateContext> {
+ public:
+  static constexpr Cache::CacheItemHelper kFullHelper{
+      kRole, &FullTypedCacheHelper::Delete, &FullTypedCacheHelper::Size,
+      &FullTypedCacheHelper::SaveTo, &FullTypedCacheHelper::Create};
+};
+
+// FullTypedCacheInterface - Used for secondary cache compatible storage of
+// objects of type TValue. In addition to BasicTypedCacheInterface constraints,
+// we require TValue::ContentSlice() to return persistable data. This
+// simplifies usage for the normal case of simple secondary cache compatibility
+// (can give you a Slice to the data already in memory). In addition to
+// TCreateContext performing the role of Cache::CreateContext, it is also
+// expected to provide a function Create(std::unique_ptr<TValue>* value,
+// size_t* out_charge, const Slice& data, MemoryAllocator* allocator) for
+// creating new TValue.
+template <class TValue, class TCreateContext,
+          CacheEntryRole kRole = TValue::kCacheEntryRole,
+          typename CachePtr = Cache*>
+class FullTypedCacheInterface
+    : public BasicTypedCacheInterface<TValue, kRole, CachePtr>,
+      public FullTypedCacheHelper<TValue, TCreateContext, kRole> {
+ public:
+  CACHE_TYPE_DEFS();
+  using typename BasicTypedCacheInterface<TValue, kRole, CachePtr>::TypedHandle;
+  using typename BasicTypedCacheHelperFns<TValue>::TValuePtr;
+  using BasicTypedCacheHelper<TValue, kRole>::kBasicHelper;
+  using FullTypedCacheHelper<TValue, TCreateContext, kRole>::kFullHelper;
+  using BasicTypedCacheHelperFns<TValue>::UpCastValue;
+  using BasicTypedCacheHelperFns<TValue>::DownCastValue;
+  // ctor
+  using BasicTypedCacheInterface<TValue, kRole,
+                                 CachePtr>::BasicTypedCacheInterface;
+
+  // Insert with SecondaryCache compatibility (subject to CacheTier).
+  // (Basic Insert() also inherited.)
+  inline Status InsertFull(
+      const Slice& key, TValuePtr value, size_t charge,
+      TypedHandle** handle = nullptr, Priority priority = Priority::LOW,
+      CacheTier lowest_used_cache_tier = CacheTier::kNonVolatileBlockTier) {
+    auto untyped_handle = reinterpret_cast<Handle**>(handle);
+    auto helper = lowest_used_cache_tier == CacheTier::kNonVolatileBlockTier
+                      ? &kFullHelper
+                      : &kBasicHelper;
+    return this->cache_->Insert(key, UpCastValue(value), helper, charge,
+                                untyped_handle, priority);
+  }
+
+  // Like SecondaryCache::InsertSaved, with SecondaryCache compatibility
+  // (subject to CacheTier).
+  inline Status InsertSaved(
+      const Slice& key, const Slice& data, TCreateContext* create_context,
+      Priority priority = Priority::LOW,
+      CacheTier lowest_used_cache_tier = CacheTier::kNonVolatileBlockTier,
+      size_t* out_charge = nullptr) {
+    ObjectPtr value;
+    size_t charge;
+    Status st = kFullHelper.create_cb(data, create_context,
+                                      this->cache_->memory_allocator(), &value,
+                                      &charge);
+    if (out_charge) {
+      *out_charge = charge;
+    }
+    if (st.ok()) {
+      st = InsertFull(key, DownCastValue(value), charge, nullptr /*handle*/,
+                      priority, lowest_used_cache_tier);
+    } else {
+      kFullHelper.del_cb(value, this->cache_->memory_allocator());
+    }
+    return st;
+  }
+
+  // Lookup with SecondaryCache support (subject to CacheTier).
+  // (Basic Lookup() also inherited.)
+  inline TypedHandle* LookupFull(
+      const Slice& key, TCreateContext* create_context = nullptr,
+      Priority priority = Priority::LOW, bool wait = true,
+      Statistics* stats = nullptr,
+      CacheTier lowest_used_cache_tier = CacheTier::kNonVolatileBlockTier) {
+    if (lowest_used_cache_tier == CacheTier::kNonVolatileBlockTier) {
+      return reinterpret_cast<TypedHandle*>(this->cache_->Lookup(
+          key, &kFullHelper, create_context, priority, wait, stats));
+    } else {
+      return BasicTypedCacheInterface<TValue, kRole, CachePtr>::Lookup(key,
+                                                                       stats);
+    }
+  }
+};
+
+// FullTypedSharedCacheInterface - Like FullTypedCacheInterface but with a
+// shared_ptr<Cache> for keeping Cache alive.
+template <class TValue, class TCreateContext,
+          CacheEntryRole kRole = TValue::kCacheEntryRole>
+using FullTypedSharedCacheInterface =
+    FullTypedCacheInterface<TValue, TCreateContext, kRole,
+                            std::shared_ptr<Cache>>;
+
+#undef CACHE_TYPE_DEFS
+
+}  // namespace ROCKSDB_NAMESPACE
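To illustrate the intended call sites of the new header, a minimal usage sketch of the basic typed interface (illustrative only, not part of the patch; `Foo` and `TypedCacheExample` are hypothetical):

    // Hypothetical value type; kCacheEntryRole supplies the role.
    struct Foo {
      static constexpr CacheEntryRole kCacheEntryRole = CacheEntryRole::kMisc;
      int payload = 0;
    };

    void TypedCacheExample(Cache* cache) {
      BasicTypedCacheInterface<Foo> typed{cache};
      // Entry is cleaned up with std::default_delete<Foo> via kBasicHelper.
      Status s = typed.Insert("foo_key", new Foo{42}, /*charge=*/sizeof(Foo));
      assert(s.ok());
      if (auto* handle = typed.Lookup("foo_key")) {
        Foo* foo = typed.Value(handle);  // typed access, no casts at call site
        assert(foo->payload == 42);
        typed.Release(handle);
      }
    }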
@@ -13,12 +13,6 @@
 namespace ROCKSDB_NAMESPACE {
 
-std::unique_ptr<BlobContents> BlobContents::Create(
-    CacheAllocationPtr&& allocation, size_t size) {
-  return std::unique_ptr<BlobContents>(
-      new BlobContents(std::move(allocation), size));
-}
-
 size_t BlobContents::ApproximateMemoryUsage() const {
   size_t usage = 0;
 
@@ -45,46 +39,4 @@ size_t BlobContents::ApproximateMemoryUsage() const {
   return usage;
 }
 
-size_t BlobContents::SizeCallback(void* obj) {
-  assert(obj);
-
-  return static_cast<const BlobContents*>(obj)->size();
-}
-
-Status BlobContents::SaveToCallback(void* from_obj, size_t from_offset,
-                                    size_t length, void* out) {
-  assert(from_obj);
-
-  const BlobContents* buf = static_cast<const BlobContents*>(from_obj);
-  assert(buf->size() >= from_offset + length);
-
-  memcpy(out, buf->data().data() + from_offset, length);
-
-  return Status::OK();
-}
-
-Cache::CacheItemHelper* BlobContents::GetCacheItemHelper() {
-  static Cache::CacheItemHelper cache_helper(
-      &SizeCallback, &SaveToCallback,
-      GetCacheEntryDeleterForRole<BlobContents, CacheEntryRole::kBlobValue>());
-
-  return &cache_helper;
-}
-
-Status BlobContents::CreateCallback(CacheAllocationPtr&& allocation,
-                                    const void* buf, size_t size,
-                                    void** out_obj, size_t* charge) {
-  assert(allocation);
-
-  memcpy(allocation.get(), buf, size);
-
-  std::unique_ptr<BlobContents> obj = Create(std::move(allocation), size);
-  BlobContents* const contents = obj.release();
-
-  *out_obj = contents;
-  *charge = contents->ApproximateMemoryUsage();
-
-  return Status::OK();
-}
-
 }  // namespace ROCKSDB_NAMESPACE
@@ -18,8 +18,8 @@ namespace ROCKSDB_NAMESPACE {
 // A class representing a single uncompressed value read from a blob file.
 class BlobContents {
  public:
-  static std::unique_ptr<BlobContents> Create(CacheAllocationPtr&& allocation,
-                                              size_t size);
+  BlobContents(CacheAllocationPtr&& allocation, size_t size)
+      : allocation_(std::move(allocation)), data_(allocation_.get(), size) {}
 
   BlobContents(const BlobContents&) = delete;
   BlobContents& operator=(const BlobContents&) = delete;
@@ -34,23 +34,26 @@ class BlobContents {
 
   size_t ApproximateMemoryUsage() const;
 
-  // Callbacks for secondary cache
-  static size_t SizeCallback(void* obj);
-
-  static Status SaveToCallback(void* from_obj, size_t from_offset,
-                               size_t length, void* out);
-
-  static Cache::CacheItemHelper* GetCacheItemHelper();
-
-  static Status CreateCallback(CacheAllocationPtr&& allocation, const void* buf,
-                               size_t size, void** out_obj, size_t* charge);
+  // For TypedCacheInterface
+  const Slice& ContentSlice() const { return data_; }
+  static constexpr CacheEntryRole kCacheEntryRole = CacheEntryRole::kBlobValue;
 
  private:
-  BlobContents(CacheAllocationPtr&& allocation, size_t size)
-      : allocation_(std::move(allocation)), data_(allocation_.get(), size) {}
-
   CacheAllocationPtr allocation_;
   Slice data_;
 };
 
+class BlobContentsCreator : public Cache::CreateContext {
+ public:
+  static void Create(std::unique_ptr<BlobContents>* out, size_t* out_charge,
+                     const Slice& contents, MemoryAllocator* alloc) {
+    auto raw = new BlobContents(AllocateAndCopyBlock(contents, alloc),
+                                contents.size());
+    out->reset(raw);
+    if (out_charge) {
+      *out_charge = raw->ApproximateMemoryUsage();
+    }
+  }
+};
+
 }  // namespace ROCKSDB_NAMESPACE
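BlobContentsCreator satisfies the TCreateContext contract from typed_cache.h, so FullTypedCacheHelperFns::Create can route secondary-cache loads through it. A sketch of that contract (hypothetical free function, not part of the patch; `data` is serialized blob bytes and `alloc` may be null):

    void CreateExample(const Slice& data, MemoryAllocator* alloc) {
      std::unique_ptr<BlobContents> value;
      size_t charge = 0;
      // Copies `data` into a self-contained, cache-ready BlobContents.
      BlobContentsCreator::Create(&value, &charge, data, alloc);
      assert(value->ContentSlice() == data);
      assert(charge == value->ApproximateMemoryUsage());
    }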
@@ -13,6 +13,7 @@
 #include "db/blob/blob_index.h"
 #include "db/blob/blob_log_format.h"
 #include "db/blob/blob_log_writer.h"
+#include "db/blob/blob_source.h"
 #include "db/event_helpers.h"
 #include "db/version_set.h"
 #include "file/filename.h"
@@ -393,7 +394,7 @@ Status BlobFileBuilder::PutBlobIntoCacheIfNeeded(const Slice& blob,
                                                  uint64_t blob_offset) const {
   Status s = Status::OK();
 
-  auto blob_cache = immutable_options_->blob_cache;
+  BlobSource::SharedCacheInterface blob_cache{immutable_options_->blob_cache};
   auto statistics = immutable_options_->statistics.get();
   bool warm_cache =
       prepopulate_blob_cache_ == PrepopulateBlobCache::kFlushOnly &&
@@ -407,34 +408,12 @@ Status BlobFileBuilder::PutBlobIntoCacheIfNeeded(const Slice& blob,
 
     const Cache::Priority priority = Cache::Priority::BOTTOM;
 
-    // Objects to be put into the cache have to be heap-allocated and
-    // self-contained, i.e. own their contents. The Cache has to be able to
-    // take unique ownership of them.
-    CacheAllocationPtr allocation =
-        AllocateBlock(blob.size(), blob_cache->memory_allocator());
-    memcpy(allocation.get(), blob.data(), blob.size());
-    std::unique_ptr<BlobContents> buf =
-        BlobContents::Create(std::move(allocation), blob.size());
-
-    Cache::CacheItemHelper* const cache_item_helper =
-        BlobContents::GetCacheItemHelper();
-    assert(cache_item_helper);
-
-    if (immutable_options_->lowest_used_cache_tier ==
-        CacheTier::kNonVolatileBlockTier) {
-      s = blob_cache->Insert(key, buf.get(), cache_item_helper,
-                             buf->ApproximateMemoryUsage(),
-                             nullptr /* cache_handle */, priority);
-    } else {
-      s = blob_cache->Insert(key, buf.get(), buf->ApproximateMemoryUsage(),
-                             cache_item_helper->del_cb,
-                             nullptr /* cache_handle */, priority);
-    }
-
+    s = blob_cache.InsertSaved(key, blob, nullptr /*context*/, priority,
+                               immutable_options_->lowest_used_cache_tier);
     if (s.ok()) {
       RecordTick(statistics, BLOB_DB_CACHE_ADD);
-      RecordTick(statistics, BLOB_DB_CACHE_BYTES_WRITE, buf->size());
-      buf.release();
+      RecordTick(statistics, BLOB_DB_CACHE_BYTES_WRITE, blob.size());
     } else {
       RecordTick(statistics, BLOB_DB_CACHE_ADD_FAILURES);
     }
@@ -42,13 +42,13 @@ Status BlobFileCache::GetBlobFileReader(
   assert(blob_file_reader);
   assert(blob_file_reader->IsEmpty());
 
-  const Slice key = GetSlice(&blob_file_number);
+  const Slice key = GetSliceForKey(&blob_file_number);
 
   assert(cache_);
 
-  Cache::Handle* handle = cache_->Lookup(key);
+  TypedHandle* handle = cache_.Lookup(key);
   if (handle) {
-    *blob_file_reader = CacheHandleGuard<BlobFileReader>(cache_, handle);
+    *blob_file_reader = cache_.Guard(handle);
     return Status::OK();
   }
 
@@ -57,9 +57,9 @@ Status BlobFileCache::GetBlobFileReader(
   // Check again while holding mutex
   MutexLock lock(mutex_.get(key));
 
-  handle = cache_->Lookup(key);
+  handle = cache_.Lookup(key);
   if (handle) {
-    *blob_file_reader = CacheHandleGuard<BlobFileReader>(cache_, handle);
+    *blob_file_reader = cache_.Guard(handle);
     return Status::OK();
   }
 
@@ -84,8 +84,7 @@ Status BlobFileCache::GetBlobFileReader(
   {
     constexpr size_t charge = 1;
 
-    const Status s = cache_->Insert(key, reader.get(), charge,
-                                    &DeleteCacheEntry<BlobFileReader>, &handle);
+    const Status s = cache_.Insert(key, reader.get(), charge, &handle);
     if (!s.ok()) {
       RecordTick(statistics, NO_FILE_ERRORS);
       return s;
@@ -94,7 +93,7 @@ Status BlobFileCache::GetBlobFileReader(
 
   reader.release();
 
-  *blob_file_reader = CacheHandleGuard<BlobFileReader>(cache_, handle);
+  *blob_file_reader = cache_.Guard(handle);
 
   return Status::OK();
 }
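The repeated CacheHandleGuard constructions above collapse into cache_.Guard(). A condensed sketch of the pattern (hypothetical helper, not part of the patch; the alias mirrors the CacheInterface introduced in the header change below):

    using ReaderCache =
        BasicTypedCacheInterface<BlobFileReader, CacheEntryRole::kMisc>;

    CacheHandleGuard<BlobFileReader> GetGuardedReader(ReaderCache& cache,
                                                      const Slice& key) {
      // Guard() returns an empty guard when the lookup misses.
      return cache.Guard(cache.Lookup(key));
    }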
@@ -7,7 +8,8 @@
 
 #include <cinttypes>
 
-#include "cache/cache_helpers.h"
+#include "cache/typed_cache.h"
+#include "db/blob/blob_file_reader.h"
 #include "rocksdb/rocksdb_namespace.h"
 #include "util/mutexlock.h"
 
@@ -18,7 +19,6 @@ struct ImmutableOptions;
 struct FileOptions;
 class HistogramImpl;
 class Status;
-class BlobFileReader;
 class Slice;
 class IOTracer;
 
@@ -36,7 +36,10 @@ class BlobFileCache {
                            CacheHandleGuard<BlobFileReader>* blob_file_reader);
 
  private:
-  Cache* cache_;
+  using CacheInterface =
+      BasicTypedCacheInterface<BlobFileReader, CacheEntryRole::kMisc>;
+  using TypedHandle = CacheInterface::TypedHandle;
+  CacheInterface cache_;
   // Note: mutex_ below is used to guard against multiple threads racing to open
   // the same file.
   Striped<port::Mutex, Slice> mutex_;
@@ -569,12 +569,7 @@ Status BlobFileReader::UncompressBlobIfNeeded(
   assert(result);
 
   if (compression_type == kNoCompression) {
-    CacheAllocationPtr allocation =
-        AllocateBlock(value_slice.size(), allocator);
-    memcpy(allocation.get(), value_slice.data(), value_slice.size());
-
-    *result = BlobContents::Create(std::move(allocation), value_slice.size());
-
+    BlobContentsCreator::Create(result, nullptr, value_slice, allocator);
     return Status::OK();
   }
 
@@ -602,7 +597,7 @@ Status BlobFileReader::UncompressBlobIfNeeded(
     return Status::Corruption("Unable to uncompress blob");
   }
 
-  *result = BlobContents::Create(std::move(output), uncompressed_size);
+  result->reset(new BlobContents(std::move(output), uncompressed_size));
 
   return Status::OK();
 }
@@ -36,8 +36,8 @@ BlobSource::BlobSource(const ImmutableOptions* immutable_options,
   if (bbto &&
       bbto->cache_usage_options.options_overrides.at(CacheEntryRole::kBlobCache)
               .charged == CacheEntryRoleOptions::Decision::kEnabled) {
-    blob_cache_ = std::make_shared<ChargedCache>(immutable_options->blob_cache,
-                                                 bbto->block_cache);
+    blob_cache_ = SharedCacheInterface{std::make_shared<ChargedCache>(
+        immutable_options->blob_cache, bbto->block_cache)};
   }
 #endif  // ROCKSDB_LITE
 }
@@ -82,9 +82,8 @@ Status BlobSource::PutBlobIntoCache(
   assert(cached_blob);
   assert(cached_blob->IsEmpty());
 
-  Cache::Handle* cache_handle = nullptr;
+  TypedHandle* cache_handle = nullptr;
   const Status s = InsertEntryIntoCache(cache_key, blob->get(),
-                                        (*blob)->ApproximateMemoryUsage(),
                                         &cache_handle, Cache::Priority::BOTTOM);
   if (s.ok()) {
     blob->release();
@@ -106,26 +105,10 @@ Status BlobSource::PutBlobIntoCache(
   return s;
 }
 
-Cache::Handle* BlobSource::GetEntryFromCache(const Slice& key) const {
-  Cache::Handle* cache_handle = nullptr;
-
-  if (lowest_used_cache_tier_ == CacheTier::kNonVolatileBlockTier) {
-    Cache::CreateCallback create_cb =
-        [allocator = blob_cache_->memory_allocator()](
-            const void* buf, size_t size, void** out_obj,
-            size_t* charge) -> Status {
-      return BlobContents::CreateCallback(AllocateBlock(size, allocator), buf,
-                                          size, out_obj, charge);
-    };
-
-    cache_handle = blob_cache_->Lookup(key, BlobContents::GetCacheItemHelper(),
-                                       create_cb, Cache::Priority::BOTTOM,
-                                       true /* wait_for_cache */, statistics_);
-  } else {
-    cache_handle = blob_cache_->Lookup(key, statistics_);
-  }
-
-  return cache_handle;
+BlobSource::TypedHandle* BlobSource::GetEntryFromCache(const Slice& key) const {
+  return blob_cache_.LookupFull(
+      key, nullptr /* context */, Cache::Priority::BOTTOM,
+      true /* wait_for_cache */, statistics_, lowest_used_cache_tier_);
 }
 
 void BlobSource::PinCachedBlob(CacheHandleGuard<BlobContents>* cached_blob,
@@ -166,24 +149,11 @@ void BlobSource::PinOwnedBlob(std::unique_ptr<BlobContents>* owned_blob,
 }
 
 Status BlobSource::InsertEntryIntoCache(const Slice& key, BlobContents* value,
-                                        size_t charge,
-                                        Cache::Handle** cache_handle,
+                                        TypedHandle** cache_handle,
                                         Cache::Priority priority) const {
-  Status s;
-
-  Cache::CacheItemHelper* const cache_item_helper =
-      BlobContents::GetCacheItemHelper();
-  assert(cache_item_helper);
-
-  if (lowest_used_cache_tier_ == CacheTier::kNonVolatileBlockTier) {
-    s = blob_cache_->Insert(key, value, cache_item_helper, charge, cache_handle,
-                            priority);
-  } else {
-    s = blob_cache_->Insert(key, value, charge, cache_item_helper->del_cb,
-                            cache_handle, priority);
-  }
-
-  return s;
+  return blob_cache_.InsertFull(key, value, value->ApproximateMemoryUsage(),
+                                cache_handle, priority,
+                                lowest_used_cache_tier_);
 }
 
 Status BlobSource::GetBlob(const ReadOptions& read_options,
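The tier dispatch previously open-coded at each call site now lives behind InsertFull()/LookupFull(). A condensed sketch of the pattern (hypothetical function, not part of the patch; `tier` stands in for the lowest used cache tier):

    Status PutExample(BlobSource::SharedCacheInterface& cache, const Slice& key,
                      BlobContents* value, CacheTier tier) {
      // With kNonVolatileBlockTier, InsertFull attaches kFullHelper (secondary
      // cache compatible); otherwise it falls back to kBasicHelper, which
      // supports the primary cache only.
      return cache.InsertFull(key, value, value->ApproximateMemoryUsage(),
                              /*handle=*/nullptr, Cache::Priority::BOTTOM,
                              tier);
    }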
@@ -252,9 +222,10 @@ Status BlobSource::GetBlob(const ReadOptions& read_options,
       return Status::Corruption("Compression type mismatch when reading blob");
     }
 
-    MemoryAllocator* const allocator = (blob_cache_ && read_options.fill_cache)
-                                           ? blob_cache_->memory_allocator()
-                                           : nullptr;
+    MemoryAllocator* const allocator =
+        (blob_cache_ && read_options.fill_cache)
+            ? blob_cache_.get()->memory_allocator()
+            : nullptr;
 
     uint64_t read_size = 0;
     s = blob_file_reader.GetValue()->GetBlob(
@@ -418,9 +389,10 @@ void BlobSource::MultiGetBlobFromOneFile(const ReadOptions& read_options,
 
   assert(blob_file_reader.GetValue());
 
-  MemoryAllocator* const allocator = (blob_cache_ && read_options.fill_cache)
-                                         ? blob_cache_->memory_allocator()
-                                         : nullptr;
+  MemoryAllocator* const allocator =
+      (blob_cache_ && read_options.fill_cache)
+          ? blob_cache_.get()->memory_allocator()
+          : nullptr;
 
   blob_file_reader.GetValue()->MultiGetBlob(read_options, allocator,
                                             _blob_reqs, &_bytes_read);
@@ -8,8 +8,9 @@
 #include <cinttypes>
 #include <memory>
 
-#include "cache/cache_helpers.h"
 #include "cache/cache_key.h"
+#include "cache/typed_cache.h"
+#include "db/blob/blob_contents.h"
 #include "db/blob/blob_file_cache.h"
 #include "db/blob/blob_read_request.h"
 #include "rocksdb/cache.h"
@@ -23,7 +24,6 @@ struct ImmutableOptions;
 class Status;
 class FilePrefetchBuffer;
 class Slice;
-class BlobContents;
 
 // BlobSource is a class that provides universal access to blobs, regardless of
 // whether they are in the blob cache, secondary cache, or (remote) storage.
@@ -106,6 +106,14 @@ class BlobSource {
   bool TEST_BlobInCache(uint64_t file_number, uint64_t file_size,
                         uint64_t offset, size_t* charge = nullptr) const;
 
+  // For TypedSharedCacheInterface
+  void Create(BlobContents** out, const char* buf, size_t size,
+              MemoryAllocator* alloc);
+
+  using SharedCacheInterface =
+      FullTypedSharedCacheInterface<BlobContents, BlobContentsCreator>;
+  using TypedHandle = SharedCacheInterface::TypedHandle;
+
  private:
   Status GetBlobFromCache(const Slice& cache_key,
                           CacheHandleGuard<BlobContents>* cached_blob) const;
@@ -120,10 +128,10 @@ class BlobSource {
   static void PinOwnedBlob(std::unique_ptr<BlobContents>* owned_blob,
                            PinnableSlice* value);
 
-  Cache::Handle* GetEntryFromCache(const Slice& key) const;
+  TypedHandle* GetEntryFromCache(const Slice& key) const;
 
   Status InsertEntryIntoCache(const Slice& key, BlobContents* value,
-                              size_t charge, Cache::Handle** cache_handle,
+                              TypedHandle** cache_handle,
                               Cache::Priority priority) const;
 
   inline CacheKey GetCacheKey(uint64_t file_number, uint64_t /*file_size*/,
@@ -141,7 +149,7 @@ class BlobSource {
   BlobFileCache* blob_file_cache_;
 
   // A cache to store uncompressed blobs.
-  std::shared_ptr<Cache> blob_cache_;
+  mutable SharedCacheInterface blob_cache_;
 
   // The control option of how the cache tiers will be used. Currently rocksdb
   // support block/blob cache (volatile tier) and secondary cache (this tier
@@ -1150,15 +1150,6 @@ TEST_F(BlobSecondaryCacheTest, GetBlobsFromSecondaryCache) {
   auto blob_cache = options_.blob_cache;
   auto secondary_cache = lru_cache_opts_.secondary_cache;
 
-  Cache::CreateCallback create_cb = [](const void* buf, size_t size,
-                                       void** out_obj,
-                                       size_t* charge) -> Status {
-    CacheAllocationPtr allocation(new char[size]);
-
-    return BlobContents::CreateCallback(std::move(allocation), buf, size,
-                                        out_obj, charge);
-  };
-
   {
     // GetBlob
     std::vector<PinnableSlice> values(keys.size());
@@ -1219,14 +1210,15 @@ TEST_F(BlobSecondaryCacheTest, GetBlobsFromSecondaryCache) {
   {
     CacheKey cache_key = base_cache_key.WithOffset(blob_offsets[0]);
     const Slice key0 = cache_key.AsSlice();
-    auto handle0 = blob_cache->Lookup(key0, statistics);
+    auto handle0 = blob_cache->BasicLookup(key0, statistics);
     ASSERT_EQ(handle0, nullptr);
 
     // key0's item should be in the secondary cache.
     bool is_in_sec_cache = false;
-    auto sec_handle0 =
-        secondary_cache->Lookup(key0, create_cb, true,
-                                /*advise_erase=*/true, is_in_sec_cache);
+    auto sec_handle0 = secondary_cache->Lookup(
+        key0, &BlobSource::SharedCacheInterface::kFullHelper,
+        /*context*/ nullptr, true,
+        /*advise_erase=*/true, is_in_sec_cache);
     ASSERT_FALSE(is_in_sec_cache);
     ASSERT_NE(sec_handle0, nullptr);
     ASSERT_TRUE(sec_handle0->IsReady());
@@ -1246,14 +1238,15 @@ TEST_F(BlobSecondaryCacheTest, GetBlobsFromSecondaryCache) {
   {
     CacheKey cache_key = base_cache_key.WithOffset(blob_offsets[1]);
     const Slice key1 = cache_key.AsSlice();
-    auto handle1 = blob_cache->Lookup(key1, statistics);
+    auto handle1 = blob_cache->BasicLookup(key1, statistics);
     ASSERT_NE(handle1, nullptr);
     blob_cache->Release(handle1);
 
     bool is_in_sec_cache = false;
-    auto sec_handle1 =
-        secondary_cache->Lookup(key1, create_cb, true,
-                                /*advise_erase=*/true, is_in_sec_cache);
+    auto sec_handle1 = secondary_cache->Lookup(
+        key1, &BlobSource::SharedCacheInterface::kFullHelper,
+        /*context*/ nullptr, true,
+        /*advise_erase=*/true, is_in_sec_cache);
     ASSERT_FALSE(is_in_sec_cache);
     ASSERT_EQ(sec_handle1, nullptr);
 
@@ -1276,7 +1269,7 @@ TEST_F(BlobSecondaryCacheTest, GetBlobsFromSecondaryCache) {
     // key0 should be in the primary cache.
     CacheKey cache_key0 = base_cache_key.WithOffset(blob_offsets[0]);
     const Slice key0 = cache_key0.AsSlice();
-    auto handle0 = blob_cache->Lookup(key0, statistics);
+    auto handle0 = blob_cache->BasicLookup(key0, statistics);
     ASSERT_NE(handle0, nullptr);
     auto value = static_cast<BlobContents*>(blob_cache->Value(handle0));
     ASSERT_NE(value, nullptr);
@@ -1286,12 +1279,12 @@ TEST_F(BlobSecondaryCacheTest, GetBlobsFromSecondaryCache) {
     // key1 is not in the primary cache and is in the secondary cache.
     CacheKey cache_key1 = base_cache_key.WithOffset(blob_offsets[1]);
     const Slice key1 = cache_key1.AsSlice();
-    auto handle1 = blob_cache->Lookup(key1, statistics);
+    auto handle1 = blob_cache->BasicLookup(key1, statistics);
     ASSERT_EQ(handle1, nullptr);
 
     // erase key0 from the primary cache.
     blob_cache->Erase(key0);
-    handle0 = blob_cache->Lookup(key0, statistics);
+    handle0 = blob_cache->BasicLookup(key0, statistics);
     ASSERT_EQ(handle0, nullptr);
 
     // key1 promotion should succeed due to the primary cache being empty. we
@@ -1307,7 +1300,7 @@ TEST_F(BlobSecondaryCacheTest, GetBlobsFromSecondaryCache) {
     // in the secondary cache. So, the primary cache's Lookup() without
    // secondary cache support cannot see it. (NOTE: The dummy handle used
     // to be a leaky abstraction but not anymore.)
-    handle1 = blob_cache->Lookup(key1, statistics);
+    handle1 = blob_cache->BasicLookup(key1, statistics);
     ASSERT_EQ(handle1, nullptr);
 
     // But after another access, it is promoted to primary cache
@@ -1315,7 +1308,7 @@ TEST_F(BlobSecondaryCacheTest, GetBlobsFromSecondaryCache) {
                                 blob_offsets[1]));
 
     // And Lookup() can find it (without secondary cache support)
-    handle1 = blob_cache->Lookup(key1, statistics);
+    handle1 = blob_cache->BasicLookup(key1, statistics);
     ASSERT_NE(handle1, nullptr);
     ASSERT_NE(blob_cache->Value(handle1), nullptr);
     blob_cache->Release(handle1);
@@ -3537,24 +3537,27 @@ class DBBasicTestMultiGet : public DBTestBase {
 
     const char* Name() const override { return "MyBlockCache"; }
 
-    using Cache::Insert;
-    Status Insert(const Slice& key, void* value, size_t charge,
-                  void (*deleter)(const Slice& key, void* value),
+    Status Insert(const Slice& key, Cache::ObjectPtr value,
+                  const CacheItemHelper* helper, size_t charge,
                   Handle** handle = nullptr,
                   Priority priority = Priority::LOW) override {
       num_inserts_++;
-      return target_->Insert(key, value, charge, deleter, handle, priority);
+      return target_->Insert(key, value, helper, charge, handle, priority);
     }
 
-    using Cache::Lookup;
-    Handle* Lookup(const Slice& key, Statistics* stats = nullptr) override {
+    Handle* Lookup(const Slice& key, const CacheItemHelper* helper,
+                   CreateContext* create_context,
+                   Priority priority = Priority::LOW, bool wait = true,
+                   Statistics* stats = nullptr) override {
      num_lookups_++;
-      Handle* handle = target_->Lookup(key, stats);
+      Handle* handle =
+          target_->Lookup(key, helper, create_context, priority, wait, stats);
      if (handle != nullptr) {
        num_found_++;
      }
      return handle;
    }

    int num_lookups() { return num_lookups_; }

    int num_found() { return num_found_; }
@@ -14,6 +14,7 @@
 #include "cache/cache_entry_roles.h"
 #include "cache/cache_key.h"
 #include "cache/lru_cache.h"
+#include "cache/typed_cache.h"
 #include "db/column_family.h"
 #include "db/db_impl/db_impl.h"
 #include "db/db_test_util.h"
@@ -365,9 +366,7 @@ class PersistentCacheFromCache : public PersistentCache {
     }
     std::unique_ptr<char[]> copy{new char[size]};
     std::copy_n(data, size, copy.get());
-    Status s = cache_->Insert(
-        key, copy.get(), size,
-        GetCacheEntryDeleterForRole<char[], CacheEntryRole::kMisc>());
+    Status s = cache_.Insert(key, copy.get(), size);
     if (s.ok()) {
       copy.release();
     }
@@ -376,13 +375,13 @@ class PersistentCacheFromCache : public PersistentCache {
 
   Status Lookup(const Slice& key, std::unique_ptr<char[]>* data,
                 size_t* size) override {
-    auto handle = cache_->Lookup(key);
+    auto handle = cache_.Lookup(key);
     if (handle) {
-      char* ptr = static_cast<char*>(cache_->Value(handle));
-      *size = cache_->GetCharge(handle);
+      char* ptr = cache_.Value(handle);
+      *size = cache_.get()->GetCharge(handle);
       data->reset(new char[*size]);
       std::copy_n(ptr, *size, data->get());
-      cache_->Release(handle);
+      cache_.Release(handle);
       return Status::OK();
     } else {
       return Status::NotFound();
@@ -395,10 +394,10 @@ class PersistentCacheFromCache : public PersistentCache {
 
   std::string GetPrintableOptions() const override { return ""; }
 
-  uint64_t NewId() override { return cache_->NewId(); }
+  uint64_t NewId() override { return cache_.get()->NewId(); }
 
  private:
-  std::shared_ptr<Cache> cache_;
+  BasicTypedSharedCacheInterface<char[], CacheEntryRole::kMisc> cache_;
   bool read_only_;
 };
 
@@ -406,8 +405,8 @@ class ReadOnlyCacheWrapper : public CacheWrapper {
   using CacheWrapper::CacheWrapper;
 
   using Cache::Insert;
-  Status Insert(const Slice& /*key*/, void* /*value*/, size_t /*charge*/,
-                void (*)(const Slice& key, void* value) /*deleter*/,
+  Status Insert(const Slice& /*key*/, Cache::ObjectPtr /*value*/,
+                const CacheItemHelper* /*helper*/, size_t /*charge*/,
                 Handle** /*handle*/, Priority /*priority*/) override {
     return Status::NotSupported();
   }
@@ -827,16 +826,15 @@ class MockCache : public LRUCache {
 
   using ShardedCache::Insert;
 
-  Status Insert(const Slice& key, void* value,
-                const Cache::CacheItemHelper* helper_cb, size_t charge,
+  Status Insert(const Slice& key, Cache::ObjectPtr value,
+                const Cache::CacheItemHelper* helper, size_t charge,
                 Handle** handle, Priority priority) override {
-    DeleterFn delete_cb = helper_cb->del_cb;
     if (priority == Priority::LOW) {
       low_pri_insert_count++;
     } else {
      high_pri_insert_count++;
    }
-    return LRUCache::Insert(key, value, charge, delete_cb, handle, priority);
+    return LRUCache::Insert(key, value, helper, charge, handle, priority);
   }
 };
 
@@ -916,7 +914,10 @@ class LookupLiarCache : public CacheWrapper {
       : CacheWrapper(std::move(target)) {}
 
   using Cache::Lookup;
-  Handle* Lookup(const Slice& key, Statistics* stats) override {
+  Handle* Lookup(const Slice& key, const CacheItemHelper* helper = nullptr,
+                 CreateContext* create_context = nullptr,
+                 Priority priority = Priority::LOW, bool wait = true,
+                 Statistics* stats = nullptr) override {
     if (nth_lookup_not_found_ == 1) {
       nth_lookup_not_found_ = 0;
       return nullptr;
@@ -924,7 +925,8 @@ class LookupLiarCache : public CacheWrapper {
     if (nth_lookup_not_found_ > 1) {
       --nth_lookup_not_found_;
     }
-    return CacheWrapper::Lookup(key, stats);
+    return CacheWrapper::Lookup(key, helper, create_context, priority, wait,
+                                stats);
   }
 
   // 1 == next lookup, 2 == after next, etc.
@@ -1275,12 +1277,11 @@ TEST_F(DBBlockCacheTest, CacheCompressionDict) {
 }
 
 static void ClearCache(Cache* cache) {
-  auto roles = CopyCacheDeleterRoleMap();
   std::deque<std::string> keys;
   Cache::ApplyToAllEntriesOptions opts;
-  auto callback = [&](const Slice& key, void* /*value*/, size_t /*charge*/,
-                      Cache::DeleterFn deleter) {
-    if (roles.find(deleter) == roles.end()) {
+  auto callback = [&](const Slice& key, Cache::ObjectPtr, size_t /*charge*/,
+                      const Cache::CacheItemHelper* helper) {
+    if (helper && helper->role == CacheEntryRole::kMisc) {
       // Keep the stats collector
       return;
     }
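With a helper on every entry, role classification no longer needs a deleter-to-role map. A sketch of the new callback shape (hypothetical tallying code, not part of the patch; GetCacheEntryRoleName is assumed to be the existing role-name utility):

    auto callback = [](const Slice& /*key*/, Cache::ObjectPtr /*value*/,
                       size_t charge, const Cache::CacheItemHelper* helper) {
      // Every entry now carries its role directly.
      CacheEntryRole role = helper ? helper->role : CacheEntryRole::kMisc;
      // e.g. accumulate `charge` per GetCacheEntryRoleName(role)
      (void)charge;
      (void)role;
    };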
@@ -1450,14 +1451,13 @@ TEST_F(DBBlockCacheTest, CacheEntryRoleStats) {
     ClearCache(cache.get());
     Cache::Handle* h = nullptr;
     if (strcmp(cache->Name(), "LRUCache") == 0) {
-      ASSERT_OK(cache->Insert("Fill-it-up", nullptr, capacity + 1,
-                              GetNoopDeleterForRole<CacheEntryRole::kMisc>(),
-                              &h, Cache::Priority::HIGH));
+      ASSERT_OK(cache->Insert("Fill-it-up", nullptr, &kNoopCacheItemHelper,
+                              capacity + 1, &h, Cache::Priority::HIGH));
     } else {
       // For ClockCache we use a 16-byte key.
-      ASSERT_OK(cache->Insert("Fill-it-up-xxxxx", nullptr, capacity + 1,
-                              GetNoopDeleterForRole<CacheEntryRole::kMisc>(),
-                              &h, Cache::Priority::HIGH));
+      ASSERT_OK(cache->Insert("Fill-it-up-xxxxx", nullptr,
+                              &kNoopCacheItemHelper, capacity + 1, &h,
+                              Cache::Priority::HIGH));
     }
     ASSERT_GT(cache->GetUsage(), cache->GetCapacity());
     expected = {};
@@ -1548,7 +1548,7 @@ void DummyFillCache(Cache& cache, size_t entry_size,
     size_t charge = std::min(entry_size, capacity - my_usage);
     Cache::Handle* handle;
     Status st = cache.Insert(ck.WithOffset(my_usage).AsSlice(), fake_value,
-                             charge, /*deleter*/ nullptr, &handle);
+                             &kNoopCacheItemHelper, charge, &handle);
     ASSERT_OK(st);
     handles.emplace_back(&cache, handle);
     my_usage += charge;
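Placeholder entries (a charge with no object) now pass kNoopCacheItemHelper rather than a null deleter; PlaceholderCacheInterface in typed_cache.h wraps the same idea with a role. A sketch (hypothetical key and charge; real caches such as HyperClockCache may require fixed-size keys):

    Cache::Handle* handle = nullptr;
    // A null value plus the noop helper reserves capacity without an object.
    Status s = cache.Insert(/*hypothetical 16-byte key*/ "0123456789abcdef",
                            /*obj=*/nullptr, &kNoopCacheItemHelper,
                            /*charge=*/1024, &handle);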
@@ -1848,8 +1848,8 @@ TEST_F(DBPropertiesTest, BlobCacheProperties) {
 
   // Insert unpinned blob to the cache and check size.
   constexpr size_t kSize1 = 70;
-  ASSERT_OK(blob_cache->Insert("blob1", nullptr /*value*/, kSize1,
-                               nullptr /*deleter*/));
+  ASSERT_OK(blob_cache->Insert("blob1", nullptr /*value*/,
+                               &kNoopCacheItemHelper, kSize1));
   ASSERT_TRUE(db_->GetIntProperty(DB::Properties::kBlobCacheCapacity, &value));
   ASSERT_EQ(kCapacity, value);
   ASSERT_TRUE(db_->GetIntProperty(DB::Properties::kBlobCacheUsage, &value));
@@ -1861,8 +1861,8 @@ TEST_F(DBPropertiesTest, BlobCacheProperties) {
   // Insert pinned blob to the cache and check size.
   constexpr size_t kSize2 = 60;
   Cache::Handle* blob2 = nullptr;
-  ASSERT_OK(blob_cache->Insert("blob2", nullptr /*value*/, kSize2,
-                               nullptr /*deleter*/, &blob2));
+  ASSERT_OK(blob_cache->Insert("blob2", nullptr /*value*/,
+                               &kNoopCacheItemHelper, kSize2, &blob2));
   ASSERT_NE(nullptr, blob2);
   ASSERT_TRUE(db_->GetIntProperty(DB::Properties::kBlobCacheCapacity, &value));
   ASSERT_EQ(kCapacity, value);
@@ -1876,8 +1876,8 @@ TEST_F(DBPropertiesTest, BlobCacheProperties) {
   // Insert another pinned blob to make the cache over-sized.
   constexpr size_t kSize3 = 80;
   Cache::Handle* blob3 = nullptr;
-  ASSERT_OK(blob_cache->Insert("blob3", nullptr /*value*/, kSize3,
-                               nullptr /*deleter*/, &blob3));
+  ASSERT_OK(blob_cache->Insert("blob3", nullptr /*value*/,
+                               &kNoopCacheItemHelper, kSize3, &blob3));
   ASSERT_NE(nullptr, blob3);
   ASSERT_TRUE(db_->GetIntProperty(DB::Properties::kBlobCacheCapacity, &value));
   ASSERT_EQ(kCapacity, value);
@@ -1956,8 +1956,8 @@ TEST_F(DBPropertiesTest, BlockCacheProperties) {
 
   // Insert unpinned item to the cache and check size.
   constexpr size_t kSize1 = 50;
-  ASSERT_OK(block_cache->Insert("item1", nullptr /*value*/, kSize1,
-                                nullptr /*deleter*/));
+  ASSERT_OK(block_cache->Insert("item1", nullptr /*value*/,
+                                &kNoopCacheItemHelper, kSize1));
   ASSERT_TRUE(db_->GetIntProperty(DB::Properties::kBlockCacheCapacity, &value));
   ASSERT_EQ(kCapacity, value);
   ASSERT_TRUE(db_->GetIntProperty(DB::Properties::kBlockCacheUsage, &value));
@@ -1969,8 +1969,8 @@ TEST_F(DBPropertiesTest, BlockCacheProperties) {
   // Insert pinned item to the cache and check size.
   constexpr size_t kSize2 = 30;
   Cache::Handle* item2 = nullptr;
-  ASSERT_OK(block_cache->Insert("item2", nullptr /*value*/, kSize2,
-                                nullptr /*deleter*/, &item2));
+  ASSERT_OK(block_cache->Insert("item2", nullptr /*value*/,
+                                &kNoopCacheItemHelper, kSize2, &item2));
   ASSERT_NE(nullptr, item2);
   ASSERT_TRUE(db_->GetIntProperty(DB::Properties::kBlockCacheCapacity, &value));
   ASSERT_EQ(kCapacity, value);
@@ -1983,8 +1983,8 @@ TEST_F(DBPropertiesTest, BlockCacheProperties) {
   // Insert another pinned item to make the cache over-sized.
   constexpr size_t kSize3 = 80;
   Cache::Handle* item3 = nullptr;
-  ASSERT_OK(block_cache->Insert("item3", nullptr /*value*/, kSize3,
-                                nullptr /*deleter*/, &item3));
+  ASSERT_OK(block_cache->Insert("item3", nullptr /*value*/,
+                                &kNoopCacheItemHelper, kSize3, &item3));
   ASSERT_NE(nullptr, item2);
|
||||||
ASSERT_TRUE(db_->GetIntProperty(DB::Properties::kBlockCacheCapacity, &value));
|
ASSERT_TRUE(db_->GetIntProperty(DB::Properties::kBlockCacheCapacity, &value));
|
||||||
ASSERT_EQ(kCapacity, value);
|
ASSERT_EQ(kCapacity, value);
|
||||||
|
|
|
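Every test update above is the same mechanical translation: the `(charge, deleter)` argument pair becomes `(&helper, charge)`. A minimal compilable sketch of the new shape, using simplified stand-in types rather than the real RocksDB declarations:

```cpp
#include <cstddef>
#include <map>
#include <string>

// Toy stand-ins for the real RocksDB declarations.
enum class CacheEntryRole { kMisc, kDataBlock };

// The helper bundles what used to be passed piecemeal at each call site:
// the entry's role (for stats) and its deleter. The real helper also
// carries the secondary-cache save/create callbacks.
struct CacheItemHelper {
  CacheEntryRole role;
  void (*del_cb)(void* obj);
};

constexpr CacheItemHelper kNoopCacheItemHelper{CacheEntryRole::kMisc, nullptr};

struct ToyCache {
  struct Entry {
    void* obj;
    const CacheItemHelper* helper;
    size_t charge;
  };
  std::map<std::string, Entry> entries;

  // New-style shape: helper comes before charge, as in the refactored
  // Cache::Insert(key, value, helper, charge, handle, priority).
  bool Insert(const std::string& key, void* obj, const CacheItemHelper* helper,
              size_t charge) {
    entries[key] = Entry{obj, helper, charge};
    return true;
  }
};

int main() {
  ToyCache cache;
  // The no-op helper replaces GetNoopDeleterForRole<CacheEntryRole::kMisc>().
  cache.Insert("Fill-it-up", nullptr, &kNoopCacheItemHelper, /*charge=*/100);
}
```

Because the helper is a single static object per producer, the cache no longer needs per-entry state to remember how an entry should be freed.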
@@ -1723,12 +1723,13 @@ TargetCacheChargeTrackingCache<R>::TargetCacheChargeTrackingCache(
      cache_charge_increments_sum_(0) {}

 template <CacheEntryRole R>
-Status TargetCacheChargeTrackingCache<R>::Insert(
-    const Slice& key, void* value, size_t charge,
-    void (*deleter)(const Slice& key, void* value), Handle** handle,
-    Priority priority) {
-  Status s = target_->Insert(key, value, charge, deleter, handle, priority);
-  if (deleter == kNoopDeleter) {
+Status TargetCacheChargeTrackingCache<R>::Insert(const Slice& key,
+                                                 ObjectPtr value,
+                                                 const CacheItemHelper* helper,
+                                                 size_t charge, Handle** handle,
+                                                 Priority priority) {
+  Status s = target_->Insert(key, value, helper, charge, handle, priority);
+  if (helper == kCrmHelper) {
    if (last_peak_tracked_) {
      cache_charge_peak_ = 0;
      cache_charge_increment_ = 0;
@@ -1747,8 +1748,8 @@ Status TargetCacheChargeTrackingCache<R>::Insert(
 template <CacheEntryRole R>
 bool TargetCacheChargeTrackingCache<R>::Release(Handle* handle,
                                                 bool erase_if_last_ref) {
-  auto deleter = GetDeleter(handle);
-  if (deleter == kNoopDeleter) {
+  auto helper = GetCacheItemHelper(handle);
+  if (helper == kCrmHelper) {
    if (!last_peak_tracked_) {
      cache_charge_peaks_.push_back(cache_charge_peak_);
      cache_charge_increments_sum_ += cache_charge_increment_;
@@ -1761,8 +1762,8 @@ bool TargetCacheChargeTrackingCache<R>::Release(Handle* handle,
 }

 template <CacheEntryRole R>
-const Cache::DeleterFn TargetCacheChargeTrackingCache<R>::kNoopDeleter =
-    CacheReservationManagerImpl<R>::TEST_GetNoopDeleterForRole();
+const Cache::CacheItemHelper* TargetCacheChargeTrackingCache<R>::kCrmHelper =
+    CacheReservationManagerImpl<R>::TEST_GetCacheItemHelperForRole();

 template class TargetCacheChargeTrackingCache<
    CacheEntryRole::kFilterConstruction>;
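The tracking cache now recognizes CacheReservationManager entries by comparing helper pointers rather than deleter pointers. A small sketch of that identity-comparison idea (toy types, not the real declarations):

```cpp
// Toy stand-in for the real CacheItemHelper.
struct CacheItemHelper {
  int role;
};

// One long-lived helper per producer. Pointer identity then identifies who
// inserted an entry, just as the tracking cache compares against kCrmHelper.
static const CacheItemHelper kCrmHelper{/*role=*/1};

bool IsCrmEntry(const CacheItemHelper* helper) {
  // Less fragile than comparing deleter function pointers, and it still
  // works when unrelated entries happen to share a deleter.
  return helper == &kCrmHelper;
}

int main() { return IsCrmEntry(&kCrmHelper) ? 0 : 1; }
```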
@@ -903,17 +903,18 @@ class CacheWrapper : public Cache {

  const char* Name() const override { return target_->Name(); }

-  using Cache::Insert;
-  Status Insert(const Slice& key, void* value, size_t charge,
-                void (*deleter)(const Slice& key, void* value),
+  Status Insert(const Slice& key, ObjectPtr value,
+                const CacheItemHelper* helper, size_t charge,
                Handle** handle = nullptr,
                Priority priority = Priority::LOW) override {
-    return target_->Insert(key, value, charge, deleter, handle, priority);
+    return target_->Insert(key, value, helper, charge, handle, priority);
  }

-  using Cache::Lookup;
-  Handle* Lookup(const Slice& key, Statistics* stats = nullptr) override {
-    return target_->Lookup(key, stats);
+  Handle* Lookup(const Slice& key, const CacheItemHelper* helper,
+                 CreateContext* create_context,
+                 Priority priority = Priority::LOW, bool wait = true,
+                 Statistics* stats = nullptr) override {
+    return target_->Lookup(key, helper, create_context, priority, wait, stats);
  }

  bool Ref(Handle* handle) override { return target_->Ref(handle); }
@@ -923,7 +924,7 @@ class CacheWrapper : public Cache {
    return target_->Release(handle, erase_if_last_ref);
  }

-  void* Value(Handle* handle) override { return target_->Value(handle); }
+  ObjectPtr Value(Handle* handle) override { return target_->Value(handle); }

  void Erase(const Slice& key) override { target_->Erase(key); }
  uint64_t NewId() override { return target_->NewId(); }
@@ -952,18 +953,13 @@ class CacheWrapper : public Cache {
    return target_->GetCharge(handle);
  }

-  DeleterFn GetDeleter(Handle* handle) const override {
-    return target_->GetDeleter(handle);
-  }
-
-  void ApplyToAllCacheEntries(void (*callback)(void*, size_t),
-                              bool thread_safe) override {
-    target_->ApplyToAllCacheEntries(callback, thread_safe);
+  const CacheItemHelper* GetCacheItemHelper(Handle* handle) const override {
+    return target_->GetCacheItemHelper(handle);
  }

  void ApplyToAllEntries(
-      const std::function<void(const Slice& key, void* value, size_t charge,
-                               DeleterFn deleter)>& callback,
+      const std::function<void(const Slice& key, ObjectPtr value, size_t charge,
+                               const CacheItemHelper* helper)>& callback,
      const ApplyToAllEntriesOptions& opts) override {
    target_->ApplyToAllEntries(callback, opts);
  }
@@ -991,9 +987,8 @@ class TargetCacheChargeTrackingCache : public CacheWrapper {
 public:
  explicit TargetCacheChargeTrackingCache(std::shared_ptr<Cache> target);

-  using Cache::Insert;
-  Status Insert(const Slice& key, void* value, size_t charge,
-                void (*deleter)(const Slice& key, void* value),
+  Status Insert(const Slice& key, ObjectPtr value,
+                const CacheItemHelper* helper, size_t charge,
                Handle** handle = nullptr,
                Priority priority = Priority::LOW) override;

@@ -1009,7 +1004,7 @@ class TargetCacheChargeTrackingCache : public CacheWrapper {
  }

 private:
-  static const Cache::DeleterFn kNoopDeleter;
+  static const Cache::CacheItemHelper* kCrmHelper;

  std::size_t cur_cache_charge_;
  std::size_t cache_charge_peak_;
@@ -659,17 +659,13 @@ void InternalStats::CollectCacheEntryStats(bool foreground) {
                               min_interval_factor);
 }

-std::function<void(const Slice&, void*, size_t, Cache::DeleterFn)>
+std::function<void(const Slice& key, Cache::ObjectPtr value, size_t charge,
+                   const Cache::CacheItemHelper* helper)>
 InternalStats::CacheEntryRoleStats::GetEntryCallback() {
-  return [&](const Slice& /*key*/, void* /*value*/, size_t charge,
-             Cache::DeleterFn deleter) {
-    auto e = role_map_.find(deleter);
-    size_t role_idx;
-    if (e == role_map_.end()) {
-      role_idx = static_cast<size_t>(CacheEntryRole::kMisc);
-    } else {
-      role_idx = static_cast<size_t>(e->second);
-    }
+  return [&](const Slice& /*key*/, Cache::ObjectPtr /*value*/, size_t charge,
+             const Cache::CacheItemHelper* helper) -> void {
+    size_t role_idx =
+        static_cast<size_t>(helper ? helper->role : CacheEntryRole::kMisc);
    entry_counts[role_idx]++;
    total_charges[role_idx] += charge;
  };
@@ -680,7 +676,6 @@ void InternalStats::CacheEntryRoleStats::BeginCollection(
  Clear();
  last_start_time_micros_ = start_time_micros;
  ++collection_count;
-  role_map_ = CopyCacheDeleterRoleMap();
  std::ostringstream str;
  str << cache->Name() << "@" << static_cast<void*>(cache) << "#"
      << port::GetProcessID();
@@ -472,7 +472,8 @@ class InternalStats {
  }

  void BeginCollection(Cache*, SystemClock*, uint64_t start_time_micros);
-  std::function<void(const Slice&, void*, size_t, Cache::DeleterFn)>
+  std::function<void(const Slice& key, Cache::ObjectPtr value, size_t charge,
+                     const Cache::CacheItemHelper* helper)>
  GetEntryCallback();
  void EndCollection(Cache*, SystemClock*, uint64_t end_time_micros);
  void SkippedCollection();
@@ -482,7 +483,6 @@ class InternalStats {
                           SystemClock* clock) const;

 private:
-  UnorderedMap<Cache::DeleterFn, CacheEntryRole> role_map_;
  uint64_t GetLastDurationMicros() const;
 };
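With the deleter-to-role map gone, the stats callback reads the role straight off the helper. A compilable sketch of that classification, with simplified stand-ins for the real types:

```cpp
#include <array>
#include <cstddef>

enum class CacheEntryRole { kDataBlock, kFilterBlock, kMisc, kNumCacheEntryRoles };
struct CacheItemHelper {
  CacheEntryRole role;
};

struct RoleStats {
  std::array<size_t, static_cast<size_t>(CacheEntryRole::kNumCacheEntryRoles)>
      entry_counts{};
  std::array<size_t, static_cast<size_t>(CacheEntryRole::kNumCacheEntryRoles)>
      total_charges{};

  // Entries inserted with no helper fall back to kMisc, mirroring the
  // `helper ? helper->role : CacheEntryRole::kMisc` expression above.
  void Account(const CacheItemHelper* helper, size_t charge) {
    size_t role_idx =
        static_cast<size_t>(helper ? helper->role : CacheEntryRole::kMisc);
    entry_counts[role_idx]++;
    total_charges[role_idx] += charge;
  }
};

int main() {
  RoleStats stats;
  CacheItemHelper data_helper{CacheEntryRole::kDataBlock};
  stats.Account(&data_helper, 4096);  // classified by the helper's role
  stats.Account(nullptr, 16);         // no helper: counted as kMisc
}
```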
@@ -31,16 +31,6 @@
 #include "util/coding.h"
 #include "util/stop_watch.h"

-namespace ROCKSDB_NAMESPACE {
-namespace {
-template <class T>
-static void DeleteEntry(const Slice& /*key*/, void* value) {
-  T* typed_value = reinterpret_cast<T*>(value);
-  delete typed_value;
-}
-}  // anonymous namespace
-}  // namespace ROCKSDB_NAMESPACE
-
 // Generate the regular and coroutine versions of some methods by
 // including table_cache_sync_and_async.h twice
 // Macros in the header will expand differently based on whether
@@ -58,12 +48,6 @@ namespace ROCKSDB_NAMESPACE {

 namespace {

-static void UnrefEntry(void* arg1, void* arg2) {
-  Cache* cache = reinterpret_cast<Cache*>(arg1);
-  Cache::Handle* h = reinterpret_cast<Cache::Handle*>(arg2);
-  cache->Release(h);
-}
-
 static Slice GetSliceForFileNumber(const uint64_t* file_number) {
  return Slice(reinterpret_cast<const char*>(file_number),
               sizeof(*file_number));
@@ -105,14 +89,6 @@ TableCache::TableCache(const ImmutableOptions& ioptions,

 TableCache::~TableCache() {}

-TableReader* TableCache::GetTableReaderFromHandle(Cache::Handle* handle) {
-  return reinterpret_cast<TableReader*>(cache_->Value(handle));
-}
-
-void TableCache::ReleaseHandle(Cache::Handle* handle) {
-  cache_->Release(handle);
-}
-
 Status TableCache::GetTableReader(
    const ReadOptions& ro, const FileOptions& file_options,
    const InternalKeyComparator& internal_comparator,
@@ -178,17 +154,10 @@ Status TableCache::GetTableReader(
  return s;
 }

-void TableCache::EraseHandle(const FileDescriptor& fd, Cache::Handle* handle) {
-  ReleaseHandle(handle);
-  uint64_t number = fd.GetNumber();
-  Slice key = GetSliceForFileNumber(&number);
-  cache_->Erase(key);
-}
-
 Status TableCache::FindTable(
    const ReadOptions& ro, const FileOptions& file_options,
    const InternalKeyComparator& internal_comparator,
-    const FileMetaData& file_meta, Cache::Handle** handle,
+    const FileMetaData& file_meta, TypedHandle** handle,
    const std::shared_ptr<const SliceTransform>& prefix_extractor,
    const bool no_io, bool record_read_stats, HistogramImpl* file_read_hist,
    bool skip_filters, int level, bool prefetch_index_and_filter_in_cache,
@@ -196,7 +165,7 @@ Status TableCache::FindTable(
  PERF_TIMER_GUARD_WITH_CLOCK(find_table_nanos, ioptions_.clock);
  uint64_t number = file_meta.fd.GetNumber();
  Slice key = GetSliceForFileNumber(&number);
-  *handle = cache_->Lookup(key);
+  *handle = cache_.Lookup(key);
  TEST_SYNC_POINT_CALLBACK("TableCache::FindTable:0",
                           const_cast<bool*>(&no_io));

@@ -206,7 +175,7 @@ Status TableCache::FindTable(
  }
  MutexLock load_lock(loader_mutex_.get(key));
  // We check the cache again under loading mutex
-  *handle = cache_->Lookup(key);
+  *handle = cache_.Lookup(key);
  if (*handle != nullptr) {
    return Status::OK();
  }
@@ -224,8 +193,7 @@ Status TableCache::FindTable(
      // We do not cache error results so that if the error is transient,
      // or somebody repairs the file, we recover automatically.
    } else {
-      s = cache_->Insert(key, table_reader.get(), 1, &DeleteEntry<TableReader>,
-                         handle);
+      s = cache_.Insert(key, table_reader.get(), 1, handle);
      if (s.ok()) {
        // Release ownership of table reader.
        table_reader.release();
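The `cache_.Insert(key, table_reader.get(), 1, handle)` followed by `table_reader.release()` is an ownership handoff: on a successful insert, the cache's helper becomes responsible for deleting the object. A hedged sketch of the idiom with a toy typed cache (the real `BasicTypedCacheInterface` in cache/typed_cache.h differs in detail):

```cpp
#include <cstddef>
#include <map>
#include <memory>
#include <string>

struct TableReader { /* ... */ };

// Toy stand-in for a typed cache interface: Insert takes a raw pointer and,
// on success, the cache becomes responsible for deleting it.
template <typename T>
struct ToyTypedCache {
  std::map<std::string, std::unique_ptr<T>> map;
  bool Insert(const std::string& key, T* obj, size_t /*charge*/) {
    map[key] = std::unique_ptr<T>(obj);
    return true;
  }
  T* Value(const std::string& key) { return map.at(key).get(); }
};

int main() {
  ToyTypedCache<TableReader> cache;
  auto table_reader = std::make_unique<TableReader>();
  if (cache.Insert("file#1", table_reader.get(), /*charge=*/1)) {
    // Mirrors TableCache::FindTable: release local ownership to the cache
    // instead of double-deleting.
    table_reader.release();
  }
}
```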
@@ -251,7 +219,7 @@ InternalIterator* TableCache::NewIterator(

  Status s;
  TableReader* table_reader = nullptr;
-  Cache::Handle* handle = nullptr;
+  TypedHandle* handle = nullptr;
  if (table_reader_ptr != nullptr) {
    *table_reader_ptr = nullptr;
  }
@@ -266,7 +234,7 @@ InternalIterator* TableCache::NewIterator(
                  level, true /* prefetch_index_and_filter_in_cache */,
                  max_file_size_for_l0_meta_pin, file_meta.temperature);
    if (s.ok()) {
-      table_reader = GetTableReaderFromHandle(handle);
+      table_reader = cache_.Value(handle);
    }
  }
  InternalIterator* result = nullptr;
@@ -280,7 +248,7 @@ InternalIterator* TableCache::NewIterator(
          file_options.compaction_readahead_size, allow_unprepared_value);
    }
    if (handle != nullptr) {
-      result->RegisterCleanup(&UnrefEntry, cache_, handle);
+      cache_.RegisterReleaseAsCleanup(handle, *result);
      handle = nullptr;  // prevent from releasing below
    }

@@ -330,7 +298,7 @@ InternalIterator* TableCache::NewIterator(
  }

  if (handle != nullptr) {
-    ReleaseHandle(handle);
+    cache_.Release(handle);
  }
  if (!s.ok()) {
    assert(result == nullptr);
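`RegisterReleaseAsCleanup(handle, *result)` replaces the old `&UnrefEntry` free function and its two `void*` casts. A minimal sketch of the pattern, assuming simplified types (the real Cleanable stores raw function pointers rather than std::function):

```cpp
#include <functional>
#include <utility>
#include <vector>

struct Handle {};

// Simplified Cleanable: runs registered cleanups on destruction.
struct Cleanable {
  std::vector<std::function<void()>> cleanups;
  void RegisterCleanup(std::function<void()> fn) {
    cleanups.push_back(std::move(fn));
  }
  ~Cleanable() {
    for (auto& fn : cleanups) fn();
  }
};

struct ToyCacheInterface {
  void Release(Handle* /*h*/) { /* unref the cache entry */ }
  // Ties the handle's lifetime to a cleanable object such as an iterator,
  // like cache_.RegisterReleaseAsCleanup(handle, *result) above.
  void RegisterReleaseAsCleanup(Handle* h, Cleanable& c) {
    c.RegisterCleanup([this, h] { Release(h); });
  }
};

int main() {
  ToyCacheInterface cache;
  Handle h;
  Cleanable iter;
  cache.RegisterReleaseAsCleanup(&h, iter);
}  // destroying iter releases the handle
```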
@@ -348,12 +316,12 @@ Status TableCache::GetRangeTombstoneIterator(
  const FileDescriptor& fd = file_meta.fd;
  Status s;
  TableReader* t = fd.table_reader;
-  Cache::Handle* handle = nullptr;
+  TypedHandle* handle = nullptr;
  if (t == nullptr) {
    s = FindTable(options, file_options_, internal_comparator, file_meta,
                  &handle);
    if (s.ok()) {
-      t = GetTableReaderFromHandle(handle);
+      t = cache_.Value(handle);
    }
  }
  if (s.ok()) {
@@ -362,9 +330,9 @@ Status TableCache::GetRangeTombstoneIterator(
  }
  if (handle) {
    if (*out_iter) {
-      (*out_iter)->RegisterCleanup(&UnrefEntry, cache_, handle);
+      cache_.RegisterReleaseAsCleanup(handle, **out_iter);
    } else {
-      ReleaseHandle(handle);
+      cache_.Release(handle);
    }
  }
  return s;
@@ -411,16 +379,10 @@ bool TableCache::GetFromRowCache(const Slice& user_key, IterKey& row_cache_key,
  bool found = false;

  row_cache_key.TrimAppend(prefix_size, user_key.data(), user_key.size());
-  if (auto row_handle =
-          ioptions_.row_cache->Lookup(row_cache_key.GetUserKey())) {
+  RowCacheInterface row_cache{ioptions_.row_cache.get()};
+  if (auto row_handle = row_cache.Lookup(row_cache_key.GetUserKey())) {
    // Cleanable routine to release the cache entry
    Cleanable value_pinner;
-    auto release_cache_entry_func = [](void* cache_to_clean,
-                                       void* cache_handle) {
-      ((Cache*)cache_to_clean)->Release((Cache::Handle*)cache_handle);
-    };
-    auto found_row_cache_entry =
-        static_cast<const std::string*>(ioptions_.row_cache->Value(row_handle));
    // If it comes here value is located on the cache.
    // found_row_cache_entry points to the value on cache,
    // and value_pinner has cleanup procedure for the cached entry.
@@ -429,9 +391,8 @@ bool TableCache::GetFromRowCache(const Slice& user_key, IterKey& row_cache_key,
    // cleanup routine under value_pinner will be delegated to
    // get_context.pinnable_slice_. Cache entry is released when
    // get_context.pinnable_slice_ is reset.
-    value_pinner.RegisterCleanup(release_cache_entry_func,
-                                 ioptions_.row_cache.get(), row_handle);
-    replayGetContextLog(*found_row_cache_entry, user_key, get_context,
+    row_cache.RegisterReleaseAsCleanup(row_handle, value_pinner);
+    replayGetContextLog(*row_cache.Value(row_handle), user_key, get_context,
                        &value_pinner);
    RecordTick(ioptions_.stats, ROW_CACHE_HIT);
    found = true;
@@ -470,7 +431,7 @@ Status TableCache::Get(
 #endif  // ROCKSDB_LITE
  Status s;
  TableReader* t = fd.table_reader;
-  Cache::Handle* handle = nullptr;
+  TypedHandle* handle = nullptr;
  if (!done) {
    assert(s.ok());
    if (t == nullptr) {
@@ -481,7 +442,7 @@ Status TableCache::Get(
                    level, true /* prefetch_index_and_filter_in_cache */,
                    max_file_size_for_l0_meta_pin, file_meta.temperature);
      if (s.ok()) {
-        t = GetTableReaderFromHandle(handle);
+        t = cache_.Value(handle);
      }
    }
    SequenceNumber* max_covering_tombstone_seq =
@@ -517,18 +478,17 @@ Status TableCache::Get(
 #ifndef ROCKSDB_LITE
  // Put the replay log in row cache only if something was found.
  if (!done && s.ok() && row_cache_entry && !row_cache_entry->empty()) {
+    RowCacheInterface row_cache{ioptions_.row_cache.get()};
    size_t charge = row_cache_entry->capacity() + sizeof(std::string);
-    void* row_ptr = new std::string(std::move(*row_cache_entry));
+    auto row_ptr = new std::string(std::move(*row_cache_entry));
    // If row cache is full, it's OK to continue.
-    ioptions_.row_cache
-        ->Insert(row_cache_key.GetUserKey(), row_ptr, charge,
-                 &DeleteEntry<std::string>)
+    row_cache.Insert(row_cache_key.GetUserKey(), row_ptr, charge)
        .PermitUncheckedError();
  }
 #endif  // ROCKSDB_LITE

  if (handle != nullptr) {
-    ReleaseHandle(handle);
+    cache_.Release(handle);
  }
  return s;
 }
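For reference, the row-cache charge computed above is just the string's buffer capacity plus the string object itself; a trivial sketch:

```cpp
#include <cstddef>
#include <string>

// The row-cache charge used above: the cached object is a heap-allocated
// std::string, so usage is approximated as its buffer capacity plus the
// string object itself. (Allocator overhead is intentionally not counted.)
size_t RowCacheCharge(const std::string& row_cache_entry) {
  return row_cache_entry.capacity() + sizeof(std::string);
}

int main() {
  std::string replay_log(100, 'x');
  return RowCacheCharge(replay_log) > 100 ? 0 : 1;
}
```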
@@ -561,7 +521,7 @@ Status TableCache::MultiGetFilter(
    const FileMetaData& file_meta,
    const std::shared_ptr<const SliceTransform>& prefix_extractor,
    HistogramImpl* file_read_hist, int level,
-    MultiGetContext::Range* mget_range, Cache::Handle** table_handle) {
+    MultiGetContext::Range* mget_range, TypedHandle** table_handle) {
  auto& fd = file_meta.fd;
 #ifndef ROCKSDB_LITE
  IterKey row_cache_key;
@@ -577,7 +537,7 @@ Status TableCache::MultiGetFilter(
 #endif  // ROCKSDB_LITE
  Status s;
  TableReader* t = fd.table_reader;
-  Cache::Handle* handle = nullptr;
+  TypedHandle* handle = nullptr;
  MultiGetContext::Range tombstone_range(*mget_range, mget_range->begin(),
                                         mget_range->end());
  if (t == nullptr) {
@@ -588,7 +548,7 @@ Status TableCache::MultiGetFilter(
                  level, true /* prefetch_index_and_filter_in_cache */,
                  /*max_file_size_for_l0_meta_pin=*/0, file_meta.temperature);
    if (s.ok()) {
-      t = GetTableReaderFromHandle(handle);
+      t = cache_.Value(handle);
    }
    *table_handle = handle;
  }
@@ -602,7 +562,7 @@ Status TableCache::MultiGetFilter(
    UpdateRangeTombstoneSeqnums(options, t, tombstone_range);
  }
  if (mget_range->empty() && handle) {
-    ReleaseHandle(handle);
+    cache_.Release(handle);
    *table_handle = nullptr;
  }

@@ -623,16 +583,16 @@ Status TableCache::GetTableProperties(
    return Status::OK();
  }

-  Cache::Handle* table_handle = nullptr;
+  TypedHandle* table_handle = nullptr;
  Status s = FindTable(ReadOptions(), file_options, internal_comparator,
                       file_meta, &table_handle, prefix_extractor, no_io);
  if (!s.ok()) {
    return s;
  }
  assert(table_handle);
-  auto table = GetTableReaderFromHandle(table_handle);
+  auto table = cache_.Value(table_handle);
  *properties = table->GetTableProperties();
-  ReleaseHandle(table_handle);
+  cache_.Release(table_handle);
  return s;
 }

@@ -641,18 +601,18 @@ Status TableCache::ApproximateKeyAnchors(
    const FileMetaData& file_meta, std::vector<TableReader::Anchor>& anchors) {
  Status s;
  TableReader* t = file_meta.fd.table_reader;
-  Cache::Handle* handle = nullptr;
+  TypedHandle* handle = nullptr;
  if (t == nullptr) {
    s = FindTable(ro, file_options_, internal_comparator, file_meta, &handle);
    if (s.ok()) {
-      t = GetTableReaderFromHandle(handle);
+      t = cache_.Value(handle);
    }
  }
  if (s.ok() && t != nullptr) {
    s = t->ApproximateKeyAnchors(ro, anchors);
  }
  if (handle != nullptr) {
-    ReleaseHandle(handle);
+    cache_.Release(handle);
  }
  return s;
 }
@@ -668,29 +628,19 @@ size_t TableCache::GetMemoryUsageByTableReader(
    return table_reader->ApproximateMemoryUsage();
  }

-  Cache::Handle* table_handle = nullptr;
+  TypedHandle* table_handle = nullptr;
  Status s = FindTable(ReadOptions(), file_options, internal_comparator,
                       file_meta, &table_handle, prefix_extractor, true);
  if (!s.ok()) {
    return 0;
  }
  assert(table_handle);
-  auto table = GetTableReaderFromHandle(table_handle);
+  auto table = cache_.Value(table_handle);
  auto ret = table->ApproximateMemoryUsage();
-  ReleaseHandle(table_handle);
+  cache_.Release(table_handle);
  return ret;
 }

-bool TableCache::HasEntry(Cache* cache, uint64_t file_number) {
-  Cache::Handle* handle = cache->Lookup(GetSliceForFileNumber(&file_number));
-  if (handle) {
-    cache->Release(handle);
-    return true;
-  } else {
-    return false;
-  }
-}
-
 void TableCache::Evict(Cache* cache, uint64_t file_number) {
  cache->Erase(GetSliceForFileNumber(&file_number));
 }
@@ -701,7 +651,7 @@ uint64_t TableCache::ApproximateOffsetOf(
    const std::shared_ptr<const SliceTransform>& prefix_extractor) {
  uint64_t result = 0;
  TableReader* table_reader = file_meta.fd.table_reader;
-  Cache::Handle* table_handle = nullptr;
+  TypedHandle* table_handle = nullptr;
  if (table_reader == nullptr) {
    const bool for_compaction = (caller == TableReaderCaller::kCompaction);
    Status s =
@@ -709,7 +659,7 @@ uint64_t TableCache::ApproximateOffsetOf(
                  &table_handle, prefix_extractor, false /* no_io */,
                  !for_compaction /* record_read_stats */);
    if (s.ok()) {
-      table_reader = GetTableReaderFromHandle(table_handle);
+      table_reader = cache_.Value(table_handle);
    }
  }

@@ -717,7 +667,7 @@ uint64_t TableCache::ApproximateOffsetOf(
    result = table_reader->ApproximateOffsetOf(key, caller);
  }
  if (table_handle != nullptr) {
-    ReleaseHandle(table_handle);
+    cache_.Release(table_handle);
  }

  return result;
@@ -729,7 +679,7 @@ uint64_t TableCache::ApproximateSize(
    const std::shared_ptr<const SliceTransform>& prefix_extractor) {
  uint64_t result = 0;
  TableReader* table_reader = file_meta.fd.table_reader;
-  Cache::Handle* table_handle = nullptr;
+  TypedHandle* table_handle = nullptr;
  if (table_reader == nullptr) {
    const bool for_compaction = (caller == TableReaderCaller::kCompaction);
    Status s =
@@ -737,7 +687,7 @@ uint64_t TableCache::ApproximateSize(
                  &table_handle, prefix_extractor, false /* no_io */,
                  !for_compaction /* record_read_stats */);
    if (s.ok()) {
-      table_reader = GetTableReaderFromHandle(table_handle);
+      table_reader = cache_.Value(table_handle);
    }
  }

@@ -745,7 +695,7 @@ uint64_t TableCache::ApproximateSize(
    result = table_reader->ApproximateSize(start, end, caller);
  }
  if (table_handle != nullptr) {
-    ReleaseHandle(table_handle);
+    cache_.Release(table_handle);
  }

  return result;
@@ -14,6 +14,7 @@
 #include <string>
 #include <vector>

+#include "cache/typed_cache.h"
 #include "db/dbformat.h"
 #include "db/range_del_aggregator.h"
 #include "options/cf_options.h"
@@ -56,6 +57,16 @@ class TableCache {
             const std::string& db_session_id);
  ~TableCache();

+  // Cache interface for table cache
+  using CacheInterface =
+      BasicTypedCacheInterface<TableReader, CacheEntryRole::kMisc>;
+  using TypedHandle = CacheInterface::TypedHandle;
+
+  // Cache interface for row cache
+  using RowCacheInterface =
+      BasicTypedCacheInterface<std::string, CacheEntryRole::kMisc>;
+  using RowHandle = RowCacheInterface::TypedHandle;
+
  // Return an iterator for the specified file number (the corresponding
  // file length must be exactly "file_size" bytes). If "table_reader_ptr"
  // is non-nullptr, also sets "*table_reader_ptr" to point to the Table object
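The two aliases above give TableCache a typed view of its backing `Cache*`: call sites receive `TypedHandle*` and a typed `Value()` instead of casting `void*` everywhere. A hedged sketch of what such a wrapper might look like (the real one in cache/typed_cache.h is richer):

```cpp
// Illustrative stand-ins only; the real interfaces live in
// cache/typed_cache.h and include/rocksdb/cache.h.
struct Cache {
  struct Handle {};
};
enum class CacheEntryRole { kMisc };

// A thin typing layer over an untyped cache, in the spirit of
// BasicTypedCacheInterface<TableReader, CacheEntryRole::kMisc>.
template <typename TValue, CacheEntryRole kRole>
class TypedCacheSketch {
 public:
  struct TypedHandle : public Cache::Handle {
    TValue* obj;
  };
  TValue* Value(TypedHandle* handle) const { return handle->obj; }
};

struct TableReader {};

int main() {
  TypedCacheSketch<TableReader, CacheEntryRole::kMisc> cache;
  TableReader reader;
  TypedCacheSketch<TableReader, CacheEntryRole::kMisc>::TypedHandle h{{}, &reader};
  // No reinterpret_cast at the call site: the interface owns the typing.
  TableReader* t = cache.Value(&h);
  return t == &reader ? 0 : 1;
}
```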
@@ -124,7 +135,7 @@ class TableCache {
      const FileMetaData& file_meta,
      const std::shared_ptr<const SliceTransform>& prefix_extractor,
      HistogramImpl* file_read_hist, int level,
-      MultiGetContext::Range* mget_range, Cache::Handle** table_handle);
+      MultiGetContext::Range* mget_range, TypedHandle** table_handle);

  // If a seek to internal key "k" in specified file finds an entry,
  // call get_context->SaveValue() repeatedly until
@@ -142,25 +153,18 @@ class TableCache {
      const std::shared_ptr<const SliceTransform>& prefix_extractor = nullptr,
      HistogramImpl* file_read_hist = nullptr, bool skip_filters = false,
      bool skip_range_deletions = false, int level = -1,
-      Cache::Handle* table_handle = nullptr);
+      TypedHandle* table_handle = nullptr);

  // Evict any entry for the specified file number
  static void Evict(Cache* cache, uint64_t file_number);

-  // Query whether specified file number is currently in cache
-  static bool HasEntry(Cache* cache, uint64_t file_number);
-
-  // Clean table handle and erase it from the table cache
-  // Used in DB close, or the file is not live anymore.
-  void EraseHandle(const FileDescriptor& fd, Cache::Handle* handle);
-
  // Find table reader
  // @param skip_filters Disables loading/accessing the filter block
  // @param level == -1 means not specified
  Status FindTable(
      const ReadOptions& ro, const FileOptions& toptions,
      const InternalKeyComparator& internal_comparator,
-      const FileMetaData& file_meta, Cache::Handle**,
+      const FileMetaData& file_meta, TypedHandle**,
      const std::shared_ptr<const SliceTransform>& prefix_extractor = nullptr,
      const bool no_io = false, bool record_read_stats = true,
      HistogramImpl* file_read_hist = nullptr, bool skip_filters = false,
@@ -168,9 +172,6 @@ class TableCache {
      size_t max_file_size_for_l0_meta_pin = 0,
      Temperature file_temperature = Temperature::kUnknown);

-  // Get TableReader from a cache handle.
-  TableReader* GetTableReaderFromHandle(Cache::Handle* handle);
-
  // Get the table properties of a given table.
  // @no_io: indicates if we should load table to the cache if it is not present
  // in table cache yet.
@@ -212,10 +213,7 @@ class TableCache {
      const InternalKeyComparator& internal_comparator,
      const std::shared_ptr<const SliceTransform>& prefix_extractor = nullptr);

-  // Release the handle from a cache
-  void ReleaseHandle(Cache::Handle* handle);
-
-  Cache* get_cache() const { return cache_; }
+  CacheInterface& get_cache() { return cache_; }

  // Capacity of the backing Cache that indicates infinite TableCache capacity.
  // For example when max_open_files is -1 we set the backing Cache to this.
@@ -224,7 +222,7 @@ class TableCache {
  // The tables opened with this TableCache will be immortal, i.e., their
  // lifetime is as long as that of the DB.
  void SetTablesAreImmortal() {
-    if (cache_->GetCapacity() >= kInfiniteCapacity) {
+    if (cache_.get()->GetCapacity() >= kInfiniteCapacity) {
      immortal_tables_ = true;
    }
  }
@@ -263,7 +261,7 @@ class TableCache {

  const ImmutableOptions& ioptions_;
  const FileOptions& file_options_;
-  Cache* const cache_;
+  CacheInterface cache_;
  std::string row_cache_id_;
  bool immortal_tables_;
  BlockCacheTracer* const block_cache_tracer_;
@@ -19,15 +19,14 @@ DEFINE_SYNC_AND_ASYNC(Status, TableCache::MultiGet)
    const FileMetaData& file_meta, const MultiGetContext::Range* mget_range,
    const std::shared_ptr<const SliceTransform>& prefix_extractor,
    HistogramImpl* file_read_hist, bool skip_filters, bool skip_range_deletions,
-    int level, Cache::Handle* table_handle) {
+    int level, TypedHandle* handle) {
  auto& fd = file_meta.fd;
  Status s;
  TableReader* t = fd.table_reader;
-  Cache::Handle* handle = table_handle;
  MultiGetRange table_range(*mget_range, mget_range->begin(),
                            mget_range->end());
  if (handle != nullptr && t == nullptr) {
-    t = GetTableReaderFromHandle(handle);
+    t = cache_.Value(handle);
  }
 #ifndef ROCKSDB_LITE
  autovector<std::string, MultiGetContext::MAX_BATCH_SIZE> row_cache_entries;
@@ -75,7 +74,7 @@ DEFINE_SYNC_AND_ASYNC(Status, TableCache::MultiGet)
                  0 /*max_file_size_for_l0_meta_pin*/, file_meta.temperature);
    TEST_SYNC_POINT_CALLBACK("TableCache::MultiGet:FindTable", &s);
    if (s.ok()) {
-      t = GetTableReaderFromHandle(handle);
+      t = cache_.Value(handle);
      assert(t);
    }
  }
@@ -100,6 +99,7 @@ DEFINE_SYNC_AND_ASYNC(Status, TableCache::MultiGet)
 #ifndef ROCKSDB_LITE
  if (lookup_row_cache) {
    size_t row_idx = 0;
+    RowCacheInterface row_cache{ioptions_.row_cache.get()};

    for (auto miter = table_range.begin(); miter != table_range.end();
         ++miter) {
@@ -115,11 +115,9 @@ DEFINE_SYNC_AND_ASYNC(Status, TableCache::MultiGet)
      // Put the replay log in row cache only if something was found.
      if (s.ok() && !row_cache_entry.empty()) {
        size_t charge = row_cache_entry.capacity() + sizeof(std::string);
-        void* row_ptr = new std::string(std::move(row_cache_entry));
+        auto row_ptr = new std::string(std::move(row_cache_entry));
        // If row cache is full, it's OK.
-        ioptions_.row_cache
-            ->Insert(row_cache_key.GetUserKey(), row_ptr, charge,
-                     &DeleteEntry<std::string>)
+        row_cache.Insert(row_cache_key.GetUserKey(), row_ptr, charge)
            .PermitUncheckedError();
      }
    }
@@ -127,7 +125,7 @@ DEFINE_SYNC_AND_ASYNC(Status, TableCache::MultiGet)
 #endif  // ROCKSDB_LITE

  if (handle != nullptr) {
-    ReleaseHandle(handle);
+    cache_.Release(handle);
  }
  CO_RETURN s;
 }
@@ -294,7 +294,9 @@ class VersionBuilder::Rep {
    if (f->refs <= 0) {
      if (f->table_reader_handle) {
        assert(table_cache_ != nullptr);
-        table_cache_->ReleaseHandle(f->table_reader_handle);
+        // NOTE: have to release in raw cache interface to avoid using a
+        // TypedHandle for FileMetaData::table_reader_handle
+        table_cache_->get_cache().get()->Release(f->table_reader_handle);
        f->table_reader_handle = nullptr;
      }

@@ -1258,7 +1260,8 @@ class VersionBuilder::Rep {
                         size_t max_file_size_for_l0_meta_pin) {
    assert(table_cache_ != nullptr);

-    size_t table_cache_capacity = table_cache_->get_cache()->GetCapacity();
+    size_t table_cache_capacity =
+        table_cache_->get_cache().get()->GetCapacity();
    bool always_load = (table_cache_capacity == TableCache::kInfiniteCapacity);
    size_t max_load = std::numeric_limits<size_t>::max();

@@ -1280,7 +1283,7 @@ class VersionBuilder::Rep {
      load_limit = table_cache_capacity / 4;
    }

-    size_t table_cache_usage = table_cache_->get_cache()->GetUsage();
+    size_t table_cache_usage = table_cache_->get_cache().get()->GetUsage();
    if (table_cache_usage >= load_limit) {
      // TODO (yanqin) find a suitable status code.
      return Status::OK();
@@ -1319,18 +1322,18 @@ class VersionBuilder::Rep {

          auto* file_meta = files_meta[file_idx].first;
          int level = files_meta[file_idx].second;
+          TableCache::TypedHandle* handle = nullptr;
          statuses[file_idx] = table_cache_->FindTable(
              ReadOptions(), file_options_,
-              *(base_vstorage_->InternalComparator()), *file_meta,
-              &file_meta->table_reader_handle, prefix_extractor, false /*no_io */,
-              true /* record_read_stats */,
+              *(base_vstorage_->InternalComparator()), *file_meta, &handle,
+              prefix_extractor, false /*no_io */, true /* record_read_stats */,
              internal_stats->GetFileReadHist(level), false, level,
              prefetch_index_and_filter_in_cache, max_file_size_for_l0_meta_pin,
              file_meta->temperature);
-          if (file_meta->table_reader_handle != nullptr) {
+          if (handle != nullptr) {
+            file_meta->table_reader_handle = handle;
            // Load table_reader
-            file_meta->fd.table_reader = table_cache_->GetTableReaderFromHandle(
-                file_meta->table_reader_handle);
+            file_meta->fd.table_reader = table_cache_->get_cache().Value(handle);
          }
        }
      });
@@ -2511,7 +2511,7 @@ void Version::MultiGet(const ReadOptions& read_options, MultiGetRange* range,
  std::vector<folly::coro::Task<Status>> mget_tasks;
  while (f != nullptr) {
    MultiGetRange file_range = fp.CurrentFileRange();
-    Cache::Handle* table_handle = nullptr;
+    TableCache::TypedHandle* table_handle = nullptr;
    bool skip_filters =
        IsFilterSkipped(static_cast<int>(fp.GetHitFileLevel()),
                        fp.IsHitFileLastInLevel());
@@ -2693,7 +2693,7 @@ Status Version::ProcessBatch(
  }
  while (f) {
    MultiGetRange file_range = fp.CurrentFileRange();
-    Cache::Handle* table_handle = nullptr;
+    TableCache::TypedHandle* table_handle = nullptr;
    bool skip_filters = IsFilterSkipped(static_cast<int>(fp.GetHitFileLevel()),
                                        fp.IsHitFileLastInLevel());
    bool skip_range_deletions = false;
@@ -6879,16 +6879,16 @@ Status VersionSet::VerifyFileMetadata(ColumnFamilyData* cfd,

    InternalStats* internal_stats = cfd->internal_stats();

+    TableCache::TypedHandle* handle = nullptr;
    FileMetaData meta_copy = meta;
    status = table_cache->FindTable(
-        ReadOptions(), file_opts, *icmp, meta_copy,
-        &(meta_copy.table_reader_handle), pe,
+        ReadOptions(), file_opts, *icmp, meta_copy, &handle, pe,
        /*no_io=*/false, /*record_read_stats=*/true,
        internal_stats->GetFileReadHist(level), false, level,
        /*prefetch_index_and_filter_in_cache*/ false, max_sz_for_l0_meta_pin,
        meta_copy.temperature);
-    if (meta_copy.table_reader_handle) {
-      table_cache->ReleaseHandle(meta_copy.table_reader_handle);
+    if (handle) {
+      table_cache->get_cache().Release(handle);
    }
  }
  return status;
@@ -1022,7 +1022,7 @@ class Version {
      int hit_file_level, bool skip_filters, bool skip_range_deletions,
      FdWithKeyRange* f,
      std::unordered_map<uint64_t, BlobReadContexts>& blob_ctxs,
-      Cache::Handle* table_handle, uint64_t& num_filter_read,
+      TableCache::TypedHandle* table_handle, uint64_t& num_filter_read,
      uint64_t& num_index_read, uint64_t& num_sst_read);

 #ifdef USE_COROUTINES
@@ -1431,7 +1431,7 @@ class VersionSet {
  void AddObsoleteBlobFile(uint64_t blob_file_number, std::string path) {
    assert(table_cache_);

-    table_cache_->Erase(GetSlice(&blob_file_number));
+    table_cache_->Erase(GetSliceForKey(&blob_file_number));

    obsolete_blob_files_.emplace_back(blob_file_number, std::move(path));
  }
@@ -16,7 +16,7 @@ DEFINE_SYNC_AND_ASYNC(Status, Version::MultiGetFromSST)
 (const ReadOptions& read_options, MultiGetRange file_range, int hit_file_level,
  bool skip_filters, bool skip_range_deletions, FdWithKeyRange* f,
  std::unordered_map<uint64_t, BlobReadContexts>& blob_ctxs,
- Cache::Handle* table_handle, uint64_t& num_filter_read,
+ TableCache::TypedHandle* table_handle, uint64_t& num_filter_read,
  uint64_t& num_index_read, uint64_t& num_sst_read) {
  bool timer_enabled = GetPerfLevel() >= PerfLevel::kEnableTimeExceptForMutex &&
                       get_perf_context()->per_level_perf_context_enabled;
@@ -7,18 +7,7 @@
 // Use of this source code is governed by a BSD-style license that can be
 // found in the LICENSE file. See the AUTHORS file for names of contributors.
 //
-// A Cache is an interface that maps keys to values. It has internal
-// synchronization and may be safely accessed concurrently from
-// multiple threads. It may automatically evict entries to make room
-// for new entries. Values have a specified charge against the cache
-// capacity. For example, a cache where the values are variable
-// length strings, may use the length of the string as the charge for
-// the string.
-//
-// A builtin cache implementation with a least-recently-used eviction
-// policy is provided. Clients may use their own implementations if
-// they want something more sophisticated (like scan-resistance, a
-// custom eviction policy, variable cache sizing, etc.)
+// Various APIs for creating and customizing read caches in RocksDB.

 #pragma once
@@ -363,11 +352,33 @@ extern std::shared_ptr<Cache> NewClockCache(
     CacheMetadataChargePolicy metadata_charge_policy =
         kDefaultCacheMetadataChargePolicy);

+// A Cache maps keys to objects resident in memory, tracks reference counts
+// on those key-object entries, and is able to remove unreferenced entries
+// whenever it wants. All operations are fully thread safe except as noted.
+// Inserted entries have a specified "charge" which is some quantity in
+// unspecified units, typically bytes of memory used. A Cache will typically
+// have a finite capacity in units of charge, and evict entries as needed
+// to stay at or below that capacity.
+//
+// NOTE: This API is for expert use only and is more intended for providing
+// custom implementations than for calling into. It is subject to change
+// as RocksDB evolves, especially the RocksDB block cache.
+//
+// INTERNAL: See typed_cache.h for convenient wrappers on top of this API.
 class Cache {
- public:  // opaque types
+ public:  // types hidden from API client
   // Opaque handle to an entry stored in the cache.
   struct Handle {};

+ public:  // types hidden from Cache implementation
+  // Pointer to cached object of unspecified type. (This type alias is
+  // provided for clarity, not really for type checking.)
+  using ObjectPtr = void*;
+
+  // Opaque object providing context (settings, etc.) to create objects
+  // for primary cache from saved (serialized) secondary cache entries.
+  struct CreateContext {};
+
  public:  // type defs
   // Depending on implementation, cache entries with higher priority levels
   // could be less likely to get evicted than entries with lower priority
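For orientation on how this expert-level API is normally reached: typical code still obtains a `Cache` from a factory function rather than implementing the class. A minimal sketch (the 64 MiB capacity is an arbitrary example value):

```cpp
#include <memory>

#include "rocksdb/cache.h"

// Charge is typically bytes, so this caps cached data at roughly 64 MiB.
std::shared_ptr<ROCKSDB_NAMESPACE::Cache> block_cache =
    ROCKSDB_NAMESPACE::NewLRUCache(64 << 20);
```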
@@ -400,48 +411,84 @@ class Cache {
   // so anything required for these operations should be contained in the
   // object itself.
   //
-  // The SizeCallback takes a void* pointer to the object and returns the size
+  // The SizeCallback takes a pointer to the object and returns the size
   // of the persistable data. It can be used by the secondary cache to allocate
   // memory if needed.
   //
   // RocksDB callbacks are NOT exception-safe. A callback completing with an
   // exception can lead to undefined behavior in RocksDB, including data loss,
   // unreported corruption, deadlocks, and more.
-  using SizeCallback = size_t (*)(void* obj);
+  using SizeCallback = size_t (*)(ObjectPtr obj);

-  // The SaveToCallback takes a void* object pointer and saves the persistable
+  // The SaveToCallback takes an object pointer and saves the persistable
   // data into a buffer. The secondary cache may decide to not store it in a
   // contiguous buffer, in which case this callback will be called multiple
   // times with increasing offset
-  using SaveToCallback = Status (*)(void* from_obj, size_t from_offset,
-                                    size_t length, void* out);
+  using SaveToCallback = Status (*)(ObjectPtr from_obj, size_t from_offset,
+                                    size_t length, char* out_buf);

-  // A function pointer type for custom destruction of an entry's
-  // value. The Cache is responsible for copying and reclaiming space
-  // for the key, but values are managed by the caller.
-  using DeleterFn = void (*)(const Slice& key, void* value);
+  // A function pointer type for destruction of a cache object. This will
+  // typically call the destructor for the appropriate type of the object.
+  // The Cache is responsible for copying and reclaiming space for the key,
+  // but objects are managed in part using this callback. Generally a DeleterFn
+  // can be nullptr if the ObjectPtr does not need destruction (e.g. nullptr or
+  // pointer into static data).
+  using DeleterFn = void (*)(ObjectPtr obj, MemoryAllocator* allocator);

+  // The CreateCallback takes in a buffer from the NVM cache and constructs
+  // an object using it. The callback doesn't have ownership of the buffer and
+  // should copy the contents into its own buffer. The CreateContext* is
+  // provided by Lookup and may be used to follow DB- or CF-specific settings.
+  // In case of some error, non-OK is returned and the caller should ignore
+  // any result in out_obj. (The implementation must clean up after itself.)
+  using CreateCallback = Status (*)(const Slice& data, CreateContext* context,
+                                    MemoryAllocator* allocator,
+                                    ObjectPtr* out_obj, size_t* out_charge);
+
   // A struct with pointers to helper functions for spilling items from the
   // cache into the secondary cache. May be extended in the future. An
   // instance of this struct is expected to outlive the cache.
   struct CacheItemHelper {
+    // Function for deleting an object on its removal from the Cache.
+    // nullptr is only for entries that require no destruction, such as
+    // "placeholder" cache entries with nullptr object.
+    DeleterFn del_cb;  // (<- Most performance critical)
+    // Next three are used for persisting values as described above.
+    // If any is nullptr, then all three should be nullptr and persisting the
+    // entry to/from secondary cache is not supported.
     SizeCallback size_cb;
     SaveToCallback saveto_cb;
-    DeleterFn del_cb;
+    CreateCallback create_cb;
+    // Classification of the entry for monitoring purposes in block cache.
+    CacheEntryRole role;

-    CacheItemHelper() : size_cb(nullptr), saveto_cb(nullptr), del_cb(nullptr) {}
-    CacheItemHelper(SizeCallback _size_cb, SaveToCallback _saveto_cb,
-                    DeleterFn _del_cb)
-        : size_cb(_size_cb), saveto_cb(_saveto_cb), del_cb(_del_cb) {}
+    constexpr CacheItemHelper()
+        : del_cb(nullptr),
+          size_cb(nullptr),
+          saveto_cb(nullptr),
+          create_cb(nullptr),
+          role(CacheEntryRole::kMisc) {}
+
+    explicit constexpr CacheItemHelper(CacheEntryRole _role,
+                                       DeleterFn _del_cb = nullptr,
+                                       SizeCallback _size_cb = nullptr,
+                                       SaveToCallback _saveto_cb = nullptr,
+                                       CreateCallback _create_cb = nullptr)
+        : del_cb(_del_cb),
+          size_cb(_size_cb),
+          saveto_cb(_saveto_cb),
+          create_cb(_create_cb),
+          role(_role) {
+      // Either all three secondary cache callbacks are non-nullptr or
+      // all three are nullptr
+      assert((size_cb != nullptr) == (saveto_cb != nullptr));
+      assert((size_cb != nullptr) == (create_cb != nullptr));
+    }
+    inline bool IsSecondaryCacheCompatible() const {
+      return size_cb != nullptr;
+    }
   };

-  // The CreateCallback is passed by the block cache user to Lookup(). It
-  // takes in a buffer from the NVM cache and constructs an object using
-  // it. The callback doesn't have ownership of the buffer and should
-  // copy the contents into its own buffer.
-  using CreateCallback = std::function<Status(const void* buf, size_t size,
-                                              void** out_obj, size_t* charge)>;
-
  public:  // ctor/dtor/create
   Cache(std::shared_ptr<MemoryAllocator> allocator = nullptr)
       : memory_allocator_(std::move(allocator)) {}
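To make the consolidated helper model concrete, here is a minimal sketch of a static `CacheItemHelper` for an owned object type. `MyObject`, `DeleteMyObject`, and `kMyObjectHelper` are hypothetical names for illustration only, assumed to live inside `ROCKSDB_NAMESPACE`:

```cpp
#include <string>

struct MyObject {
  std::string payload;
};

// The deleter restores the type behind the type-erased Cache::ObjectPtr
// (a void*). The MemoryAllocator* is unused here because MyObject manages
// its own memory with plain new/delete.
void DeleteMyObject(Cache::ObjectPtr obj, MemoryAllocator* /*allocator*/) {
  delete static_cast<MyObject*>(obj);
}

// Role kMisc with only a deleter: entries using this helper are destructible
// in the primary cache but not persistable to a secondary cache.
constexpr Cache::CacheItemHelper kMyObjectHelper{CacheEntryRole::kMisc,
                                                 &DeleteMyObject};
```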
@@ -471,8 +518,6 @@ class Cache {
   // The type of the Cache
   virtual const char* Name() const = 0;

-  // EXPERIMENTAL SecondaryCache support:
-  // Some APIs here are experimental and might change in the future.
   // The Insert and Lookup APIs below are intended to allow cached objects
   // to be demoted/promoted between the primary block cache and a secondary
   // cache. The secondary cache could be a non-volatile cache, and will
@@ -484,46 +529,27 @@ class Cache {
   // multiple DBs share the same cache and the set of DBs can change
   // over time.

-  // Insert a mapping from key->value into the volatile cache only
-  // and assign it with the specified charge against the total cache capacity.
-  // If strict_capacity_limit is true and cache reaches its full capacity,
-  // return Status::MemoryLimit.
-  //
-  // If handle is not nullptr, returns a handle that corresponds to the
-  // mapping. The caller must call this->Release(handle) when the returned
-  // mapping is no longer needed. In case of error caller is responsible to
-  // cleanup the value (i.e. calling "deleter").
-  //
-  // If handle is nullptr, it is as if Release is called immediately after
-  // insert. In case of error value will be cleanup.
-  //
-  // When the inserted entry is no longer needed, the key and
-  // value will be passed to "deleter" which must delete the value.
-  // (The Cache is responsible for copying and reclaiming space for
-  // the key.)
-  virtual Status Insert(const Slice& key, void* value, size_t charge,
-                        DeleterFn deleter, Handle** handle = nullptr,
-                        Priority priority = Priority::LOW) = 0;
-
-  // EXPERIMENTAL
-  // Insert a mapping from key->value into the cache and assign it
+  // Insert a mapping from key->object into the cache and assign it
   // the specified charge against the total cache capacity. If
   // strict_capacity_limit is true and cache reaches its full capacity,
-  // return Status::MemoryLimit. `value` must be non-nullptr for this
-  // Insert() because Value() == nullptr is reserved for indicating failure
-  // with secondary-cache-compatible mappings.
+  // return Status::MemoryLimit. `obj` must be non-nullptr if compatible
+  // with secondary cache (helper->size_cb != nullptr), because Value() ==
+  // nullptr is reserved for indicating some secondary cache failure cases.
+  // On success, returns OK and takes ownership of `obj`, eventually deleting
+  // it with helper->del_cb. On non-OK return, the caller maintains ownership
+  // of `obj` so will often need to delete it in such cases.
   //
   // The helper argument is saved by the cache and will be used when the
-  // inserted object is evicted or promoted to the secondary cache. It,
-  // therefore, must outlive the cache.
+  // inserted object is evicted or considered for promotion to the secondary
+  // cache. Promotion to secondary cache is only enabled if helper->size_cb
+  // != nullptr. The helper must outlive the cache. Callers may use
+  // &kNoopCacheItemHelper as a trivial helper (no deleter for the object,
+  // no secondary cache). `helper` must not be nullptr (efficiency).
   //
-  // If handle is not nullptr, returns a handle that corresponds to the
-  // mapping. The caller must call this->Release(handle) when the returned
-  // mapping is no longer needed. In case of error caller is responsible to
-  // cleanup the value (i.e. calling "deleter").
-  //
-  // If handle is nullptr, it is as if Release is called immediately after
-  // insert. In case of error value will be cleanup.
+  // If `handle` is not nullptr and return status is OK, `handle` is set
+  // to a Handle* for the entry. The caller must call this->Release(handle)
+  // when the returned entry is no longer needed. If `handle` is nullptr, it is
+  // as if Release is called immediately after Insert.
   //
   // Regardless of whether the item was inserted into the cache,
   // it will attempt to insert it into the secondary cache if one is
@@ -532,42 +558,23 @@ class Cache {
   // the item is only inserted into the primary cache. It may
   // defer the insertion to the secondary cache as it sees fit.
   //
-  // When the inserted entry is no longer needed, the key and
-  // value will be passed to "deleter".
-  virtual Status Insert(const Slice& key, void* value,
+  // When the inserted entry is no longer needed, it will be destroyed using
+  // helper->del_cb (if non-nullptr).
+  virtual Status Insert(const Slice& key, ObjectPtr obj,
                         const CacheItemHelper* helper, size_t charge,
                         Handle** handle = nullptr,
-                        Priority priority = Priority::LOW) {
-    if (!helper) {
-      return Status::InvalidArgument();
-    }
-    return Insert(key, value, charge, helper->del_cb, handle, priority);
-  }
+                        Priority priority = Priority::LOW) = 0;

-  // If the cache has no mapping for "key", returns nullptr.
-  //
-  // Else return a handle that corresponds to the mapping. The caller
-  // must call this->Release(handle) when the returned mapping is no
-  // longer needed.
-  // If stats is not nullptr, relative tickers could be used inside the
-  // function.
-  virtual Handle* Lookup(const Slice& key, Statistics* stats = nullptr) = 0;
-
-  // EXPERIMENTAL
-  // Lookup the key in the primary and secondary caches (if one is configured).
-  // The create_cb callback function object will be used to contruct the
-  // cached object.
-  // If none of the caches have the mapping for the key, returns nullptr.
-  // Else, returns a handle that corresponds to the mapping.
-  //
-  // This call may promote the object from the secondary cache (if one is
-  // configured, and has the given key) to the primary cache.
-  //
-  // The helper argument should be provided if the caller wants the lookup
-  // to include the secondary cache (if one is configured) and the object,
-  // if it exists, to be promoted to the primary cache. The helper may be
-  // saved and used later when the object is evicted. Therefore, it must
-  // outlive the cache.
+  // Lookup the key, returning nullptr if not found. If found, returns
+  // a handle to the mapping that must eventually be passed to Release().
+  //
+  // If a non-nullptr helper argument is provided with a non-nullptr
+  // create_cb, and a secondary cache is configured, then the secondary
+  // cache is also queried if lookup in the primary cache fails. If found
+  // in secondary cache, the provided create_cb and create_context are
+  // used to promote the entry to an object in the primary cache.
+  // In that case, the helper may be saved and used later when the object
+  // is evicted, so as usual, the pointed-to helper must outlive the cache.
   //
   // ======================== Async Lookup (wait=false) ======================
   // When wait=false, the handle returned might be in any of three states:
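The ownership contract is the part most easily gotten wrong: on a non-OK status the cache did not take the object. A sketch reusing the hypothetical `MyObject`/`kMyObjectHelper` names from above:

```cpp
// Sketch: insert with explicit cleanup on failure.
auto* obj = new MyObject{"some payload"};
size_t charge = sizeof(MyObject) + obj->payload.size();
Status s = block_cache->Insert("my_key", obj, &kMyObjectHelper, charge);
if (!s.ok()) {
  delete obj;  // caller still owns obj on non-OK return
}
```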
@@ -576,8 +583,8 @@ class Cache {
   // * Pending, not ready (IsReady() == false) - secondary cache is still
   // working to retrieve the value. Might become ready any time.
   // * Pending, ready (IsReady() == true) - secondary cache has the value
-  // but it has not been loaded into primary cache. Call to Wait()/WaitAll()
-  // will not block.
+  // but it has not been loaded as an object into primary cache. Call to
+  // Wait()/WaitAll() will not block.
   //
   // IMPORTANT: Pending handles are not thread-safe, and only these functions
   // are allowed on them: Value(), IsReady(), Wait(), WaitAll(). Even Release()
@@ -594,11 +601,15 @@ class Cache {
   // Pending+ready state from the Failed state is to Wait() on it. A cache
   // entry not compatible with secondary cache can also have Value()==nullptr
   // like the Failed state, but this is not generally a concern.
-  virtual Handle* Lookup(const Slice& key, const CacheItemHelper* /*helper_cb*/,
-                         const CreateCallback& /*create_cb*/,
-                         Priority /*priority*/, bool /*wait*/,
-                         Statistics* stats = nullptr) {
-    return Lookup(key, stats);
+  virtual Handle* Lookup(const Slice& key,
+                         const CacheItemHelper* helper = nullptr,
+                         CreateContext* create_context = nullptr,
+                         Priority priority = Priority::LOW, bool wait = true,
+                         Statistics* stats = nullptr) = 0;
+
+  // Convenience wrapper when secondary cache not supported
+  inline Handle* BasicLookup(const Slice& key, Statistics* stats) {
+    return Lookup(key, nullptr, nullptr, Priority::LOW, true, stats);
   }

   // Increments the reference count for the handle if it refers to an entry in
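With the secondary cache parameters defaulted, simple callers stay terse while secondary-cache-aware callers pass a helper and context. A sketch under the same hypothetical names (`kParsableHelper` and `my_create_context` in the comment are likewise hypothetical):

```cpp
// Simple lookup, no secondary cache involvement:
Cache::Handle* h = block_cache->BasicLookup("my_key", /*stats=*/nullptr);
if (h != nullptr) {
  auto* obj = static_cast<MyObject*>(block_cache->Value(h));
  // ... use obj while holding the handle ...
  block_cache->Release(h);
}

// A secondary-cache-compatible lookup would instead pass a helper whose
// create_cb is non-nullptr, plus a CreateContext:
//   block_cache->Lookup("my_key", &kParsableHelper, &my_create_context,
//                       Cache::Priority::LOW, /*wait=*/true);
```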
@@ -620,11 +631,12 @@ class Cache {
   // REQUIRES: handle must have been returned by a method on *this.
   virtual bool Release(Handle* handle, bool erase_if_last_ref = false) = 0;

-  // Return the value encapsulated in a handle returned by a
-  // successful Lookup().
+  // Return the object associated with a handle returned by a successful
+  // Lookup(). For historical reasons, this is also known as the "value"
+  // associated with the key.
   // REQUIRES: handle must not have been released yet.
   // REQUIRES: handle must have been returned by a method on *this.
-  virtual void* Value(Handle* handle) = 0;
+  virtual ObjectPtr Value(Handle* handle) = 0;

   // If the cache contains the entry for the key, erase it. Note that the
   // underlying entry will be kept around until all existing handles
@@ -675,11 +687,8 @@ class Cache {
   // Returns the charge for the specific entry in the cache.
   virtual size_t GetCharge(Handle* handle) const = 0;

-  // Returns the deleter for the specified entry. This might seem useless
-  // as the Cache itself is responsible for calling the deleter, but
-  // the deleter can essentially verify that a cache entry is of an
-  // expected type from an expected code source.
-  virtual DeleterFn GetDeleter(Handle* handle) const = 0;
+  // Returns the helper for the specified entry.
+  virtual const CacheItemHelper* GetCacheItemHelper(Handle* handle) const = 0;

   // Call this on shutdown if you want to speed it up. Cache will disown
   // any underlying data and will not free it on delete. This call will leak
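The type-verification trick that motivated `GetDeleter()` carries over: helpers are typically static singletons, so comparing helper identity confirms what kind of object a handle holds before downcasting. A sketch with the hypothetical helper from above:

```cpp
Cache::Handle* h = block_cache->BasicLookup("my_key", /*stats=*/nullptr);
if (h != nullptr) {
  if (block_cache->GetCacheItemHelper(h) == &kMyObjectHelper) {
    // Same helper => safe to treat the cached object as MyObject.
    auto* obj = static_cast<MyObject*>(block_cache->Value(h));
    // ... use obj ...
  }
  block_cache->Release(h);
}
```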
@@ -705,19 +714,10 @@ class Cache {
   // entries is iterated over if other threads are operating on the Cache
   // also.
   virtual void ApplyToAllEntries(
-      const std::function<void(const Slice& key, void* value, size_t charge,
-                               DeleterFn deleter)>& callback,
+      const std::function<void(const Slice& key, ObjectPtr obj, size_t charge,
+                               const CacheItemHelper* helper)>& callback,
       const ApplyToAllEntriesOptions& opts) = 0;

-  // DEPRECATED version of above. (Default implementation uses above.)
-  virtual void ApplyToAllCacheEntries(void (*callback)(void* value,
-                                                       size_t charge),
-                                      bool /*thread_safe*/) {
-    ApplyToAllEntries([callback](const Slice&, void* value, size_t charge,
-                                 DeleterFn) { callback(value, charge); },
-                      {});
-  }
-
   // Remove all entries.
   // Prerequisite: no entry is referenced.
   virtual void EraseUnRefEntries() = 0;
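Since every entry now carries its helper, iteration callbacks get the entry's role for free, which the old deleter-classification hack could only approximate. A sketch that tallies charge by `CacheEntryRole`:

```cpp
#include <map>

std::map<CacheEntryRole, size_t> usage_by_role;
block_cache->ApplyToAllEntries(
    [&usage_by_role](const Slice& /*key*/, Cache::ObjectPtr /*obj*/,
                     size_t charge, const Cache::CacheItemHelper* helper) {
      // Entries inserted through the new API always carry a helper.
      usage_by_role[helper->role] += charge;
    },
    /*opts=*/{});
```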
@@ -734,6 +734,8 @@ class Cache {
   MemoryAllocator* memory_allocator() const { return memory_allocator_.get(); }

   // EXPERIMENTAL
+  // The following APIs are experimental and might change in the future.

   // Release a mapping returned by a previous Lookup(). The "useful"
   // parameter specifies whether the data was actually used or not,
   // which may be used by the cache implementation to decide whether
@@ -744,24 +746,21 @@ class Cache {
     return Release(handle, erase_if_last_ref);
   }

-  // EXPERIMENTAL
   // Determines if the handle returned by Lookup() can give a value without
   // blocking, though Wait()/WaitAll() might be required to publish it to
   // Value(). See secondary cache compatible Lookup() above for details.
   // This call is not thread safe on "pending" handles.
   virtual bool IsReady(Handle* /*handle*/) { return true; }

-  // EXPERIMENTAL
   // Convert a "pending" handle into a full thread-shareable handle by
   // * If necessary, wait until secondary cache finishes loading the value.
-  // * Construct the value for primary cache and set it in the handle.
+  // * Construct the object for primary cache and set it in the handle.
   // Even after Wait() on a pending handle, the caller must check for
   // Value() == nullptr in case of failure. This call is not thread-safe
   // on pending handles. This call has no effect on non-pending handles.
   // See secondary cache compatible Lookup() above for details.
   virtual void Wait(Handle* /*handle*/) {}

-  // EXPERIMENTAL
   // Wait for a vector of handles to become ready. As with Wait(), the user
   // should check the Value() of each handle for nullptr. This call is not
   // thread-safe on pending handles.
@@ -771,5 +770,8 @@ class Cache {
   std::shared_ptr<MemoryAllocator> memory_allocator_;
 };

+// Useful for cache entries requiring no clean-up, such as for cache
+// reservations
+inline constexpr Cache::CacheItemHelper kNoopCacheItemHelper{};
+
 }  // namespace ROCKSDB_NAMESPACE
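`kNoopCacheItemHelper` takes over the role of the per-role no-op deleters from cache_entry_roles.h. A sketch of the reservation-style usage the comment alludes to:

```cpp
// Reserve 1 MiB of capacity with a placeholder entry: nullptr object,
// nothing to destroy, no secondary cache involvement.
Status s = block_cache->Insert("reservation_key", /*obj=*/nullptr,
                               &kNoopCacheItemHelper, /*charge=*/1 << 20);
```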
@@ -20,7 +20,7 @@ namespace ROCKSDB_NAMESPACE {
 // A handle for lookup result. The handle may not be immediately ready or
 // have a valid value. The caller must call IsReady() to determine if it's
 // ready, and call Wait() in order to block until it becomes ready.
-// The caller must call value() after it becomes ready to determine if the
+// The caller must call Value() after it becomes ready to determine if the
 // handle successfully read the item.
 class SecondaryCacheResultHandle {
  public:
@@ -32,8 +32,9 @@ class SecondaryCacheResultHandle {
   // Block until handle becomes ready
   virtual void Wait() = 0;

-  // Return the value. If nullptr, it means the lookup was unsuccessful
-  virtual void* Value() = 0;
+  // Return the cache entry object (also known as value). If nullptr, it means
+  // the lookup was unsuccessful.
+  virtual Cache::ObjectPtr Value() = 0;

   // Return the size of value
   virtual size_t Size() = 0;
@@ -74,7 +75,7 @@ class SecondaryCache : public Customizable {
   // Lookup() might return the same parsed value back. But more typically, if
   // the implementation only uses `value` for getting persistable data during
   // the call, then the default implementation of `InsertSaved()` suffices.
-  virtual Status Insert(const Slice& key, void* value,
+  virtual Status Insert(const Slice& key, Cache::ObjectPtr obj,
                         const Cache::CacheItemHelper* helper) = 0;

   // Insert a value from its saved/persistable data (typically uncompressed
@@ -101,8 +102,9 @@ class SecondaryCache : public Customizable {
   // is_in_sec_cache is to indicate whether the handle is possibly erased
   // from the secondary cache after the Lookup.
   virtual std::unique_ptr<SecondaryCacheResultHandle> Lookup(
-      const Slice& key, const Cache::CreateCallback& create_cb, bool wait,
-      bool advise_erase, bool& is_in_sec_cache) = 0;
+      const Slice& key, const Cache::CacheItemHelper* helper,
+      Cache::CreateContext* create_context, bool wait, bool advise_erase,
+      bool& is_in_sec_cache) = 0;

   // Indicate whether a handle can be erased in this secondary cache.
   [[nodiscard]] virtual bool SupportForceErase() const = 0;
@@ -6,6 +6,8 @@

 #pragma once

+#include <algorithm>
+
 #include "rocksdb/memory_allocator.h"

 namespace ROCKSDB_NAMESPACE {
@@ -35,4 +37,11 @@ inline CacheAllocationPtr AllocateBlock(size_t size,
   return CacheAllocationPtr(new char[size]);
 }

+inline CacheAllocationPtr AllocateAndCopyBlock(const Slice& data,
+                                               MemoryAllocator* allocator) {
+  CacheAllocationPtr cap = AllocateBlock(data.size(), allocator);
+  std::copy_n(data.data(), data.size(), cap.get());
+  return cap;
+}
+
 }  // namespace ROCKSDB_NAMESPACE
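`AllocateAndCopyBlock` centralizes an allocate-then-copy pattern that previously appeared inline (AllocateBlock plus memcpy) at several call sites. Usage sketch:

```cpp
const char kBytes[] = "example block contents";
Slice data(kBytes, sizeof(kBytes));
// Copies data into an allocation tracked by the given MemoryAllocator
// (or plain new char[] when the allocator is nullptr).
CacheAllocationPtr buf =
    AllocateAndCopyBlock(data, block_cache->memory_allocator());
```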
@@ -1324,13 +1324,14 @@ class TestSecondaryCache : public SecondaryCache {
  public:
   static const char* kClassName() { return "Test"; }
   const char* Name() const override { return kClassName(); }
-  Status Insert(const Slice& /*key*/, void* /*value*/,
+  Status Insert(const Slice& /*key*/, Cache::ObjectPtr /*value*/,
                 const Cache::CacheItemHelper* /*helper*/) override {
     return Status::NotSupported();
   }
   std::unique_ptr<SecondaryCacheResultHandle> Lookup(
-      const Slice& /*key*/, const Cache::CreateCallback& /*create_cb*/,
-      bool /*wait*/, bool /*advise_erase*/, bool& is_in_sec_cache) override {
+      const Slice& /*key*/, const Cache::CacheItemHelper* /*helper*/,
+      Cache::CreateContext* /*create_context*/, bool /*wait*/,
+      bool /*advise_erase*/, bool& is_in_sec_cache) override {
     is_in_sec_cache = true;
     return nullptr;
   }
src.mk
@@ -3,6 +3,7 @@ LIB_SOURCES = \
   cache/cache.cc \
   cache/cache_entry_roles.cc \
   cache/cache_key.cc \
+  cache/cache_helpers.cc \
   cache/cache_reservation_manager.cc \
   cache/charged_cache.cc \
   cache/clock_cache.cc \
@@ -171,6 +172,7 @@ LIB_SOURCES = \
   table/block_based/block_based_table_iterator.cc \
   table/block_based/block_based_table_reader.cc \
   table/block_based/block_builder.cc \
+  table/block_based/block_cache.cc \
   table/block_based/block_prefetcher.cc \
   table/block_based/block_prefix_index.cc \
   table/block_based/data_block_hash_index.cc \
@@ -236,6 +236,9 @@ class Block {
   // Report an approximation of how much memory has been used.
   size_t ApproximateMemoryUsage() const;

+  // For TypedCacheInterface
+  const Slice& ContentSlice() const { return contents_.data; }
+
  private:
   BlockContents contents_;
   const char* data_;  // contents_.data.data()
@@ -21,6 +21,7 @@
 #include <unordered_map>
 #include <utility>

+#include "block_cache.h"
 #include "cache/cache_entry_roles.h"
 #include "cache/cache_helpers.h"
 #include "cache/cache_key.h"
@@ -41,7 +42,6 @@
 #include "table/block_based/block_based_table_factory.h"
 #include "table/block_based/block_based_table_reader.h"
 #include "table/block_based/block_builder.h"
-#include "table/block_based/block_like_traits.h"
 #include "table/block_based/filter_block.h"
 #include "table/block_based/filter_policy_internal.h"
 #include "table/block_based/full_filter_block.h"
@@ -335,6 +335,7 @@ struct BlockBasedTableBuilder::Rep {
   std::vector<std::unique_ptr<IntTblPropCollector>> table_properties_collectors;

   std::unique_ptr<ParallelCompressionRep> pc_rep;
+  BlockCreateContext create_context;

   uint64_t get_offset() { return offset.load(std::memory_order_relaxed); }
   void set_offset(uint64_t o) { offset.store(o, std::memory_order_relaxed); }
@@ -443,6 +444,9 @@ struct BlockBasedTableBuilder::Rep {
         flush_block_policy(
             table_options.flush_block_policy_factory->NewFlushBlockPolicy(
                 table_options, data_block)),
+        create_context(&table_options, ioptions.stats,
+                       compression_type == kZSTD ||
+                           compression_type == kZSTDNotFinalCompression),
         status_ok(true),
         io_status_ok(true) {
     if (tbo.target_file_size == 0) {
@@ -1240,6 +1244,10 @@ void BlockBasedTableBuilder::WriteMaybeCompressedBlock(
   handle->set_size(block_contents.size());
   assert(status().ok());
   assert(io_status().ok());
+  if (uncompressed_block_data == nullptr) {
+    uncompressed_block_data = &block_contents;
+    assert(type == kNoCompression);
+  }

   {
     IOStatus io_s = r->file->Append(block_contents);
@@ -1291,12 +1299,8 @@ void BlockBasedTableBuilder::WriteMaybeCompressedBlock(
     warm_cache = false;
   }
   if (warm_cache) {
-    if (type == kNoCompression) {
-      s = InsertBlockInCacheHelper(block_contents, handle, block_type);
-    } else if (uncompressed_block_data != nullptr) {
-      s = InsertBlockInCacheHelper(*uncompressed_block_data, handle,
-                                   block_type);
-    }
+    s = InsertBlockInCacheHelper(*uncompressed_block_data, handle,
+                                 block_type);
     if (!s.ok()) {
       r->SetStatus(s);
       return;
@@ -1425,13 +1429,14 @@ Status BlockBasedTableBuilder::InsertBlockInCompressedCache(
     const Slice& block_contents, const CompressionType type,
     const BlockHandle* handle) {
   Rep* r = rep_;
-  Cache* block_cache_compressed = r->table_options.block_cache_compressed.get();
+  CompressedBlockCacheInterface block_cache_compressed{
+      r->table_options.block_cache_compressed.get()};
   Status s;
-  if (type != kNoCompression && block_cache_compressed != nullptr) {
+  if (type != kNoCompression && block_cache_compressed) {
     size_t size = block_contents.size();

-    auto ubuf =
-        AllocateBlock(size + 1, block_cache_compressed->memory_allocator());
+    auto ubuf = AllocateBlock(size + 1,
+                              block_cache_compressed.get()->memory_allocator());
     memcpy(ubuf.get(), block_contents.data(), size);
     ubuf[size] = type;
@@ -1443,10 +1448,9 @@ Status BlockBasedTableBuilder::InsertBlockInCompressedCache(

     CacheKey key = BlockBasedTable::GetCacheKey(rep_->base_cache_key, *handle);

-    s = block_cache_compressed->Insert(
+    s = block_cache_compressed.Insert(
         key.AsSlice(), block_contents_to_cache,
-        block_contents_to_cache->ApproximateMemoryUsage(),
-        &DeleteCacheEntry<BlockContents>);
+        block_contents_to_cache->ApproximateMemoryUsage());
     if (s.ok()) {
       RecordTick(rep_->ioptions.stats, BLOCK_CACHE_COMPRESSED_ADD);
     } else {
@@ -1462,65 +1466,19 @@ Status BlockBasedTableBuilder::InsertBlockInCompressedCache(
 Status BlockBasedTableBuilder::InsertBlockInCacheHelper(
     const Slice& block_contents, const BlockHandle* handle,
     BlockType block_type) {
-  Status s;
-  switch (block_type) {
-    case BlockType::kData:
-    case BlockType::kIndex:
-    case BlockType::kFilterPartitionIndex:
-      s = InsertBlockInCache<Block>(block_contents, handle, block_type);
-      break;
-    case BlockType::kFilter:
-      s = InsertBlockInCache<ParsedFullFilterBlock>(block_contents, handle,
-                                                    block_type);
-      break;
-    case BlockType::kCompressionDictionary:
-      s = InsertBlockInCache<UncompressionDict>(block_contents, handle,
-                                                block_type);
-      break;
-    default:
-      // no-op / not cached
-      break;
-  }
-  return s;
-}
-
-template <typename TBlocklike>
-Status BlockBasedTableBuilder::InsertBlockInCache(const Slice& block_contents,
-                                                  const BlockHandle* handle,
-                                                  BlockType block_type) {
-  // Uncompressed regular block cache
   Cache* block_cache = rep_->table_options.block_cache.get();
   Status s;
-  if (block_cache != nullptr) {
-    size_t size = block_contents.size();
-    auto buf = AllocateBlock(size, block_cache->memory_allocator());
-    memcpy(buf.get(), block_contents.data(), size);
-    BlockContents results(std::move(buf), size);
-
+  auto helper =
+      GetCacheItemHelper(block_type, rep_->ioptions.lowest_used_cache_tier);
+  if (block_cache && helper && helper->create_cb) {
     CacheKey key = BlockBasedTable::GetCacheKey(rep_->base_cache_key, *handle);
-
-    const size_t read_amp_bytes_per_bit =
-        rep_->table_options.read_amp_bytes_per_bit;
-
-    // TODO akanksha:: Dedup below code by calling
-    // BlockBasedTable::PutDataBlockToCache.
-    std::unique_ptr<TBlocklike> block_holder(
-        BlocklikeTraits<TBlocklike>::Create(
-            std::move(results), read_amp_bytes_per_bit,
-            rep_->ioptions.statistics.get(),
-            false /*rep_->blocks_definitely_zstd_compressed*/,
-            rep_->table_options.filter_policy.get()));
-
-    assert(block_holder->own_bytes());
-    size_t charge = block_holder->ApproximateMemoryUsage();
-    s = block_cache->Insert(
-        key.AsSlice(), block_holder.get(),
-        BlocklikeTraits<TBlocklike>::GetCacheItemHelper(block_type), charge,
-        nullptr, Cache::Priority::LOW);
-
+    size_t charge;
+    s = WarmInCache(block_cache, key.AsSlice(), block_contents,
+                    &rep_->create_context, helper, Cache::Priority::LOW,
+                    &charge);
     if (s.ok()) {
-      // Release ownership of block_holder.
-      block_holder.release();
       BlockBasedTable::UpdateCacheInsertionMetrics(
           block_type, nullptr /*get_context*/, charge, s.IsOkOverwritten(),
           rep_->ioptions.stats);
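The per-block-type switch disappears because `WarmInCache` dispatches through the block type's `CacheItemHelper`. As a rough sketch of the flow it enables (an illustration of the helper contract, not the actual `WarmInCache` implementation):

```cpp
Status WarmInCacheSketch(Cache* cache, const Slice& key, const Slice& saved,
                         Cache::CreateContext* create_context,
                         const Cache::CacheItemHelper* helper,
                         Cache::Priority priority, size_t* out_charge) {
  // 1) Parse the serialized block into an in-memory object.
  Cache::ObjectPtr obj = nullptr;
  size_t charge = 0;
  Status s = helper->create_cb(saved, create_context,
                               cache->memory_allocator(), &obj, &charge);
  if (!s.ok()) {
    return s;  // create_cb cleans up after itself on failure
  }
  // 2) Insert the object; ownership passes to the cache only on OK.
  s = cache->Insert(key, obj, helper, charge, /*handle=*/nullptr, priority);
  if (!s.ok() && helper->del_cb != nullptr) {
    helper->del_cb(obj, cache->memory_allocator());
  }
  if (s.ok() && out_charge != nullptr) {
    *out_charge = charge;
  }
  return s;
}
```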
@@ -122,9 +122,9 @@ class BlockBasedTableBuilder : public TableBuilder {
   void WriteBlock(const Slice& block_contents, BlockHandle* handle,
                   BlockType block_type);
   // Directly write data to the file.
-  void WriteMaybeCompressedBlock(const Slice& data, CompressionType,
-                                 BlockHandle* handle, BlockType block_type,
-                                 const Slice* raw_data = nullptr);
+  void WriteMaybeCompressedBlock(
+      const Slice& block_contents, CompressionType, BlockHandle* handle,
+      BlockType block_type, const Slice* uncompressed_block_data = nullptr);

   void SetupCacheKeyPrefix(const TableBuilderOptions& tbo);
@@ -525,20 +525,24 @@ Status CheckCacheOptionCompatibility(const BlockBasedTableOptions& bbto) {

   // More complex test of shared key space, in case the instances are wrappers
   // for some shared underlying cache.
+  static Cache::CacheItemHelper kHelper{CacheEntryRole::kMisc};
   CacheKey sentinel_key = CacheKey::CreateUniqueForProcessLifetime();
-  static char kRegularBlockCacheMarker = 'b';
-  static char kCompressedBlockCacheMarker = 'c';
-  static char kPersistentCacheMarker = 'p';
+  struct SentinelValue {
+    explicit SentinelValue(char _c) : c(_c) {}
+    char c;
+  };
+  static SentinelValue kRegularBlockCacheMarker{'b'};
+  static SentinelValue kCompressedBlockCacheMarker{'c'};
+  static char kPersistentCacheMarker{'p'};
   if (bbto.block_cache) {
     bbto.block_cache
-        ->Insert(sentinel_key.AsSlice(), &kRegularBlockCacheMarker, 1,
-                 GetNoopDeleterForRole<CacheEntryRole::kMisc>())
+        ->Insert(sentinel_key.AsSlice(), &kRegularBlockCacheMarker, &kHelper, 1)
         .PermitUncheckedError();
   }
   if (bbto.block_cache_compressed) {
     bbto.block_cache_compressed
-        ->Insert(sentinel_key.AsSlice(), &kCompressedBlockCacheMarker, 1,
-                 GetNoopDeleterForRole<CacheEntryRole::kMisc>())
+        ->Insert(sentinel_key.AsSlice(), &kCompressedBlockCacheMarker, &kHelper,
+                 1)
         .PermitUncheckedError();
   }
   if (bbto.persistent_cache) {
@@ -552,8 +556,8 @@ Status CheckCacheOptionCompatibility(const BlockBasedTableOptions& bbto) {
   if (bbto.block_cache) {
     auto handle = bbto.block_cache->Lookup(sentinel_key.AsSlice());
     if (handle) {
-      auto v = static_cast<char*>(bbto.block_cache->Value(handle));
-      char c = *v;
+      auto v = static_cast<SentinelValue*>(bbto.block_cache->Value(handle));
+      char c = v->c;
       bbto.block_cache->Release(handle);
       if (v == &kCompressedBlockCacheMarker) {
         return Status::InvalidArgument(
@@ -571,8 +575,9 @@ Status CheckCacheOptionCompatibility(const BlockBasedTableOptions& bbto) {
   if (bbto.block_cache_compressed) {
     auto handle = bbto.block_cache_compressed->Lookup(sentinel_key.AsSlice());
     if (handle) {
-      auto v = static_cast<char*>(bbto.block_cache_compressed->Value(handle));
-      char c = *v;
+      auto v = static_cast<SentinelValue*>(
+          bbto.block_cache_compressed->Value(handle));
+      char c = v->c;
       bbto.block_cache_compressed->Release(handle);
       if (v == &kRegularBlockCacheMarker) {
         return Status::InvalidArgument(
@@ -595,11 +600,11 @@ Status CheckCacheOptionCompatibility(const BlockBasedTableOptions& bbto) {
     bbto.persistent_cache->Lookup(sentinel_key.AsSlice(), &data, &size)
         .PermitUncheckedError();
     if (data && size > 0) {
-      if (data[0] == kRegularBlockCacheMarker) {
+      if (data[0] == kRegularBlockCacheMarker.c) {
         return Status::InvalidArgument(
             "persistent_cache and block_cache share the same key space, "
             "which is not supported");
-      } else if (data[0] == kCompressedBlockCacheMarker) {
+      } else if (data[0] == kCompressedBlockCacheMarker.c) {
         return Status::InvalidArgument(
             "persistent_cache and block_cache_compressed share the same key "
             "space, "
@@ -19,6 +19,7 @@
 #include <utility>
 #include <vector>

+#include "block_cache.h"
 #include "cache/cache_entry_roles.h"
 #include "cache/cache_key.h"
 #include "db/compaction/compaction_picker.h"
@@ -29,6 +30,7 @@
 #include "file/random_access_file_reader.h"
 #include "logging/logging.h"
 #include "monitoring/perf_context_imp.h"
+#include "parsed_full_filter_block.h"
 #include "port/lang.h"
 #include "rocksdb/cache.h"
 #include "rocksdb/comparator.h"
@@ -48,7 +50,6 @@
 #include "table/block_based/block.h"
 #include "table/block_based/block_based_table_factory.h"
 #include "table/block_based/block_based_table_iterator.h"
-#include "table/block_based/block_like_traits.h"
 #include "table/block_based/block_prefix_index.h"
 #include "table/block_based/block_type.h"
 #include "table/block_based/filter_block.h"
@@ -83,6 +84,26 @@ CacheAllocationPtr CopyBufferToHeap(MemoryAllocator* allocator, Slice& buf) {
   return heap_buf;
 }
 }  // namespace

+// Explicitly instantiate templates for each "blocklike" type we use (and
+// before implicit specialization).
+// This makes it possible to keep the template definitions in the .cc file.
+#define INSTANTIATE_RETRIEVE_BLOCK(T)                                         \
+  template Status BlockBasedTable::RetrieveBlock<T>(                          \
+      FilePrefetchBuffer * prefetch_buffer, const ReadOptions& ro,            \
+      const BlockHandle& handle, const UncompressionDict& uncompression_dict, \
+      CachableEntry<T>* out_parsed_block, GetContext* get_context,            \
+      BlockCacheLookupContext* lookup_context, bool for_compaction,           \
+      bool use_cache, bool wait_for_cache, bool async_read) const;
+
+INSTANTIATE_RETRIEVE_BLOCK(ParsedFullFilterBlock);
+INSTANTIATE_RETRIEVE_BLOCK(UncompressionDict);
+INSTANTIATE_RETRIEVE_BLOCK(Block_kData);
+INSTANTIATE_RETRIEVE_BLOCK(Block_kIndex);
+INSTANTIATE_RETRIEVE_BLOCK(Block_kFilterPartitionIndex);
+INSTANTIATE_RETRIEVE_BLOCK(Block_kRangeDeletion);
+INSTANTIATE_RETRIEVE_BLOCK(Block_kMetaIndex);
+
 }  // namespace ROCKSDB_NAMESPACE

 // Generate the regular and coroutine versions of some methods by
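These explicit instantiations are what let `RetrieveBlock`'s template definition live in the .cc file while headers carry only the declaration. The general technique in miniature (generic names, not from this patch):

```cpp
// widget.h: declaration only.
#include <string>
#include <vector>
template <typename T>
int CountDistinct(const std::vector<T>& v);

// widget.cc: definition plus explicit instantiations for every T used.
#include <set>
template <typename T>
int CountDistinct(const std::vector<T>& v) {
  std::set<T> distinct(v.begin(), v.end());
  return static_cast<int>(distinct.size());
}
template int CountDistinct<int>(const std::vector<int>&);
template int CountDistinct<std::string>(const std::vector<std::string>&);
```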
@@ -114,22 +135,22 @@ namespace {
 // @param uncompression_dict Data for presetting the compression library's
 // dictionary.
 template <typename TBlocklike>
-Status ReadBlockFromFile(
+Status ReadAndParseBlockFromFile(
     RandomAccessFileReader* file, FilePrefetchBuffer* prefetch_buffer,
     const Footer& footer, const ReadOptions& options, const BlockHandle& handle,
     std::unique_ptr<TBlocklike>* result, const ImmutableOptions& ioptions,
-    bool do_uncompress, bool maybe_compressed, BlockType block_type,
+    BlockCreateContext& create_context, bool maybe_compressed,
     const UncompressionDict& uncompression_dict,
-    const PersistentCacheOptions& cache_options, size_t read_amp_bytes_per_bit,
-    MemoryAllocator* memory_allocator, bool for_compaction, bool using_zstd,
-    const FilterPolicy* filter_policy, bool async_read) {
+    const PersistentCacheOptions& cache_options,
+    MemoryAllocator* memory_allocator, bool for_compaction, bool async_read) {
   assert(result);

   BlockContents contents;
   BlockFetcher block_fetcher(
       file, prefetch_buffer, footer, options, handle, &contents, ioptions,
-      do_uncompress, maybe_compressed, block_type, uncompression_dict,
-      cache_options, memory_allocator, nullptr, for_compaction);
+      /*do_uncompress*/ maybe_compressed, maybe_compressed,
+      TBlocklike::kBlockType, uncompression_dict, cache_options,
+      memory_allocator, nullptr, for_compaction);
   Status s;
   // If prefetch_buffer is not allocated, it will fallback to synchronous
   // reading of block contents.
@@ -142,11 +163,8 @@ Status ReadBlockFromFile(
     s = block_fetcher.ReadBlockContents();
   }
   if (s.ok()) {
-    result->reset(BlocklikeTraits<TBlocklike>::Create(
-        std::move(contents), read_amp_bytes_per_bit, ioptions.stats, using_zstd,
-        filter_policy));
+    create_context.Create(result, std::move(contents));
   }

   return s;
 }
@@ -171,6 +189,16 @@ inline bool PrefixExtractorChangedHelper(
   }
 }

+template <typename TBlocklike>
+uint32_t GetBlockNumRestarts(const TBlocklike& block) {
+  if constexpr (std::is_convertible_v<const TBlocklike&, const Block&>) {
+    const Block& b = block;
+    return b.NumRestarts();
+  } else {
+    return 0;
+  }
+}
+
 }  // namespace

 void BlockBasedTable::UpdateCacheHitMetrics(BlockType block_type,
@ -377,56 +405,6 @@ void BlockBasedTable::UpdateCacheInsertionMetrics(
|
||||||
}
|
}
|
||||||
}
|
}
|
||||||
|
|
||||||
Cache::Handle* BlockBasedTable::GetEntryFromCache(
|
|
||||||
const CacheTier& cache_tier, Cache* block_cache, const Slice& key,
|
|
||||||
BlockType block_type, const bool wait, GetContext* get_context,
|
|
||||||
const Cache::CacheItemHelper* cache_helper,
|
|
||||||
const Cache::CreateCallback& create_cb, Cache::Priority priority) const {
|
|
||||||
Cache::Handle* cache_handle = nullptr;
|
|
||||||
if (cache_tier == CacheTier::kNonVolatileBlockTier) {
|
|
||||||
cache_handle = block_cache->Lookup(key, cache_helper, create_cb, priority,
|
|
||||||
wait, rep_->ioptions.statistics.get());
|
|
||||||
} else {
|
|
||||||
cache_handle = block_cache->Lookup(key, rep_->ioptions.statistics.get());
|
|
||||||
}
|
|
||||||
|
|
||||||
// Avoid updating metrics here if the handle is not complete yet. This
|
|
||||||
// happens with MultiGet and secondary cache. So update the metrics only
|
|
||||||
// if its a miss, or a hit and value is ready
|
|
||||||
if (!cache_handle || block_cache->Value(cache_handle)) {
|
|
||||||
if (cache_handle != nullptr) {
|
|
||||||
UpdateCacheHitMetrics(block_type, get_context,
|
|
||||||
block_cache->GetUsage(cache_handle));
|
|
||||||
} else {
|
|
||||||
UpdateCacheMissMetrics(block_type, get_context);
|
|
||||||
}
|
|
||||||
}
|
|
||||||
|
|
||||||
return cache_handle;
|
|
||||||
}
|
|
||||||
|
|
||||||
template <typename TBlocklike>
|
|
||||||
Status BlockBasedTable::InsertEntryToCache(
|
|
||||||
const CacheTier& cache_tier, Cache* block_cache, const Slice& key,
|
|
||||||
const Cache::CacheItemHelper* cache_helper,
|
|
||||||
std::unique_ptr<TBlocklike>&& block_holder, size_t charge,
|
|
||||||
Cache::Handle** cache_handle, Cache::Priority priority) const {
|
|
||||||
Status s = Status::OK();
|
|
||||||
if (cache_tier == CacheTier::kNonVolatileBlockTier) {
|
|
||||||
s = block_cache->Insert(key, block_holder.get(), cache_helper, charge,
|
|
||||||
cache_handle, priority);
|
|
||||||
} else {
|
|
||||||
s = block_cache->Insert(key, block_holder.get(), charge,
|
|
||||||
cache_helper->del_cb, cache_handle, priority);
|
|
||||||
}
|
|
||||||
if (s.ok()) {
|
|
||||||
// Cache took ownership
|
|
||||||
block_holder.release();
|
|
||||||
}
|
|
||||||
s.MustCheck();
|
|
||||||
return s;
|
|
||||||
}
|
|
||||||
|
|
||||||
namespace {
|
namespace {
|
||||||
// Return True if table_properties has `user_prop_name` has a `true` value
|
// Return True if table_properties has `user_prop_name` has a `true` value
|
||||||
// or it doesn't contain this property (for backward compatible).
|
// or it doesn't contain this property (for backward compatible).
|
||||||
@@ -687,6 +665,17 @@ Status BlockBasedTable::Open(
     return s;
   }

+  // Populate BlockCreateContext
+  bool blocks_definitely_zstd_compressed =
+      rep->table_properties &&
+      (rep->table_properties->compression_name ==
+           CompressionTypeToString(kZSTD) ||
+       rep->table_properties->compression_name ==
+           CompressionTypeToString(kZSTDNotFinalCompression));
+  rep->create_context =
+      BlockCreateContext(&rep->table_options, rep->ioptions.stats,
+                         blocks_definitely_zstd_compressed);
+
   // Check expected unique id if provided
   if (expected_unique_id != kNullUniqueId64x2) {
     auto props = rep->table_properties;
@@ -903,11 +892,6 @@ Status BlockBasedTable::ReadPropertiesBlock(
       rep_->blocks_maybe_compressed =
           rep_->table_properties->compression_name !=
           CompressionTypeToString(kNoCompression);
-      rep_->blocks_definitely_zstd_compressed =
-          (rep_->table_properties->compression_name ==
-               CompressionTypeToString(kZSTD) ||
-           rep_->table_properties->compression_name ==
-               CompressionTypeToString(kZSTDNotFinalCompression));
     }
   } else {
     ROCKS_LOG_ERROR(rep_->ioptions.logger,
@@ -1247,15 +1231,14 @@ Status BlockBasedTable::ReadMetaIndexBlock(
     std::unique_ptr<InternalIterator>* iter) {
   // TODO(sanjay): Skip this if footer.metaindex_handle() size indicates
   // it is an empty block.
-  std::unique_ptr<Block> metaindex;
-  Status s = ReadBlockFromFile(
+  std::unique_ptr<Block_kMetaIndex> metaindex;
+  Status s = ReadAndParseBlockFromFile(
       rep_->file.get(), prefetch_buffer, rep_->footer, ro,
       rep_->footer.metaindex_handle(), &metaindex, rep_->ioptions,
-      true /* decompress */, true /*maybe_compressed*/, BlockType::kMetaIndex,
+      rep_->create_context, true /*maybe_compressed*/,
       UncompressionDict::GetEmptyDict(), rep_->persistent_cache_options,
-      0 /* read_amp_bytes_per_bit */, GetMemoryAllocator(rep_->table_options),
-      false /* for_compaction */, rep_->blocks_definitely_zstd_compressed,
-      nullptr /* filter_policy */, false /* async_read */);
+      GetMemoryAllocator(rep_->table_options), false /* for_compaction */,
+      false /* async_read */);

   if (!s.ok()) {
     ROCKS_LOG_ERROR(rep_->ioptions.logger,
@@ -1272,16 +1255,13 @@ Status BlockBasedTable::ReadMetaIndexBlock(
 }

 template <typename TBlocklike>
-Status BlockBasedTable::GetDataBlockFromCache(
-    const Slice& cache_key, Cache* block_cache, Cache* block_cache_compressed,
+WithBlocklikeCheck<Status, TBlocklike> BlockBasedTable::GetDataBlockFromCache(
+    const Slice& cache_key, BlockCacheInterface<TBlocklike> block_cache,
+    CompressedBlockCacheInterface block_cache_compressed,
     const ReadOptions& read_options,
     CachableEntry<TBlocklike>* out_parsed_block,
-    const UncompressionDict& uncompression_dict, BlockType block_type,
-    const bool wait, GetContext* get_context) const {
-  const size_t read_amp_bytes_per_bit =
-      block_type == BlockType::kData
-          ? rep_->table_options.read_amp_bytes_per_bit
-          : 0;
+    const UncompressionDict& uncompression_dict, const bool wait,
+    GetContext* get_context) const {
   assert(out_parsed_block);
   assert(out_parsed_block->IsEmpty());
   // Here we treat the legacy name "...index_and_filter_blocks..." to mean all
@@ -1292,33 +1272,33 @@ Status BlockBasedTable::GetDataBlockFromCache(
   // high-priority treatment if it should go into BlockCache.
   const Cache::Priority priority =
       rep_->table_options.cache_index_and_filter_blocks_with_high_priority &&
-              block_type != BlockType::kData &&
-              block_type != BlockType::kProperties
+              TBlocklike::kBlockType != BlockType::kData &&
+              TBlocklike::kBlockType != BlockType::kProperties
           ? Cache::Priority::HIGH
           : Cache::Priority::LOW;

   Status s;
-  BlockContents* compressed_block = nullptr;
-  Cache::Handle* block_cache_compressed_handle = nullptr;
   Statistics* statistics = rep_->ioptions.statistics.get();
-  bool using_zstd = rep_->blocks_definitely_zstd_compressed;
-  const FilterPolicy* filter_policy = rep_->filter_policy;
-  Cache::CreateCallback create_cb = GetCreateCallback<TBlocklike>(
-      read_amp_bytes_per_bit, statistics, using_zstd, filter_policy);

   // Lookup uncompressed cache first
-  if (block_cache != nullptr) {
+  if (block_cache) {
     assert(!cache_key.empty());
-    Cache::Handle* cache_handle = nullptr;
-    cache_handle = GetEntryFromCache(
-        rep_->ioptions.lowest_used_cache_tier, block_cache, cache_key,
-        block_type, wait, get_context,
-        BlocklikeTraits<TBlocklike>::GetCacheItemHelper(block_type), create_cb,
-        priority);
-    if (cache_handle != nullptr) {
-      out_parsed_block->SetCachedValue(
-          reinterpret_cast<TBlocklike*>(block_cache->Value(cache_handle)),
-          block_cache, cache_handle);
+    auto cache_handle = block_cache.LookupFull(
+        cache_key, &rep_->create_context, priority, wait, statistics,
+        rep_->ioptions.lowest_used_cache_tier);
+
+    // Avoid updating metrics here if the handle is not complete yet. This
+    // happens with MultiGet and secondary cache. So update the metrics only
+    // if its a miss, or a hit and value is ready
+    if (!cache_handle) {
+      UpdateCacheMissMetrics(TBlocklike::kBlockType, get_context);
+    } else {
+      TBlocklike* value = block_cache.Value(cache_handle);
+      if (value) {
+        UpdateCacheHitMetrics(TBlocklike::kBlockType, get_context,
+                              block_cache.get()->GetUsage(cache_handle));
+      }
+      out_parsed_block->SetCachedValue(value, block_cache.get(), cache_handle);
       return s;
     }
   }
@@ -1326,14 +1306,14 @@ Status BlockBasedTable::GetDataBlockFromCache(
   // If not found, search from the compressed block cache.
   assert(out_parsed_block->IsEmpty());

-  if (block_cache_compressed == nullptr) {
+  if (!block_cache_compressed) {
     return s;
   }

   assert(!cache_key.empty());
   BlockContents contents;
-  block_cache_compressed_handle =
-      block_cache_compressed->Lookup(cache_key, statistics);
+  auto block_cache_compressed_handle =
+      block_cache_compressed.Lookup(cache_key, statistics);

   // if we found in the compressed cache, then uncompress and insert into
   // uncompressed cache
@@ -1344,8 +1324,8 @@ Status BlockBasedTable::GetDataBlockFromCache(

   // found compressed block
   RecordTick(statistics, BLOCK_CACHE_COMPRESSED_HIT);
-  compressed_block = reinterpret_cast<BlockContents*>(
-      block_cache_compressed->Value(block_cache_compressed_handle));
+  BlockContents* compressed_block =
+      block_cache_compressed.Value(block_cache_compressed_handle);
   CompressionType compression_type = GetBlockCompressionType(*compressed_block);
   assert(compression_type != kNoCompression);

@@ -1360,27 +1340,21 @@ Status BlockBasedTable::GetDataBlockFromCache(
   // Insert parsed block into block cache, the priority is based on the
   // data block type.
   if (s.ok()) {
-    std::unique_ptr<TBlocklike> block_holder(
-        BlocklikeTraits<TBlocklike>::Create(
-            std::move(contents), read_amp_bytes_per_bit, statistics,
-            rep_->blocks_definitely_zstd_compressed,
-            rep_->table_options.filter_policy.get()));
+    std::unique_ptr<TBlocklike> block_holder;
+    rep_->create_context.Create(&block_holder, std::move(contents));

-    if (block_cache != nullptr && block_holder->own_bytes() &&
-        read_options.fill_cache) {
+    if (block_cache && block_holder->own_bytes() && read_options.fill_cache) {
       size_t charge = block_holder->ApproximateMemoryUsage();
-      Cache::Handle* cache_handle = nullptr;
-      auto block_holder_raw_ptr = block_holder.get();
-      s = InsertEntryToCache(
-          rep_->ioptions.lowest_used_cache_tier, block_cache, cache_key,
-          BlocklikeTraits<TBlocklike>::GetCacheItemHelper(block_type),
-          std::move(block_holder), charge, &cache_handle, priority);
+      BlockCacheTypedHandle<TBlocklike>* cache_handle = nullptr;
+      s = block_cache.InsertFull(cache_key, block_holder.get(), charge,
+                                 &cache_handle, priority,
+                                 rep_->ioptions.lowest_used_cache_tier);
       if (s.ok()) {
         assert(cache_handle != nullptr);
-        out_parsed_block->SetCachedValue(block_holder_raw_ptr, block_cache,
-                                         cache_handle);
+        out_parsed_block->SetCachedValue(block_holder.release(),
+                                         block_cache.get(), cache_handle);

-        UpdateCacheInsertionMetrics(block_type, get_context, charge,
+        UpdateCacheInsertionMetrics(TBlocklike::kBlockType, get_context, charge,
                                     s.IsOkOverwritten(), rep_->ioptions.stats);
       } else {
         RecordTick(statistics, BLOCK_CACHE_ADD_FAILURES);
@@ -1391,27 +1365,23 @@ Status BlockBasedTable::GetDataBlockFromCache(
   }

   // Release hold on compressed cache entry
-  block_cache_compressed->Release(block_cache_compressed_handle);
+  block_cache_compressed.Release(block_cache_compressed_handle);
   return s;
 }

 template <typename TBlocklike>
-Status BlockBasedTable::PutDataBlockToCache(
-    const Slice& cache_key, Cache* block_cache, Cache* block_cache_compressed,
+WithBlocklikeCheck<Status, TBlocklike> BlockBasedTable::PutDataBlockToCache(
+    const Slice& cache_key, BlockCacheInterface<TBlocklike> block_cache,
+    CompressedBlockCacheInterface block_cache_compressed,
     CachableEntry<TBlocklike>* out_parsed_block, BlockContents&& block_contents,
     CompressionType block_comp_type,
     const UncompressionDict& uncompression_dict,
-    MemoryAllocator* memory_allocator, BlockType block_type,
-    GetContext* get_context) const {
+    MemoryAllocator* memory_allocator, GetContext* get_context) const {
   const ImmutableOptions& ioptions = rep_->ioptions;
   const uint32_t format_version = rep_->table_options.format_version;
-  const size_t read_amp_bytes_per_bit =
-      block_type == BlockType::kData
-          ? rep_->table_options.read_amp_bytes_per_bit
-          : 0;
   const Cache::Priority priority =
       rep_->table_options.cache_index_and_filter_blocks_with_high_priority &&
-              block_type != BlockType::kData
+              TBlocklike::kBlockType != BlockType::kData
          ? Cache::Priority::HIGH
          : Cache::Priority::LOW;
   assert(out_parsed_block);
@@ -1433,21 +1403,15 @@ Status BlockBasedTable::PutDataBlockToCache(
     if (!s.ok()) {
       return s;
     }
-    block_holder.reset(BlocklikeTraits<TBlocklike>::Create(
-        std::move(uncompressed_block_contents), read_amp_bytes_per_bit,
-        statistics, rep_->blocks_definitely_zstd_compressed,
-        rep_->table_options.filter_policy.get()));
+    rep_->create_context.Create(&block_holder,
+                                std::move(uncompressed_block_contents));
   } else {
-    block_holder.reset(BlocklikeTraits<TBlocklike>::Create(
-        std::move(block_contents), read_amp_bytes_per_bit, statistics,
-        rep_->blocks_definitely_zstd_compressed,
-        rep_->table_options.filter_policy.get()));
+    rep_->create_context.Create(&block_holder, std::move(block_contents));
   }

   // Insert compressed block into compressed block cache.
   // Release the hold on the compressed cache entry immediately.
-  if (block_cache_compressed != nullptr && block_comp_type != kNoCompression &&
+  if (block_cache_compressed && block_comp_type != kNoCompression &&
       block_contents.own_bytes()) {
     assert(block_contents.has_trailer);
     assert(!cache_key.empty());
@@ -1458,10 +1422,9 @@ Status BlockBasedTable::PutDataBlockToCache(
         std::make_unique<BlockContents>(std::move(block_contents));
     size_t charge = block_cont_for_comp_cache->ApproximateMemoryUsage();

-    s = block_cache_compressed->Insert(
-        cache_key, block_cont_for_comp_cache.get(), charge,
-        &DeleteCacheEntry<BlockContents>, nullptr /*handle*/,
-        Cache::Priority::LOW);
+    s = block_cache_compressed.Insert(cache_key,
+                                      block_cont_for_comp_cache.get(), charge,
+                                      nullptr /*handle*/, Cache::Priority::LOW);

     if (s.ok()) {
       // Cache took ownership
@@ -1473,20 +1436,19 @@ Status BlockBasedTable::PutDataBlockToCache(
   }

   // insert into uncompressed block cache
-  if (block_cache != nullptr && block_holder->own_bytes()) {
+  if (block_cache && block_holder->own_bytes()) {
     size_t charge = block_holder->ApproximateMemoryUsage();
-    auto block_holder_raw_ptr = block_holder.get();
-    Cache::Handle* cache_handle = nullptr;
-    s = InsertEntryToCache(
-        rep_->ioptions.lowest_used_cache_tier, block_cache, cache_key,
-        BlocklikeTraits<TBlocklike>::GetCacheItemHelper(block_type),
-        std::move(block_holder), charge, &cache_handle, priority);
+    BlockCacheTypedHandle<TBlocklike>* cache_handle = nullptr;
+    s = block_cache.InsertFull(cache_key, block_holder.get(), charge,
+                               &cache_handle, priority,
+                               rep_->ioptions.lowest_used_cache_tier);
     if (s.ok()) {
       assert(cache_handle != nullptr);
-      out_parsed_block->SetCachedValue(block_holder_raw_ptr, block_cache,
-                                       cache_handle);
+      out_parsed_block->SetCachedValue(block_holder.release(),
+                                       block_cache.get(), cache_handle);

-      UpdateCacheInsertionMetrics(block_type, get_context, charge,
+      UpdateCacheInsertionMetrics(TBlocklike::kBlockType, get_context, charge,
                                   s.IsOkOverwritten(), rep_->ioptions.stats);
     } else {
       RecordTick(statistics, BLOCK_CACHE_ADD_FAILURES);
@@ -1542,6 +1504,7 @@ InternalIteratorBase<IndexValue>* BlockBasedTable::NewIndexIterator(
                          lookup_context);
 }

+// TODO?
 template <>
 DataBlockIter* BlockBasedTable::InitBlockIterator<DataBlockIter>(
     const Rep* rep, Block* block, BlockType block_type,
@@ -1551,6 +1514,7 @@ DataBlockIter* BlockBasedTable::InitBlockIterator<DataBlockIter>(
                              rep->ioptions.stats, block_contents_pinned);
 }

+// TODO?
 template <>
 IndexBlockIter* BlockBasedTable::InitBlockIterator<IndexBlockIter>(
     const Rep* rep, Block* block, BlockType block_type,
@@ -1569,18 +1533,20 @@ IndexBlockIter* BlockBasedTable::InitBlockIterator<IndexBlockIter>(
 // the caller has already read it. In both cases, if ro.fill_cache is true,
 // it inserts the block into the block cache.
 template <typename TBlocklike>
-Status BlockBasedTable::MaybeReadBlockAndLoadToCache(
+WithBlocklikeCheck<Status, TBlocklike>
+BlockBasedTable::MaybeReadBlockAndLoadToCache(
     FilePrefetchBuffer* prefetch_buffer, const ReadOptions& ro,
     const BlockHandle& handle, const UncompressionDict& uncompression_dict,
     const bool wait, const bool for_compaction,
-    CachableEntry<TBlocklike>* out_parsed_block, BlockType block_type,
-    GetContext* get_context, BlockCacheLookupContext* lookup_context,
-    BlockContents* contents, bool async_read) const {
+    CachableEntry<TBlocklike>* out_parsed_block, GetContext* get_context,
+    BlockCacheLookupContext* lookup_context, BlockContents* contents,
+    bool async_read) const {
   assert(out_parsed_block != nullptr);
   const bool no_io = (ro.read_tier == kBlockCacheTier);
-  Cache* block_cache = rep_->table_options.block_cache.get();
-  Cache* block_cache_compressed =
-      rep_->table_options.block_cache_compressed.get();
+  BlockCacheInterface<TBlocklike> block_cache{
+      rep_->table_options.block_cache.get()};
+  CompressedBlockCacheInterface block_cache_compressed{
+      rep_->table_options.block_cache_compressed.get()};

   // First, try to get the block from the cache
   //
@@ -1589,15 +1555,15 @@ Status BlockBasedTable::MaybeReadBlockAndLoadToCache(
   CacheKey key_data;
   Slice key;
   bool is_cache_hit = false;
-  if (block_cache != nullptr || block_cache_compressed != nullptr) {
+  if (block_cache || block_cache_compressed) {
     // create key for block cache
     key_data = GetCacheKey(rep_->base_cache_key, handle);
     key = key_data.AsSlice();

     if (!contents) {
       s = GetDataBlockFromCache(key, block_cache, block_cache_compressed, ro,
-                                out_parsed_block, uncompression_dict,
-                                block_type, wait, get_context);
+                                out_parsed_block, uncompression_dict, wait,
+                                get_context);
       // Value could still be null at this point, so check the cache handle
       // and update the read pattern for prefetching
       if (out_parsed_block->GetValue() || out_parsed_block->GetCacheHandle()) {
@@ -1622,8 +1588,8 @@ Status BlockBasedTable::MaybeReadBlockAndLoadToCache(
       ro.fill_cache) {
     Statistics* statistics = rep_->ioptions.stats;
     const bool maybe_compressed =
-        block_type != BlockType::kFilter &&
-        block_type != BlockType::kCompressionDictionary &&
+        TBlocklike::kBlockType != BlockType::kFilter &&
+        TBlocklike::kBlockType != BlockType::kCompressionDictionary &&
         rep_->blocks_maybe_compressed;
     const bool do_uncompress = maybe_compressed && !block_cache_compressed;
     CompressionType contents_comp_type;
@@ -1636,7 +1602,8 @@ Status BlockBasedTable::MaybeReadBlockAndLoadToCache(
       BlockFetcher block_fetcher(
           rep_->file.get(), prefetch_buffer, rep_->footer, ro, handle,
          &tmp_contents, rep_->ioptions, do_uncompress, maybe_compressed,
-          block_type, uncompression_dict, rep_->persistent_cache_options,
+          TBlocklike::kBlockType, uncompression_dict,
+          rep_->persistent_cache_options,
          GetMemoryAllocator(rep_->table_options),
          GetMemoryAllocatorForCompressedBlock(rep_->table_options));

@@ -1654,7 +1621,7 @@ Status BlockBasedTable::MaybeReadBlockAndLoadToCache(
       contents_comp_type = block_fetcher.get_compression_type();
       contents = &tmp_contents;
       if (get_context) {
-        switch (block_type) {
+        switch (TBlocklike::kBlockType) {
           case BlockType::kIndex:
             ++get_context->get_context_stats_.num_index_read;
             break;
@@ -1676,7 +1643,7 @@ Status BlockBasedTable::MaybeReadBlockAndLoadToCache(
        s = PutDataBlockToCache(
            key, block_cache, block_cache_compressed, out_parsed_block,
            std::move(*contents), contents_comp_type, uncompression_dict,
-            GetMemoryAllocator(rep_->table_options), block_type, get_context);
+            GetMemoryAllocator(rep_->table_options), get_context);
      }
    }
  }
@@ -1688,13 +1655,13 @@ Status BlockBasedTable::MaybeReadBlockAndLoadToCache(
    uint64_t nkeys = 0;
    if (out_parsed_block->GetValue()) {
      // Approximate the number of keys in the block using restarts.
+      // FIXME: Should this only apply to data blocks?
      nkeys = rep_->table_options.block_restart_interval *
-              BlocklikeTraits<TBlocklike>::GetNumRestarts(
-                  *out_parsed_block->GetValue());
+              GetBlockNumRestarts(*out_parsed_block->GetValue());
      usage = out_parsed_block->GetValue()->ApproximateMemoryUsage();
    }
    TraceType trace_block_type = TraceType::kTraceMax;
-    switch (block_type) {
+    switch (TBlocklike::kBlockType) {
      case BlockType::kData:
        trace_block_type = TraceType::kBlockTraceDataBlock;
        break;
@@ -1750,24 +1717,22 @@ Status BlockBasedTable::MaybeReadBlockAndLoadToCache(
   return s;
 }

-template <typename TBlocklike>
-Status BlockBasedTable::RetrieveBlock(
+template <typename TBlocklike /*, auto*/>
+WithBlocklikeCheck<Status, TBlocklike> BlockBasedTable::RetrieveBlock(
     FilePrefetchBuffer* prefetch_buffer, const ReadOptions& ro,
     const BlockHandle& handle, const UncompressionDict& uncompression_dict,
-    CachableEntry<TBlocklike>* out_parsed_block, BlockType block_type,
-    GetContext* get_context, BlockCacheLookupContext* lookup_context,
-    bool for_compaction, bool use_cache, bool wait_for_cache,
-    bool async_read) const {
+    CachableEntry<TBlocklike>* out_parsed_block, GetContext* get_context,
+    BlockCacheLookupContext* lookup_context, bool for_compaction,
+    bool use_cache, bool wait_for_cache, bool async_read) const {
   assert(out_parsed_block);
   assert(out_parsed_block->IsEmpty());

   Status s;
   if (use_cache) {
-    s = MaybeReadBlockAndLoadToCache(prefetch_buffer, ro, handle,
-                                     uncompression_dict, wait_for_cache,
-                                     for_compaction, out_parsed_block,
-                                     block_type, get_context, lookup_context,
-                                     /*contents=*/nullptr, async_read);
+    s = MaybeReadBlockAndLoadToCache(
+        prefetch_buffer, ro, handle, uncompression_dict, wait_for_cache,
+        for_compaction, out_parsed_block, get_context, lookup_context,
+        /*contents=*/nullptr, async_read);

     if (!s.ok()) {
       return s;
@@ -1788,29 +1753,23 @@ Status BlockBasedTable::RetrieveBlock(
   }

   const bool maybe_compressed =
-      block_type != BlockType::kFilter &&
-      block_type != BlockType::kCompressionDictionary &&
+      TBlocklike::kBlockType != BlockType::kFilter &&
+      TBlocklike::kBlockType != BlockType::kCompressionDictionary &&
       rep_->blocks_maybe_compressed;
-  const bool do_uncompress = maybe_compressed;
   std::unique_ptr<TBlocklike> block;

   {
     Histograms histogram =
         for_compaction ? READ_BLOCK_COMPACTION_MICROS : READ_BLOCK_GET_MICROS;
     StopWatch sw(rep_->ioptions.clock, rep_->ioptions.stats, histogram);
-    s = ReadBlockFromFile(
+    s = ReadAndParseBlockFromFile(
         rep_->file.get(), prefetch_buffer, rep_->footer, ro, handle, &block,
-        rep_->ioptions, do_uncompress, maybe_compressed, block_type,
+        rep_->ioptions, rep_->create_context, maybe_compressed,
         uncompression_dict, rep_->persistent_cache_options,
-        block_type == BlockType::kData
-            ? rep_->table_options.read_amp_bytes_per_bit
-            : 0,
-        GetMemoryAllocator(rep_->table_options), for_compaction,
-        rep_->blocks_definitely_zstd_compressed,
-        rep_->table_options.filter_policy.get(), async_read);
+        GetMemoryAllocator(rep_->table_options), for_compaction, async_read);

     if (get_context) {
-      switch (block_type) {
+      switch (TBlocklike::kBlockType) {
         case BlockType::kIndex:
           ++(get_context->get_context_stats_.num_index_read);
           break;
@@ -1834,32 +1793,6 @@ Status BlockBasedTable::RetrieveBlock(
   return s;
 }

-// Explicitly instantiate templates for each "blocklike" type we use.
-// This makes it possible to keep the template definitions in the .cc file.
-template Status BlockBasedTable::RetrieveBlock<ParsedFullFilterBlock>(
-    FilePrefetchBuffer* prefetch_buffer, const ReadOptions& ro,
-    const BlockHandle& handle, const UncompressionDict& uncompression_dict,
-    CachableEntry<ParsedFullFilterBlock>* out_parsed_block,
-    BlockType block_type, GetContext* get_context,
-    BlockCacheLookupContext* lookup_context, bool for_compaction,
-    bool use_cache, bool wait_for_cache, bool async_read) const;
-
-template Status BlockBasedTable::RetrieveBlock<Block>(
-    FilePrefetchBuffer* prefetch_buffer, const ReadOptions& ro,
-    const BlockHandle& handle, const UncompressionDict& uncompression_dict,
-    CachableEntry<Block>* out_parsed_block, BlockType block_type,
-    GetContext* get_context, BlockCacheLookupContext* lookup_context,
-    bool for_compaction, bool use_cache, bool wait_for_cache,
-    bool async_read) const;
-
-template Status BlockBasedTable::RetrieveBlock<UncompressionDict>(
-    FilePrefetchBuffer* prefetch_buffer, const ReadOptions& ro,
-    const BlockHandle& handle, const UncompressionDict& uncompression_dict,
-    CachableEntry<UncompressionDict>* out_parsed_block, BlockType block_type,
-    GetContext* get_context, BlockCacheLookupContext* lookup_context,
-    bool for_compaction, bool use_cache, bool wait_for_cache,
-    bool async_read) const;
-
 BlockBasedTable::PartitionedIndexIteratorState::PartitionedIndexIteratorState(
     const BlockBasedTable* table,
     UnorderedMap<uint64_t, CachableEntry<Block>>* block_map)
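A note on the pattern in the reader changes above: lookups and inserts now go through a typed wrapper (`BlockCacheInterface` with `LookupFull`/`InsertFull`), so the cast from the cache's untyped value lives in exactly one place and `reinterpret_cast` disappears from call sites. A minimal self-contained sketch of that idea (`UntypedCache`/`TypedCacheView` are illustrative names, not the actual API):

```cpp
#include <cassert>
#include <map>
#include <string>

// Stand-in for an untyped cache that stores void* values (illustrative;
// not the RocksDB Cache API).
class UntypedCache {
 public:
  struct Handle { void* value; };
  Handle* Insert(const std::string& key, void* value) {
    auto& h = map_[key];
    h.value = value;
    return &h;
  }
  Handle* Lookup(const std::string& key) {
    auto it = map_.find(key);
    return it == map_.end() ? nullptr : &it->second;
  }

 private:
  std::map<std::string, Handle> map_;
};

// Typed view over the untyped cache: the cast from void* is centralized
// here instead of being repeated at every call site in the table reader.
template <typename TValue>
class TypedCacheView {
 public:
  explicit TypedCacheView(UntypedCache* cache) : cache_(cache) {}
  explicit operator bool() const { return cache_ != nullptr; }
  UntypedCache* get() const { return cache_; }

  UntypedCache::Handle* Insert(const std::string& key, TValue* value) {
    return cache_->Insert(key, value);
  }
  TValue* Value(UntypedCache::Handle* handle) const {
    return static_cast<TValue*>(handle->value);
  }

 private:
  UntypedCache* cache_ = nullptr;
};

int main() {
  UntypedCache cache;
  TypedCacheView<int> typed(&cache);
  int block = 42;
  auto* h = typed.Insert("key", &block);
  assert(typed.Value(h) == &block);  // no reinterpret_cast at the call site
}
```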
@@ -21,6 +21,7 @@
 #include "rocksdb/table_properties.h"
 #include "table/block_based/block.h"
 #include "table/block_based/block_based_table_factory.h"
+#include "table/block_based/block_cache.h"
 #include "table/block_based/block_type.h"
 #include "table/block_based/cachable_entry.h"
 #include "table/block_based/filter_block.h"
@@ -315,22 +316,6 @@ class BlockBasedTable : public TableReader {
   void UpdateCacheMissMetrics(BlockType block_type,
                               GetContext* get_context) const;

-  Cache::Handle* GetEntryFromCache(const CacheTier& cache_tier,
-                                   Cache* block_cache, const Slice& key,
-                                   BlockType block_type, const bool wait,
-                                   GetContext* get_context,
-                                   const Cache::CacheItemHelper* cache_helper,
-                                   const Cache::CreateCallback& create_cb,
-                                   Cache::Priority priority) const;
-
-  template <typename TBlocklike>
-  Status InsertEntryToCache(const CacheTier& cache_tier, Cache* block_cache,
-                            const Slice& key,
-                            const Cache::CacheItemHelper* cache_helper,
-                            std::unique_ptr<TBlocklike>&& block_holder,
-                            size_t charge, Cache::Handle** cache_handle,
-                            Cache::Priority priority) const;
-
   // Either Block::NewDataIterator() or Block::NewIndexIterator().
   template <typename TBlockIter>
   static TBlockIter* InitBlockIterator(const Rep* rep, Block* block,
@@ -348,26 +333,24 @@ class BlockBasedTable : public TableReader {
   // in uncompressed block cache, also sets cache_handle to reference that
   // block.
   template <typename TBlocklike>
-  Status MaybeReadBlockAndLoadToCache(
+  WithBlocklikeCheck<Status, TBlocklike> MaybeReadBlockAndLoadToCache(
       FilePrefetchBuffer* prefetch_buffer, const ReadOptions& ro,
       const BlockHandle& handle, const UncompressionDict& uncompression_dict,
       const bool wait, const bool for_compaction,
-      CachableEntry<TBlocklike>* block_entry, BlockType block_type,
-      GetContext* get_context, BlockCacheLookupContext* lookup_context,
-      BlockContents* contents, bool async_read) const;
+      CachableEntry<TBlocklike>* block_entry, GetContext* get_context,
+      BlockCacheLookupContext* lookup_context, BlockContents* contents,
+      bool async_read) const;

   // Similar to the above, with one crucial difference: it will retrieve the
   // block from the file even if there are no caches configured (assuming the
   // read options allow I/O).
   template <typename TBlocklike>
-  Status RetrieveBlock(FilePrefetchBuffer* prefetch_buffer,
-                       const ReadOptions& ro, const BlockHandle& handle,
-                       const UncompressionDict& uncompression_dict,
-                       CachableEntry<TBlocklike>* block_entry,
-                       BlockType block_type, GetContext* get_context,
-                       BlockCacheLookupContext* lookup_context,
-                       bool for_compaction, bool use_cache, bool wait_for_cache,
-                       bool async_read) const;
+  WithBlocklikeCheck<Status, TBlocklike> RetrieveBlock(
+      FilePrefetchBuffer* prefetch_buffer, const ReadOptions& ro,
+      const BlockHandle& handle, const UncompressionDict& uncompression_dict,
+      CachableEntry<TBlocklike>* block_entry, GetContext* get_context,
+      BlockCacheLookupContext* lookup_context, bool for_compaction,
+      bool use_cache, bool wait_for_cache, bool async_read) const;

   DECLARE_SYNC_AND_ASYNC_CONST(
       void, RetrieveMultipleBlocks, const ReadOptions& options,
@@ -403,13 +386,12 @@ class BlockBasedTable : public TableReader {
   // @param uncompression_dict Data for presetting the compression library's
   //    dictionary.
   template <typename TBlocklike>
-  Status GetDataBlockFromCache(const Slice& cache_key, Cache* block_cache,
-                               Cache* block_cache_compressed,
-                               const ReadOptions& read_options,
-                               CachableEntry<TBlocklike>* block,
-                               const UncompressionDict& uncompression_dict,
-                               BlockType block_type, const bool wait,
-                               GetContext* get_context) const;
+  WithBlocklikeCheck<Status, TBlocklike> GetDataBlockFromCache(
+      const Slice& cache_key, BlockCacheInterface<TBlocklike> block_cache,
+      CompressedBlockCacheInterface block_cache_compressed,
+      const ReadOptions& read_options, CachableEntry<TBlocklike>* block,
+      const UncompressionDict& uncompression_dict, const bool wait,
+      GetContext* get_context) const;

   // Put a maybe compressed block to the corresponding block caches.
   // This method will perform decompression against block_contents if needed
@@ -422,15 +404,13 @@ class BlockBasedTable : public TableReader {
   // @param uncompression_dict Data for presetting the compression library's
   //    dictionary.
   template <typename TBlocklike>
-  Status PutDataBlockToCache(const Slice& cache_key, Cache* block_cache,
-                             Cache* block_cache_compressed,
-                             CachableEntry<TBlocklike>* cached_block,
-                             BlockContents&& block_contents,
-                             CompressionType block_comp_type,
-                             const UncompressionDict& uncompression_dict,
-                             MemoryAllocator* memory_allocator,
-                             BlockType block_type,
-                             GetContext* get_context) const;
+  WithBlocklikeCheck<Status, TBlocklike> PutDataBlockToCache(
+      const Slice& cache_key, BlockCacheInterface<TBlocklike> block_cache,
+      CompressedBlockCacheInterface block_cache_compressed,
+      CachableEntry<TBlocklike>* cached_block, BlockContents&& block_contents,
+      CompressionType block_comp_type,
+      const UncompressionDict& uncompression_dict,
+      MemoryAllocator* memory_allocator, GetContext* get_context) const;

   // Calls (*handle_result)(arg, ...) repeatedly, starting with the entry found
   // after a call to Seek(key), until handle_result returns false.
@@ -599,6 +579,13 @@ struct BlockBasedTable::Rep {

   std::shared_ptr<FragmentedRangeTombstoneList> fragmented_range_dels;

+  // FIXME
+  // If true, data blocks in this file are definitely ZSTD compressed. If false
+  // they might not be. When false we skip creating a ZSTD digested
+  // uncompression dictionary. Even if we get a false negative, things should
+  // still work, just not as quickly.
+  BlockCreateContext create_context;
+
   // If global_seqno is used, all Keys in this file will have the same
   // seqno with value `global_seqno`.
   //
@@ -617,12 +604,6 @@ struct BlockBasedTable::Rep {
   // before reading individual blocks enables certain optimizations.
   bool blocks_maybe_compressed = true;

-  // If true, data blocks in this file are definitely ZSTD compressed. If false
-  // they might not be. When false we skip creating a ZSTD digested
-  // uncompression dictionary. Even if we get a false negative, things should
-  // still work, just not as quickly.
-  bool blocks_definitely_zstd_compressed = false;
-
   // These describe how index is encoded.
   bool index_has_first_key = false;
   bool index_key_includes_seq = true;
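About `WithBlocklikeCheck<Status, TBlocklike>` used as the return type above: it constrains these template methods to types that carry a `kBlockType` tag, so instantiating them with a plain, un-wrapped type fails substitution early with a clear error instead of deep inside the implementation. A plausible formulation of such an alias, shown only as an assumption (not the verbatim definition):

```cpp
#include <type_traits>

// TUse is the actual return type; the alias is only well-formed when
// TBlocklike declares a kBlockType constant. The self-comparison is a
// trivially-true expression whose sole purpose is to force the member
// lookup during template substitution.
template <typename TUse, typename TBlocklike>
using WithBlocklikeCheck =
    std::enable_if_t<TBlocklike::kBlockType == TBlocklike::kBlockType, TUse>;
```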
@@ -7,6 +7,10 @@
 // Use of this source code is governed by a BSD-style license that can be
 // found in the LICENSE file. See the AUTHORS file for names of contributors.
 #pragma once
+#include <type_traits>
+
+#include "block.h"
+#include "block_cache.h"
 #include "table/block_based/block_based_table_reader.h"
 #include "table/block_based/reader_common.h"

@@ -16,6 +20,25 @@
 // are templates.

 namespace ROCKSDB_NAMESPACE {
+namespace {
+using IterPlaceholderCacheInterface =
+    PlaceholderCacheInterface<CacheEntryRole::kMisc>;
+
+template <typename TBlockIter>
+struct IterTraits {};
+
+template <>
+struct IterTraits<DataBlockIter> {
+  using IterBlocklike = Block_kData;
+};
+
+template <>
+struct IterTraits<IndexBlockIter> {
+  using IterBlocklike = Block_kIndex;
+};
+
+}  // namespace
+
 // Convert an index iterator value (i.e., an encoded BlockHandle)
 // into an iterator over the contents of the corresponding block.
 // If input_iter is null, new a iterator
@@ -27,6 +50,7 @@ TBlockIter* BlockBasedTable::NewDataBlockIterator(
     BlockCacheLookupContext* lookup_context,
     FilePrefetchBuffer* prefetch_buffer, bool for_compaction, bool async_read,
     Status& s) const {
+  using IterBlocklike = typename IterTraits<TBlockIter>::IterBlocklike;
   PERF_TIMER_GUARD(new_table_block_iter_nanos);

   TBlockIter* iter = input_iter != nullptr ? input_iter : new TBlockIter;
@@ -53,14 +77,14 @@ TBlockIter* BlockBasedTable::NewDataBlockIterator(
     const UncompressionDict& dict = uncompression_dict.GetValue()
                                         ? *uncompression_dict.GetValue()
                                         : UncompressionDict::GetEmptyDict();
-    s = RetrieveBlock(prefetch_buffer, ro, handle, dict, &block, block_type,
-                      get_context, lookup_context, for_compaction,
-                      /* use_cache */ true, /* wait_for_cache */ true,
-                      async_read);
+    s = RetrieveBlock(
+        prefetch_buffer, ro, handle, dict, &block.As<IterBlocklike>(),
+        get_context, lookup_context, for_compaction,
+        /* use_cache */ true, /* wait_for_cache */ true, async_read);
   } else {
     s = RetrieveBlock(
-        prefetch_buffer, ro, handle, UncompressionDict::GetEmptyDict(), &block,
-        block_type, get_context, lookup_context, for_compaction,
+        prefetch_buffer, ro, handle, UncompressionDict::GetEmptyDict(),
+        &block.As<IterBlocklike>(), get_context, lookup_context, for_compaction,
         /* use_cache */ true, /* wait_for_cache */ true, async_read);
   }

@@ -91,18 +115,20 @@ TBlockIter* BlockBasedTable::NewDataBlockIterator(

   if (!block.IsCached()) {
     if (!ro.fill_cache) {
-      Cache* const block_cache = rep_->table_options.block_cache.get();
+      IterPlaceholderCacheInterface block_cache{
+          rep_->table_options.block_cache.get()};
       if (block_cache) {
         // insert a dummy record to block cache to track the memory usage
         Cache::Handle* cache_handle = nullptr;
-        CacheKey key = CacheKey::CreateUniqueForCacheLifetime(block_cache);
-        s = block_cache->Insert(key.AsSlice(), nullptr,
-                                block.GetValue()->ApproximateMemoryUsage(),
-                                nullptr, &cache_handle);
+        CacheKey key =
+            CacheKey::CreateUniqueForCacheLifetime(block_cache.get());
+        s = block_cache.Insert(key.AsSlice(),
+                               block.GetValue()->ApproximateMemoryUsage(),
+                               &cache_handle);

         if (s.ok()) {
           assert(cache_handle != nullptr);
-          iter->RegisterCleanup(&ForceReleaseCachedEntry, block_cache,
+          iter->RegisterCleanup(&ForceReleaseCachedEntry, block_cache.get(),
                                 cache_handle);
         }
       }
@@ -149,18 +175,20 @@ TBlockIter* BlockBasedTable::NewDataBlockIterator(const ReadOptions& ro,

   if (!block.IsCached()) {
     if (!ro.fill_cache) {
-      Cache* const block_cache = rep_->table_options.block_cache.get();
+      IterPlaceholderCacheInterface block_cache{
+          rep_->table_options.block_cache.get()};
       if (block_cache) {
         // insert a dummy record to block cache to track the memory usage
         Cache::Handle* cache_handle = nullptr;
-        CacheKey key = CacheKey::CreateUniqueForCacheLifetime(block_cache);
-        s = block_cache->Insert(key.AsSlice(), nullptr,
-                                block.GetValue()->ApproximateMemoryUsage(),
-                                nullptr, &cache_handle);
+        CacheKey key =
+            CacheKey::CreateUniqueForCacheLifetime(block_cache.get());
+        s = block_cache.Insert(key.AsSlice(),
+                               block.GetValue()->ApproximateMemoryUsage(),
+                               &cache_handle);

         if (s.ok()) {
           assert(cache_handle != nullptr);
-          iter->RegisterCleanup(&ForceReleaseCachedEntry, block_cache,
+          iter->RegisterCleanup(&ForceReleaseCachedEntry, block_cache.get(),
                                 cache_handle);
         }
       }
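The `IterPlaceholderCacheInterface` path above inserts a value-less placeholder entry whose only job is to charge an uncached-but-pinned block's memory against block cache capacity until the iterator is destroyed. A self-contained sketch of that accounting idea (`ChargeOnlyCache` is an illustrative stand-in, not the RocksDB API):

```cpp
#include <cstddef>
#include <iostream>

// Minimal stand-in cache that only tracks a usage charge (illustrative).
class ChargeOnlyCache {
 public:
  struct Handle { size_t charge; };
  Handle* Insert(size_t charge) {
    usage_ += charge;
    return new Handle{charge};
  }
  void Release(Handle* h) {
    usage_ -= h->charge;
    delete h;
  }
  size_t usage() const { return usage_; }

 private:
  size_t usage_ = 0;
};

int main() {
  ChargeOnlyCache cache;
  // Block read with fill_cache=false: the block itself is not cached, but
  // its memory is still accounted for while an iterator pins it.
  auto* placeholder = cache.Insert(/*ApproximateMemoryUsage()=*/4096);
  std::cout << cache.usage() << "\n";  // 4096
  cache.Release(placeholder);          // done in the iterator's cleanup
  std::cout << cache.usage() << "\n";  // 0
}
```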
@@ -54,7 +54,7 @@ DEFINE_SYNC_AND_ASYNC(void, BlockBasedTable::RetrieveMultipleBlocks)

       (*statuses)[idx_in_batch] =
           RetrieveBlock(nullptr, options, handle, uncompression_dict,
-                        &(*results)[idx_in_batch], BlockType::kData,
+                        &(*results)[idx_in_batch].As<Block_kData>(),
                         mget_iter->get_context, &lookup_data_block_context,
                         /* for_compaction */ false, /* use_cache */ true,
                         /* wait_for_cache */ true, /* async_read */ false);
@@ -269,7 +269,7 @@ DEFINE_SYNC_AND_ASYNC(void, BlockBasedTable::RetrieveMultipleBlocks)
         // will avoid looking up the block cache
         s = MaybeReadBlockAndLoadToCache(
             nullptr, options, handle, uncompression_dict, /*wait=*/true,
-            /*for_compaction=*/false, block_entry, BlockType::kData,
+            /*for_compaction=*/false, &block_entry->As<Block_kData>(),
             mget_iter->get_context, &lookup_data_block_context,
             &serialized_block, /*async_read=*/false);

@@ -441,7 +441,7 @@ DEFINE_SYNC_AND_ASYNC(void, BlockBasedTable::MultiGet)
                                    ? *uncompression_dict.GetValue()
                                    : UncompressionDict::GetEmptyDict();
         Status s = RetrieveBlock(
-            nullptr, ro, handle, dict, &(results.back()), BlockType::kData,
+            nullptr, ro, handle, dict, &(results.back()).As<Block_kData>(),
             miter->get_context, &lookup_data_block_context,
             /* for_compaction */ false, /* use_cache */ true,
             /* wait_for_cache */ false, /* async_read */ false);
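The `.As<Block_kData>()` calls above view a `CachableEntry<Block>` as a `CachableEntry<Block_kData>` so the typed template overloads are selected; this is sound because the `Block_k...` wrappers add only `constexpr` tags, not data members. A simplified self-contained model of that cast (illustrative; the real `As<T>()` on `CachableEntry` carries stricter static checks):

```cpp
// Sketch: a wrapper type that adds only compile-time tags is layout-
// compatible with its base, so an entry of the base type can be viewed
// as an entry of the wrapper type.
struct Block {
  int num_restarts = 0;
};
struct Block_kData : public Block {
  static constexpr int kBlockType = 0;  // tag only, no data members
};

template <typename T>
struct Entry {
  T* value = nullptr;

  template <typename U>
  Entry<U>& As() {
    static_assert(sizeof(U) == sizeof(T), "tag-only wrapper expected");
    return *reinterpret_cast<Entry<U>*>(this);
  }
};

int main() {
  Entry<Block> e;
  Entry<Block_kData>& typed = e.As<Block_kData>();
  (void)typed;
}
```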
@ -0,0 +1,96 @@
|
||||||
|
// Copyright (c) Meta Platforms, Inc. and affiliates.
|
||||||
|
// This source code is licensed under both the GPLv2 (found in the
|
||||||
|
// COPYING file in the root directory) and Apache 2.0 License
|
||||||
|
// (found in the LICENSE.Apache file in the root directory).
|
||||||
|
|
||||||
|
#include "table/block_based/block_cache.h"
|
||||||
|
|
||||||
|
namespace ROCKSDB_NAMESPACE {
|
||||||
|
|
||||||
|
void BlockCreateContext::Create(std::unique_ptr<Block_kData>* parsed_out,
|
||||||
|
BlockContents&& block) {
|
||||||
|
parsed_out->reset(new Block_kData(
|
||||||
|
std::move(block), table_options->read_amp_bytes_per_bit, statistics));
|
||||||
|
}
|
||||||
|
void BlockCreateContext::Create(std::unique_ptr<Block_kIndex>* parsed_out,
|
||||||
|
BlockContents&& block) {
|
||||||
|
parsed_out->reset(new Block_kIndex(std::move(block),
|
||||||
|
/*read_amp_bytes_per_bit*/ 0, statistics));
|
||||||
|
}
|
||||||
|
void BlockCreateContext::Create(
|
||||||
|
std::unique_ptr<Block_kFilterPartitionIndex>* parsed_out,
|
||||||
|
BlockContents&& block) {
|
||||||
|
parsed_out->reset(new Block_kFilterPartitionIndex(
|
||||||
|
std::move(block), /*read_amp_bytes_per_bit*/ 0, statistics));
|
||||||
|
}
|
||||||
|
void BlockCreateContext::Create(
|
||||||
|
std::unique_ptr<Block_kRangeDeletion>* parsed_out, BlockContents&& block) {
|
||||||
|
parsed_out->reset(new Block_kRangeDeletion(
|
||||||
|
std::move(block), /*read_amp_bytes_per_bit*/ 0, statistics));
|
||||||
|
}
|
||||||
|
void BlockCreateContext::Create(std::unique_ptr<Block_kMetaIndex>* parsed_out,
|
||||||
|
BlockContents&& block) {
|
||||||
|
parsed_out->reset(new Block_kMetaIndex(
|
||||||
|
std::move(block), /*read_amp_bytes_per_bit*/ 0, statistics));
|
||||||
|
}
|
||||||
|
|
||||||
|
void BlockCreateContext::Create(
|
||||||
|
std::unique_ptr<ParsedFullFilterBlock>* parsed_out, BlockContents&& block) {
|
||||||
|
parsed_out->reset(new ParsedFullFilterBlock(
|
||||||
|
table_options->filter_policy.get(), std::move(block)));
|
||||||
|
}
|
||||||
|
|
||||||
|
void BlockCreateContext::Create(std::unique_ptr<UncompressionDict>* parsed_out,
|
||||||
|
BlockContents&& block) {
|
||||||
|
parsed_out->reset(new UncompressionDict(
|
||||||
|
block.data, std::move(block.allocation), using_zstd));
|
||||||
|
}
|
||||||
|
|
||||||
|
namespace {
|
||||||
|
// For getting SecondaryCache-compatible helpers from a BlockType. This is
|
||||||
|
// useful for accessing block cache in untyped contexts, such as for generic
|
||||||
|
// cache warming in table builder.
|
||||||
|
constexpr std::array<const Cache::CacheItemHelper*,
|
||||||
|
static_cast<unsigned>(BlockType::kInvalid) + 1>
|
||||||
|
kCacheItemFullHelperForBlockType{{
|
||||||
|
&BlockCacheInterface<Block_kData>::kFullHelper,
|
||||||
|
&BlockCacheInterface<ParsedFullFilterBlock>::kFullHelper,
|
||||||
|
&BlockCacheInterface<Block_kFilterPartitionIndex>::kFullHelper,
|
||||||
|
nullptr, // kProperties
|
||||||
|
&BlockCacheInterface<UncompressionDict>::kFullHelper,
|
||||||
|
&BlockCacheInterface<Block_kRangeDeletion>::kFullHelper,
|
||||||
|
nullptr, // kHashIndexPrefixes
|
||||||
|
nullptr, // kHashIndexMetadata
|
||||||
|
nullptr, // kMetaIndex (not yet stored in block cache)
|
||||||
|
&BlockCacheInterface<Block_kIndex>::kFullHelper,
|
||||||
|
nullptr, // kInvalid
|
||||||
|
}};
|
||||||
|
|
||||||
|
// For getting basic helpers from a BlockType (no SecondaryCache support)
|
||||||
|
constexpr std::array<const Cache::CacheItemHelper*,
|
||||||
|
static_cast<unsigned>(BlockType::kInvalid) + 1>
|
||||||
|
kCacheItemBasicHelperForBlockType{{
|
||||||
|
&BlockCacheInterface<Block_kData>::kBasicHelper,
|
||||||
|
&BlockCacheInterface<ParsedFullFilterBlock>::kBasicHelper,
|
||||||
|
&BlockCacheInterface<Block_kFilterPartitionIndex>::kBasicHelper,
|
||||||
|
nullptr, // kProperties
|
||||||
|
&BlockCacheInterface<UncompressionDict>::kBasicHelper,
|
||||||
|
&BlockCacheInterface<Block_kRangeDeletion>::kBasicHelper,
|
||||||
|
nullptr, // kHashIndexPrefixes
|
||||||
|
nullptr, // kHashIndexMetadata
|
||||||
|
nullptr, // kMetaIndex (not yet stored in block cache)
|
||||||
|
&BlockCacheInterface<Block_kIndex>::kBasicHelper,
|
||||||
|
nullptr, // kInvalid
|
||||||
|
}};
|
||||||
|
} // namespace
|
||||||
|
|
||||||
|
const Cache::CacheItemHelper* GetCacheItemHelper(
|
||||||
|
BlockType block_type, CacheTier lowest_used_cache_tier) {
|
||||||
|
if (lowest_used_cache_tier == CacheTier::kNonVolatileBlockTier) {
|
||||||
|
return kCacheItemFullHelperForBlockType[static_cast<unsigned>(block_type)];
|
||||||
|
} else {
|
||||||
|
return kCacheItemBasicHelperForBlockType[static_cast<unsigned>(block_type)];
|
||||||
|
}
|
||||||
|
}
|
||||||
|
|
||||||
|
} // namespace ROCKSDB_NAMESPACE
|
|
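As a minimal sketch of the tier-based selection implemented by GetCacheItemHelper above (illustrative only, not part of this patch; assumes the block_cache.h header introduced in this change):

#include <cassert>

#include "table/block_based/block_cache.h"

namespace ROCKSDB_NAMESPACE {
void HelperSelectionSketch() {
  // When a non-volatile (secondary) tier may be used, the "full" helper
  // is selected; it carries the callbacks needed to save and recreate
  // the block across tiers.
  const Cache::CacheItemHelper* full =
      GetCacheItemHelper(BlockType::kData, CacheTier::kNonVolatileBlockTier);

  // When only the volatile tier is used, the basic helper suffices: it
  // only needs to know how to size and delete the in-memory object.
  const Cache::CacheItemHelper* basic =
      GetCacheItemHelper(BlockType::kData, CacheTier::kVolatileTier);

  assert(full != basic);
  (void)full;
  (void)basic;
}
}  // namespace ROCKSDB_NAMESPACE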
@ -0,0 +1,132 @@
|
||||||
|
// Copyright (c) Meta Platforms, Inc. and affiliates.
|
||||||
|
// This source code is licensed under both the GPLv2 (found in the
|
||||||
|
// COPYING file in the root directory) and Apache 2.0 License
|
||||||
|
// (found in the LICENSE.Apache file in the root directory).
|
||||||
|
|
||||||
|
// Code supporting block cache (Cache) access for block-based table, based on
|
||||||
|
// the convenient APIs in typed_cache.h
|
||||||
|
|
||||||
|
#pragma once
|
||||||
|
|
||||||
|
#include <type_traits>
|
||||||
|
|
||||||
|
#include "cache/typed_cache.h"
|
||||||
|
#include "port/lang.h"
|
||||||
|
#include "table/block_based/block.h"
|
||||||
|
#include "table/block_based/block_type.h"
|
||||||
|
#include "table/block_based/parsed_full_filter_block.h"
|
||||||
|
#include "table/format.h"
|
||||||
|
|
||||||
|
namespace ROCKSDB_NAMESPACE {
|
||||||
|
|
||||||
|
// Metaprogramming wrappers for Block, to give each type a single role when
|
||||||
|
// used with FullTypedCacheInterface.
|
||||||
|
// (NOTE: previous attempts to create actual derived classes of Block with
|
||||||
|
// virtual calls resulted in performance regression)
|
||||||
|
|
||||||
|
class Block_kData : public Block {
|
||||||
|
public:
|
||||||
|
using Block::Block;
|
||||||
|
|
||||||
|
static constexpr CacheEntryRole kCacheEntryRole = CacheEntryRole::kDataBlock;
|
||||||
|
static constexpr BlockType kBlockType = BlockType::kData;
|
||||||
|
};
|
||||||
|
|
||||||
|
class Block_kIndex : public Block {
|
||||||
|
public:
|
||||||
|
using Block::Block;
|
||||||
|
|
||||||
|
static constexpr CacheEntryRole kCacheEntryRole = CacheEntryRole::kIndexBlock;
|
||||||
|
static constexpr BlockType kBlockType = BlockType::kIndex;
|
||||||
|
};
|
||||||
|
|
||||||
|
class Block_kFilterPartitionIndex : public Block {
|
||||||
|
public:
|
||||||
|
using Block::Block;
|
||||||
|
|
||||||
|
static constexpr CacheEntryRole kCacheEntryRole =
|
||||||
|
CacheEntryRole::kFilterMetaBlock;
|
||||||
|
static constexpr BlockType kBlockType = BlockType::kFilterPartitionIndex;
|
||||||
|
};
|
||||||
|
|
||||||
|
class Block_kRangeDeletion : public Block {
|
||||||
|
public:
|
||||||
|
using Block::Block;
|
||||||
|
|
||||||
|
static constexpr CacheEntryRole kCacheEntryRole = CacheEntryRole::kOtherBlock;
|
||||||
|
static constexpr BlockType kBlockType = BlockType::kRangeDeletion;
|
||||||
|
};
|
||||||
|
|
||||||
|
// Useful for creating the Block even though meta index blocks are not
|
||||||
|
// yet stored in block cache
|
||||||
|
class Block_kMetaIndex : public Block {
|
||||||
|
public:
|
||||||
|
using Block::Block;
|
||||||
|
|
||||||
|
static constexpr CacheEntryRole kCacheEntryRole = CacheEntryRole::kOtherBlock;
|
||||||
|
static constexpr BlockType kBlockType = BlockType::kMetaIndex;
|
||||||
|
};
|
||||||
|
|
||||||
|
struct BlockCreateContext : public Cache::CreateContext {
|
||||||
|
BlockCreateContext() {}
|
||||||
|
BlockCreateContext(const BlockBasedTableOptions* _table_options,
|
||||||
|
Statistics* _statistics, bool _using_zstd)
|
||||||
|
: table_options(_table_options),
|
||||||
|
statistics(_statistics),
|
||||||
|
using_zstd(_using_zstd) {}
|
||||||
|
|
||||||
|
const BlockBasedTableOptions* table_options = nullptr;
|
||||||
|
Statistics* statistics = nullptr;
|
||||||
|
bool using_zstd = false;
|
||||||
|
|
||||||
|
// For TypedCacheInterface
|
||||||
|
template <typename TBlocklike>
|
||||||
|
inline void Create(std::unique_ptr<TBlocklike>* parsed_out,
|
||||||
|
size_t* charge_out, const Slice& data,
|
||||||
|
MemoryAllocator* alloc) {
|
||||||
|
Create(parsed_out,
|
||||||
|
BlockContents(AllocateAndCopyBlock(data, alloc), data.size()));
|
||||||
|
*charge_out = parsed_out->get()->ApproximateMemoryUsage();
|
||||||
|
}
|
||||||
|
|
||||||
|
void Create(std::unique_ptr<Block_kData>* parsed_out, BlockContents&& block);
|
||||||
|
void Create(std::unique_ptr<Block_kIndex>* parsed_out, BlockContents&& block);
|
||||||
|
void Create(std::unique_ptr<Block_kFilterPartitionIndex>* parsed_out,
|
||||||
|
BlockContents&& block);
|
||||||
|
void Create(std::unique_ptr<Block_kRangeDeletion>* parsed_out,
|
||||||
|
BlockContents&& block);
|
||||||
|
void Create(std::unique_ptr<Block_kMetaIndex>* parsed_out,
|
||||||
|
BlockContents&& block);
|
||||||
|
void Create(std::unique_ptr<ParsedFullFilterBlock>* parsed_out,
|
||||||
|
BlockContents&& block);
|
||||||
|
void Create(std::unique_ptr<UncompressionDict>* parsed_out,
|
||||||
|
BlockContents&& block);
|
||||||
|
};
|
||||||
|
|
||||||
|
// Convenient cache interface to use with block_cache_compressed
|
||||||
|
using CompressedBlockCacheInterface =
|
||||||
|
BasicTypedCacheInterface<BlockContents, CacheEntryRole::kOtherBlock>;
|
||||||
|
|
||||||
|
// Convenient cache interface to use for block_cache, with support for
|
||||||
|
// SecondaryCache.
|
||||||
|
template <typename TBlocklike>
|
||||||
|
using BlockCacheInterface =
|
||||||
|
FullTypedCacheInterface<TBlocklike, BlockCreateContext>;
|
||||||
|
|
||||||
|
// Shortcut name for cache handles under BlockCacheInterface
|
||||||
|
template <typename TBlocklike>
|
||||||
|
using BlockCacheTypedHandle =
|
||||||
|
typename BlockCacheInterface<TBlocklike>::TypedHandle;
|
||||||
|
|
||||||
|
// Selects the right helper based on BlockType and CacheTier
|
||||||
|
const Cache::CacheItemHelper* GetCacheItemHelper(
|
||||||
|
BlockType block_type,
|
||||||
|
CacheTier lowest_used_cache_tier = CacheTier::kNonVolatileBlockTier);
|
||||||
|
|
||||||
|
// For SFINAE check that a type is "blocklike" with a kCacheEntryRole member.
|
||||||
|
// Without a good check like this, you can get difficult compiler/linker errors.
|
||||||
|
template <typename TUse, typename TBlocklike>
|
||||||
|
using WithBlocklikeCheck = std::enable_if_t<
|
||||||
|
TBlocklike::kCacheEntryRole == CacheEntryRole::kMisc || true, TUse>;
|
||||||
|
|
||||||
|
} // namespace ROCKSDB_NAMESPACE
|
|
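A brief sketch of how these pieces compose (hedged: the member functions of FullTypedCacheInterface live in typed_cache.h and are not shown in this hunk, so only the types and constants declared above are exercised):

#include "table/block_based/block_cache.h"

namespace ROCKSDB_NAMESPACE {
void TypedAliasSketch(const BlockBasedTableOptions& opts, Statistics* stats) {
  // The context a reader hands to the cache so that a secondary-cache
  // hit can be parsed back into the right in-memory object via the
  // Create overloads declared above.
  BlockCreateContext ctx(&opts, stats, /*_using_zstd=*/false);

  // Each wrapper type pins one BlockType/CacheEntryRole pair at compile
  // time, which is what lets the cache interface be fully typed.
  static_assert(Block_kData::kBlockType == BlockType::kData,
                "wrapper/type agreement");
  static_assert(Block_kData::kCacheEntryRole == CacheEntryRole::kDataBlock,
                "wrapper/role agreement");
  (void)ctx;
}
}  // namespace ROCKSDB_NAMESPACE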
@ -1,182 +0,0 @@
|
||||||
// Copyright (c) Facebook, Inc. and its affiliates. All Rights Reserved.
|
|
||||||
// This source code is licensed under both the GPLv2 (found in the
|
|
||||||
// COPYING file in the root directory) and Apache 2.0 License
|
|
||||||
// (found in the LICENSE.Apache file in the root directory).
|
|
||||||
|
|
||||||
#pragma once
|
|
||||||
|
|
||||||
#include "cache/cache_entry_roles.h"
|
|
||||||
#include "port/lang.h"
|
|
||||||
#include "table/block_based/block.h"
|
|
||||||
#include "table/block_based/block_type.h"
|
|
||||||
#include "table/block_based/parsed_full_filter_block.h"
|
|
||||||
#include "table/format.h"
|
|
||||||
|
|
||||||
namespace ROCKSDB_NAMESPACE {
|
|
||||||
|
|
||||||
template <typename TBlocklike>
|
|
||||||
class BlocklikeTraits;
|
|
||||||
|
|
||||||
template <typename T, CacheEntryRole R>
|
|
||||||
Cache::CacheItemHelper* GetCacheItemHelperForRole();
|
|
||||||
|
|
||||||
template <typename TBlocklike>
|
|
||||||
Cache::CreateCallback GetCreateCallback(size_t read_amp_bytes_per_bit,
|
|
||||||
Statistics* statistics, bool using_zstd,
|
|
||||||
const FilterPolicy* filter_policy) {
|
|
||||||
return [read_amp_bytes_per_bit, statistics, using_zstd, filter_policy](
|
|
||||||
const void* buf, size_t size, void** out_obj,
|
|
||||||
size_t* charge) -> Status {
|
|
||||||
assert(buf != nullptr);
|
|
||||||
std::unique_ptr<char[]> buf_data(new char[size]());
|
|
||||||
memcpy(buf_data.get(), buf, size);
|
|
||||||
BlockContents bc = BlockContents(std::move(buf_data), size);
|
|
||||||
TBlocklike* ucd_ptr = BlocklikeTraits<TBlocklike>::Create(
|
|
||||||
std::move(bc), read_amp_bytes_per_bit, statistics, using_zstd,
|
|
||||||
filter_policy);
|
|
||||||
*out_obj = reinterpret_cast<void*>(ucd_ptr);
|
|
||||||
*charge = size;
|
|
||||||
return Status::OK();
|
|
||||||
};
|
|
||||||
}
|
|
||||||
|
|
||||||
template <>
|
|
||||||
class BlocklikeTraits<ParsedFullFilterBlock> {
|
|
||||||
public:
|
|
||||||
static ParsedFullFilterBlock* Create(BlockContents&& contents,
|
|
||||||
size_t /* read_amp_bytes_per_bit */,
|
|
||||||
Statistics* /* statistics */,
|
|
||||||
bool /* using_zstd */,
|
|
||||||
const FilterPolicy* filter_policy) {
|
|
||||||
return new ParsedFullFilterBlock(filter_policy, std::move(contents));
|
|
||||||
}
|
|
||||||
|
|
||||||
static uint32_t GetNumRestarts(const ParsedFullFilterBlock& /* block */) {
|
|
||||||
return 0;
|
|
||||||
}
|
|
||||||
|
|
||||||
static size_t SizeCallback(void* obj) {
|
|
||||||
assert(obj != nullptr);
|
|
||||||
ParsedFullFilterBlock* ptr = static_cast<ParsedFullFilterBlock*>(obj);
|
|
||||||
return ptr->GetBlockContentsData().size();
|
|
||||||
}
|
|
||||||
|
|
||||||
static Status SaveToCallback(void* from_obj, size_t from_offset,
|
|
||||||
size_t length, void* out) {
|
|
||||||
assert(from_obj != nullptr);
|
|
||||||
ParsedFullFilterBlock* ptr = static_cast<ParsedFullFilterBlock*>(from_obj);
|
|
||||||
const char* buf = ptr->GetBlockContentsData().data();
|
|
||||||
assert(length == ptr->GetBlockContentsData().size());
|
|
||||||
(void)from_offset;
|
|
||||||
memcpy(out, buf, length);
|
|
||||||
return Status::OK();
|
|
||||||
}
|
|
||||||
|
|
||||||
static Cache::CacheItemHelper* GetCacheItemHelper(BlockType block_type) {
|
|
||||||
(void)block_type;
|
|
||||||
assert(block_type == BlockType::kFilter);
|
|
||||||
return GetCacheItemHelperForRole<ParsedFullFilterBlock,
|
|
||||||
CacheEntryRole::kFilterBlock>();
|
|
||||||
}
|
|
||||||
};
|
|
||||||
|
|
||||||
template <>
|
|
||||||
class BlocklikeTraits<Block> {
|
|
||||||
public:
|
|
||||||
static Block* Create(BlockContents&& contents, size_t read_amp_bytes_per_bit,
|
|
||||||
Statistics* statistics, bool /* using_zstd */,
|
|
||||||
const FilterPolicy* /* filter_policy */) {
|
|
||||||
return new Block(std::move(contents), read_amp_bytes_per_bit, statistics);
|
|
||||||
}
|
|
||||||
|
|
||||||
static uint32_t GetNumRestarts(const Block& block) {
|
|
||||||
return block.NumRestarts();
|
|
||||||
}
|
|
||||||
|
|
||||||
static size_t SizeCallback(void* obj) {
|
|
||||||
assert(obj != nullptr);
|
|
||||||
Block* ptr = static_cast<Block*>(obj);
|
|
||||||
return ptr->size();
|
|
||||||
}
|
|
||||||
|
|
||||||
static Status SaveToCallback(void* from_obj, size_t from_offset,
|
|
||||||
size_t length, void* out) {
|
|
||||||
assert(from_obj != nullptr);
|
|
||||||
Block* ptr = static_cast<Block*>(from_obj);
|
|
||||||
const char* buf = ptr->data();
|
|
||||||
assert(length == ptr->size());
|
|
||||||
(void)from_offset;
|
|
||||||
memcpy(out, buf, length);
|
|
||||||
return Status::OK();
|
|
||||||
}
|
|
||||||
|
|
||||||
static Cache::CacheItemHelper* GetCacheItemHelper(BlockType block_type) {
|
|
||||||
switch (block_type) {
|
|
||||||
case BlockType::kData:
|
|
||||||
return GetCacheItemHelperForRole<Block, CacheEntryRole::kDataBlock>();
|
|
||||||
case BlockType::kIndex:
|
|
||||||
return GetCacheItemHelperForRole<Block, CacheEntryRole::kIndexBlock>();
|
|
||||||
case BlockType::kFilterPartitionIndex:
|
|
||||||
return GetCacheItemHelperForRole<Block,
|
|
||||||
CacheEntryRole::kFilterMetaBlock>();
|
|
||||||
default:
|
|
||||||
// Not a recognized combination
|
|
||||||
assert(false);
|
|
||||||
FALLTHROUGH_INTENDED;
|
|
||||||
case BlockType::kRangeDeletion:
|
|
||||||
return GetCacheItemHelperForRole<Block, CacheEntryRole::kOtherBlock>();
|
|
||||||
}
|
|
||||||
}
|
|
||||||
};
|
|
||||||
|
|
||||||
template <>
|
|
||||||
class BlocklikeTraits<UncompressionDict> {
|
|
||||||
public:
|
|
||||||
static UncompressionDict* Create(BlockContents&& contents,
|
|
||||||
size_t /* read_amp_bytes_per_bit */,
|
|
||||||
Statistics* /* statistics */,
|
|
||||||
bool using_zstd,
|
|
||||||
const FilterPolicy* /* filter_policy */) {
|
|
||||||
return new UncompressionDict(contents.data, std::move(contents.allocation),
|
|
||||||
using_zstd);
|
|
||||||
}
|
|
||||||
|
|
||||||
static uint32_t GetNumRestarts(const UncompressionDict& /* dict */) {
|
|
||||||
return 0;
|
|
||||||
}
|
|
||||||
|
|
||||||
static size_t SizeCallback(void* obj) {
|
|
||||||
assert(obj != nullptr);
|
|
||||||
UncompressionDict* ptr = static_cast<UncompressionDict*>(obj);
|
|
||||||
return ptr->slice_.size();
|
|
||||||
}
|
|
||||||
|
|
||||||
static Status SaveToCallback(void* from_obj, size_t from_offset,
|
|
||||||
size_t length, void* out) {
|
|
||||||
assert(from_obj != nullptr);
|
|
||||||
UncompressionDict* ptr = static_cast<UncompressionDict*>(from_obj);
|
|
||||||
const char* buf = ptr->slice_.data();
|
|
||||||
assert(length == ptr->slice_.size());
|
|
||||||
(void)from_offset;
|
|
||||||
memcpy(out, buf, length);
|
|
||||||
return Status::OK();
|
|
||||||
}
|
|
||||||
|
|
||||||
static Cache::CacheItemHelper* GetCacheItemHelper(BlockType block_type) {
|
|
||||||
(void)block_type;
|
|
||||||
assert(block_type == BlockType::kCompressionDictionary);
|
|
||||||
return GetCacheItemHelperForRole<UncompressionDict,
|
|
||||||
CacheEntryRole::kOtherBlock>();
|
|
||||||
}
|
|
||||||
};
|
|
||||||
|
|
||||||
// Get a CacheItemHelper pointer for value type T and role R.
|
|
||||||
template <typename T, CacheEntryRole R>
|
|
||||||
Cache::CacheItemHelper* GetCacheItemHelperForRole() {
|
|
||||||
static Cache::CacheItemHelper cache_helper(
|
|
||||||
BlocklikeTraits<T>::SizeCallback, BlocklikeTraits<T>::SaveToCallback,
|
|
||||||
GetCacheEntryDeleterForRole<T, R>());
|
|
||||||
return &cache_helper;
|
|
||||||
}
|
|
||||||
|
|
||||||
} // namespace ROCKSDB_NAMESPACE
|
|
|
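The deleted GetCreateCallback above built a fresh capturing std::function per use. A rough sketch of the shape difference, using hypothetical stand-in types rather than the real internal signatures:

#include <cstddef>
#include <functional>

// Hypothetical stand-in for per-reader state; not a RocksDB type.
struct ReaderState {
  size_t read_amp_bytes_per_bit = 0;
};

// Old shape: each call materializes a capturing std::function, which
// typically costs a heap allocation per callback object.
std::function<size_t(const char*, size_t)> MakeOldStyleCb(ReaderState st) {
  return [st](const char* /*buf*/, size_t size) {
    return size + st.read_amp_bytes_per_bit;
  };
}

// New shape: one stateless function shared by all lookups; per-reader
// state travels through a context pointer instead of a closure.
size_t NewStyleCb(ReaderState* ctx, const char* /*buf*/, size_t size) {
  return size + ctx->read_amp_bytes_per_bit;
}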
@ -10,6 +10,7 @@
|
||||||
#pragma once
|
#pragma once
|
||||||
|
|
||||||
#include <cassert>
|
#include <cassert>
|
||||||
|
#include <type_traits>
|
||||||
|
|
||||||
#include "port/likely.h"
|
#include "port/likely.h"
|
||||||
#include "rocksdb/cache.h"
|
#include "rocksdb/cache.h"
|
||||||
|
@ -191,6 +192,29 @@ class CachableEntry {
|
||||||
return true;
|
return true;
|
||||||
}
|
}
|
||||||
|
|
||||||
|
// Since this class is essentially an elaborate pointer, it's sometimes
|
||||||
|
// useful to be able to upcast or downcast the base type of the pointer,
|
||||||
|
// especially when interacting with typed_cache.h.
|
||||||
|
template <class TWrapper>
|
||||||
|
std::enable_if_t<sizeof(TWrapper) == sizeof(T) &&
|
||||||
|
(std::is_base_of_v<TWrapper, T> ||
|
||||||
|
std::is_base_of_v<T, TWrapper>),
|
||||||
|
/* Actual return type */
|
||||||
|
CachableEntry<TWrapper>&>
|
||||||
|
As() {
|
||||||
|
CachableEntry<TWrapper>* result_ptr =
|
||||||
|
reinterpret_cast<CachableEntry<TWrapper>*>(this);
|
||||||
|
// Ensure no weirdness in template instantiations
|
||||||
|
assert(static_cast<void*>(&this->value_) ==
|
||||||
|
static_cast<void*>(&result_ptr->value_));
|
||||||
|
assert(&this->cache_handle_ == &result_ptr->cache_handle_);
|
||||||
|
// This function depends on no arithmetic involved in the pointer
|
||||||
|
// conversion, which is not statically checkable.
|
||||||
|
assert(static_cast<void*>(this->value_) ==
|
||||||
|
static_cast<void*>(result_ptr->value_));
|
||||||
|
return *result_ptr;
|
||||||
|
}
|
||||||
|
|
||||||
private:
|
private:
|
||||||
void ReleaseResource() noexcept {
|
void ReleaseResource() noexcept {
|
||||||
if (LIKELY(cache_handle_ != nullptr)) {
|
if (LIKELY(cache_handle_ != nullptr)) {
|
||||||
|
@ -223,6 +247,10 @@ class CachableEntry {
|
||||||
}
|
}
|
||||||
|
|
||||||
private:
|
private:
|
||||||
|
// Have to be your own best friend
|
||||||
|
template <class TT>
|
||||||
|
friend class CachableEntry;
|
||||||
|
|
||||||
T* value_ = nullptr;
|
T* value_ = nullptr;
|
||||||
Cache* cache_ = nullptr;
|
Cache* cache_ = nullptr;
|
||||||
Cache::Handle* cache_handle_ = nullptr;
|
Cache::Handle* cache_handle_ = nullptr;
|
||||||
|
|
|
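A minimal sketch of how this As<>() conversion is used by callers later in this change (e.g. in partitioned_index_reader.cc); assumes the internal headers above:

#include "table/block_based/block_cache.h"
#include "table/block_based/cachable_entry.h"

namespace ROCKSDB_NAMESPACE {
void AsConversionSketch(CachableEntry<Block>& block) {
  // Block_kIndex derives from Block and adds no state, so the entry can
  // be reinterpreted in place; the asserts inside As() double-check the
  // layout assumption at runtime.
  CachableEntry<Block_kIndex>& as_index = block.As<Block_kIndex>();
  (void)as_index;
}
}  // namespace ROCKSDB_NAMESPACE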
@ -6,6 +6,7 @@
|
||||||
|
|
||||||
#include "table/block_based/filter_block_reader_common.h"
|
#include "table/block_based/filter_block_reader_common.h"
|
||||||
|
|
||||||
|
#include "block_cache.h"
|
||||||
#include "monitoring/perf_context_imp.h"
|
#include "monitoring/perf_context_imp.h"
|
||||||
#include "table/block_based/block_based_table_reader.h"
|
#include "table/block_based/block_based_table_reader.h"
|
||||||
#include "table/block_based/parsed_full_filter_block.h"
|
#include "table/block_based/parsed_full_filter_block.h"
|
||||||
|
@ -17,7 +18,7 @@ Status FilterBlockReaderCommon<TBlocklike>::ReadFilterBlock(
|
||||||
const BlockBasedTable* table, FilePrefetchBuffer* prefetch_buffer,
|
const BlockBasedTable* table, FilePrefetchBuffer* prefetch_buffer,
|
||||||
const ReadOptions& read_options, bool use_cache, GetContext* get_context,
|
const ReadOptions& read_options, bool use_cache, GetContext* get_context,
|
||||||
BlockCacheLookupContext* lookup_context,
|
BlockCacheLookupContext* lookup_context,
|
||||||
CachableEntry<TBlocklike>* filter_block, BlockType block_type) {
|
CachableEntry<TBlocklike>* filter_block) {
|
||||||
PERF_TIMER_GUARD(read_filter_block_nanos);
|
PERF_TIMER_GUARD(read_filter_block_nanos);
|
||||||
|
|
||||||
assert(table);
|
assert(table);
|
||||||
|
@ -30,7 +31,7 @@ Status FilterBlockReaderCommon<TBlocklike>::ReadFilterBlock(
|
||||||
const Status s =
|
const Status s =
|
||||||
table->RetrieveBlock(prefetch_buffer, read_options, rep->filter_handle,
|
table->RetrieveBlock(prefetch_buffer, read_options, rep->filter_handle,
|
||||||
UncompressionDict::GetEmptyDict(), filter_block,
|
UncompressionDict::GetEmptyDict(), filter_block,
|
||||||
block_type, get_context, lookup_context,
|
get_context, lookup_context,
|
||||||
/* for_compaction */ false, use_cache,
|
/* for_compaction */ false, use_cache,
|
||||||
/* wait_for_cache */ true, /* async_read */ false);
|
/* wait_for_cache */ true, /* async_read */ false);
|
||||||
|
|
||||||
|
@ -68,7 +69,7 @@ template <typename TBlocklike>
|
||||||
Status FilterBlockReaderCommon<TBlocklike>::GetOrReadFilterBlock(
|
Status FilterBlockReaderCommon<TBlocklike>::GetOrReadFilterBlock(
|
||||||
bool no_io, GetContext* get_context,
|
bool no_io, GetContext* get_context,
|
||||||
BlockCacheLookupContext* lookup_context,
|
BlockCacheLookupContext* lookup_context,
|
||||||
CachableEntry<TBlocklike>* filter_block, BlockType block_type,
|
CachableEntry<TBlocklike>* filter_block,
|
||||||
Env::IOPriority rate_limiter_priority) const {
|
Env::IOPriority rate_limiter_priority) const {
|
||||||
assert(filter_block);
|
assert(filter_block);
|
||||||
|
|
||||||
|
@ -85,7 +86,7 @@ Status FilterBlockReaderCommon<TBlocklike>::GetOrReadFilterBlock(
|
||||||
|
|
||||||
return ReadFilterBlock(table_, nullptr /* prefetch_buffer */, read_options,
|
return ReadFilterBlock(table_, nullptr /* prefetch_buffer */, read_options,
|
||||||
cache_filter_blocks(), get_context, lookup_context,
|
cache_filter_blocks(), get_context, lookup_context,
|
||||||
filter_block, block_type);
|
filter_block);
|
||||||
}
|
}
|
||||||
|
|
||||||
template <typename TBlocklike>
|
template <typename TBlocklike>
|
||||||
|
@ -158,7 +159,7 @@ bool FilterBlockReaderCommon<TBlocklike>::IsFilterCompatible(
|
||||||
|
|
||||||
// Explicitly instantiate templates for both "blocklike" types we use.
|
// Explicitly instantiate templates for both "blocklike" types we use.
|
||||||
// This makes it possible to keep the template definitions in the .cc file.
|
// This makes it possible to keep the template definitions in the .cc file.
|
||||||
template class FilterBlockReaderCommon<Block>;
|
template class FilterBlockReaderCommon<Block_kFilterPartitionIndex>;
|
||||||
template class FilterBlockReaderCommon<ParsedFullFilterBlock>;
|
template class FilterBlockReaderCommon<ParsedFullFilterBlock>;
|
||||||
|
|
||||||
} // namespace ROCKSDB_NAMESPACE
|
} // namespace ROCKSDB_NAMESPACE
|
||||||
|
|
|
@ -8,7 +8,6 @@
|
||||||
|
|
||||||
#include <cassert>
|
#include <cassert>
|
||||||
|
|
||||||
#include "block_type.h"
|
|
||||||
#include "table/block_based/cachable_entry.h"
|
#include "table/block_based/cachable_entry.h"
|
||||||
#include "table/block_based/filter_block.h"
|
#include "table/block_based/filter_block.h"
|
||||||
|
|
||||||
|
@ -49,8 +48,7 @@ class FilterBlockReaderCommon : public FilterBlockReader {
|
||||||
const ReadOptions& read_options, bool use_cache,
|
const ReadOptions& read_options, bool use_cache,
|
||||||
GetContext* get_context,
|
GetContext* get_context,
|
||||||
BlockCacheLookupContext* lookup_context,
|
BlockCacheLookupContext* lookup_context,
|
||||||
CachableEntry<TBlocklike>* filter_block,
|
CachableEntry<TBlocklike>* filter_block);
|
||||||
BlockType block_type);
|
|
||||||
|
|
||||||
const BlockBasedTable* table() const { return table_; }
|
const BlockBasedTable* table() const { return table_; }
|
||||||
const SliceTransform* table_prefix_extractor() const;
|
const SliceTransform* table_prefix_extractor() const;
|
||||||
|
@ -60,7 +58,6 @@ class FilterBlockReaderCommon : public FilterBlockReader {
|
||||||
Status GetOrReadFilterBlock(bool no_io, GetContext* get_context,
|
Status GetOrReadFilterBlock(bool no_io, GetContext* get_context,
|
||||||
BlockCacheLookupContext* lookup_context,
|
BlockCacheLookupContext* lookup_context,
|
||||||
CachableEntry<TBlocklike>* filter_block,
|
CachableEntry<TBlocklike>* filter_block,
|
||||||
BlockType block_type,
|
|
||||||
Env::IOPriority rate_limiter_priority) const;
|
Env::IOPriority rate_limiter_priority) const;
|
||||||
|
|
||||||
size_t ApproximateFilterBlockMemoryUsage() const;
|
size_t ApproximateFilterBlockMemoryUsage() const;
|
||||||
|
|
|
@ -147,7 +147,7 @@ std::unique_ptr<FilterBlockReader> FullFilterBlockReader::Create(
|
||||||
if (prefetch || !use_cache) {
|
if (prefetch || !use_cache) {
|
||||||
const Status s = ReadFilterBlock(table, prefetch_buffer, ro, use_cache,
|
const Status s = ReadFilterBlock(table, prefetch_buffer, ro, use_cache,
|
||||||
nullptr /* get_context */, lookup_context,
|
nullptr /* get_context */, lookup_context,
|
||||||
&filter_block, BlockType::kFilter);
|
&filter_block);
|
||||||
if (!s.ok()) {
|
if (!s.ok()) {
|
||||||
IGNORE_STATUS_IF_ERROR(s);
|
IGNORE_STATUS_IF_ERROR(s);
|
||||||
return std::unique_ptr<FilterBlockReader>();
|
return std::unique_ptr<FilterBlockReader>();
|
||||||
|
@ -177,9 +177,8 @@ bool FullFilterBlockReader::MayMatch(
|
||||||
Env::IOPriority rate_limiter_priority) const {
|
Env::IOPriority rate_limiter_priority) const {
|
||||||
CachableEntry<ParsedFullFilterBlock> filter_block;
|
CachableEntry<ParsedFullFilterBlock> filter_block;
|
||||||
|
|
||||||
const Status s =
|
const Status s = GetOrReadFilterBlock(no_io, get_context, lookup_context,
|
||||||
GetOrReadFilterBlock(no_io, get_context, lookup_context, &filter_block,
|
&filter_block, rate_limiter_priority);
|
||||||
BlockType::kFilter, rate_limiter_priority);
|
|
||||||
if (!s.ok()) {
|
if (!s.ok()) {
|
||||||
IGNORE_STATUS_IF_ERROR(s);
|
IGNORE_STATUS_IF_ERROR(s);
|
||||||
return true;
|
return true;
|
||||||
|
@ -228,9 +227,9 @@ void FullFilterBlockReader::MayMatch(
|
||||||
Env::IOPriority rate_limiter_priority) const {
|
Env::IOPriority rate_limiter_priority) const {
|
||||||
CachableEntry<ParsedFullFilterBlock> filter_block;
|
CachableEntry<ParsedFullFilterBlock> filter_block;
|
||||||
|
|
||||||
const Status s = GetOrReadFilterBlock(
|
const Status s =
|
||||||
no_io, range->begin()->get_context, lookup_context, &filter_block,
|
GetOrReadFilterBlock(no_io, range->begin()->get_context, lookup_context,
|
||||||
BlockType::kFilter, rate_limiter_priority);
|
&filter_block, rate_limiter_priority);
|
||||||
if (!s.ok()) {
|
if (!s.ok()) {
|
||||||
IGNORE_STATUS_IF_ERROR(s);
|
IGNORE_STATUS_IF_ERROR(s);
|
||||||
return;
|
return;
|
||||||
|
|
|
@ -8,6 +8,8 @@
|
||||||
// found in the LICENSE file. See the AUTHORS file for names of contributors.
|
// found in the LICENSE file. See the AUTHORS file for names of contributors.
|
||||||
#include "table/block_based/index_reader_common.h"
|
#include "table/block_based/index_reader_common.h"
|
||||||
|
|
||||||
|
#include "block_cache.h"
|
||||||
|
|
||||||
namespace ROCKSDB_NAMESPACE {
|
namespace ROCKSDB_NAMESPACE {
|
||||||
Status BlockBasedTable::IndexReaderCommon::ReadIndexBlock(
|
Status BlockBasedTable::IndexReaderCommon::ReadIndexBlock(
|
||||||
const BlockBasedTable* table, FilePrefetchBuffer* prefetch_buffer,
|
const BlockBasedTable* table, FilePrefetchBuffer* prefetch_buffer,
|
||||||
|
@ -25,7 +27,7 @@ Status BlockBasedTable::IndexReaderCommon::ReadIndexBlock(
|
||||||
|
|
||||||
const Status s = table->RetrieveBlock(
|
const Status s = table->RetrieveBlock(
|
||||||
prefetch_buffer, read_options, rep->footer.index_handle(),
|
prefetch_buffer, read_options, rep->footer.index_handle(),
|
||||||
UncompressionDict::GetEmptyDict(), index_block, BlockType::kIndex,
|
UncompressionDict::GetEmptyDict(), &index_block->As<Block_kIndex>(),
|
||||||
get_context, lookup_context, /* for_compaction */ false, use_cache,
|
get_context, lookup_context, /* for_compaction */ false, use_cache,
|
||||||
/* wait_for_cache */ true, /* async_read */ false);
|
/* wait_for_cache */ true, /* async_read */ false);
|
||||||
|
|
||||||
|
|
|
@ -7,6 +7,7 @@
|
||||||
|
|
||||||
#include <memory>
|
#include <memory>
|
||||||
|
|
||||||
|
#include "table/block_based/block_type.h"
|
||||||
#include "table/format.h"
|
#include "table/format.h"
|
||||||
|
|
||||||
namespace ROCKSDB_NAMESPACE {
|
namespace ROCKSDB_NAMESPACE {
|
||||||
|
@ -32,7 +33,11 @@ class ParsedFullFilterBlock {
|
||||||
|
|
||||||
bool own_bytes() const { return block_contents_.own_bytes(); }
|
bool own_bytes() const { return block_contents_.own_bytes(); }
|
||||||
|
|
||||||
const Slice GetBlockContentsData() const { return block_contents_.data; }
|
// For TypedCacheInterface
|
||||||
|
const Slice& ContentSlice() const { return block_contents_.data; }
|
||||||
|
static constexpr CacheEntryRole kCacheEntryRole =
|
||||||
|
CacheEntryRole::kFilterBlock;
|
||||||
|
static constexpr BlockType kBlockType = BlockType::kFilter;
|
||||||
|
|
||||||
private:
|
private:
|
||||||
BlockContents block_contents_;
|
BlockContents block_contents_;
|
||||||
|
|
|
@ -7,6 +7,7 @@
|
||||||
|
|
||||||
#include <utility>
|
#include <utility>
|
||||||
|
|
||||||
|
#include "block_cache.h"
|
||||||
#include "block_type.h"
|
#include "block_type.h"
|
||||||
#include "file/random_access_file_reader.h"
|
#include "file/random_access_file_reader.h"
|
||||||
#include "logging/logging.h"
|
#include "logging/logging.h"
|
||||||
|
@ -185,7 +186,8 @@ Slice PartitionedFilterBlockBuilder::Finish(
|
||||||
}
|
}
|
||||||
|
|
||||||
PartitionedFilterBlockReader::PartitionedFilterBlockReader(
|
PartitionedFilterBlockReader::PartitionedFilterBlockReader(
|
||||||
const BlockBasedTable* t, CachableEntry<Block>&& filter_block)
|
const BlockBasedTable* t,
|
||||||
|
CachableEntry<Block_kFilterPartitionIndex>&& filter_block)
|
||||||
: FilterBlockReaderCommon(t, std::move(filter_block)) {}
|
: FilterBlockReaderCommon(t, std::move(filter_block)) {}
|
||||||
|
|
||||||
std::unique_ptr<FilterBlockReader> PartitionedFilterBlockReader::Create(
|
std::unique_ptr<FilterBlockReader> PartitionedFilterBlockReader::Create(
|
||||||
|
@ -196,11 +198,11 @@ std::unique_ptr<FilterBlockReader> PartitionedFilterBlockReader::Create(
|
||||||
assert(table->get_rep());
|
assert(table->get_rep());
|
||||||
assert(!pin || prefetch);
|
assert(!pin || prefetch);
|
||||||
|
|
||||||
CachableEntry<Block> filter_block;
|
CachableEntry<Block_kFilterPartitionIndex> filter_block;
|
||||||
if (prefetch || !use_cache) {
|
if (prefetch || !use_cache) {
|
||||||
const Status s = ReadFilterBlock(
|
const Status s = ReadFilterBlock(table, prefetch_buffer, ro, use_cache,
|
||||||
table, prefetch_buffer, ro, use_cache, nullptr /* get_context */,
|
nullptr /* get_context */, lookup_context,
|
||||||
lookup_context, &filter_block, BlockType::kFilterPartitionIndex);
|
&filter_block);
|
||||||
if (!s.ok()) {
|
if (!s.ok()) {
|
||||||
IGNORE_STATUS_IF_ERROR(s);
|
IGNORE_STATUS_IF_ERROR(s);
|
||||||
return std::unique_ptr<FilterBlockReader>();
|
return std::unique_ptr<FilterBlockReader>();
|
||||||
|
@ -260,7 +262,8 @@ void PartitionedFilterBlockReader::PrefixesMayMatch(
|
||||||
}
|
}
|
||||||
|
|
||||||
BlockHandle PartitionedFilterBlockReader::GetFilterPartitionHandle(
|
BlockHandle PartitionedFilterBlockReader::GetFilterPartitionHandle(
|
||||||
const CachableEntry<Block>& filter_block, const Slice& entry) const {
|
const CachableEntry<Block_kFilterPartitionIndex>& filter_block,
|
||||||
|
const Slice& entry) const {
|
||||||
IndexBlockIter iter;
|
IndexBlockIter iter;
|
||||||
const InternalKeyComparator* const comparator = internal_comparator();
|
const InternalKeyComparator* const comparator = internal_comparator();
|
||||||
Statistics* kNullStats = nullptr;
|
Statistics* kNullStats = nullptr;
|
||||||
|
@ -313,7 +316,7 @@ Status PartitionedFilterBlockReader::GetFilterPartitionBlock(
|
||||||
const Status s =
|
const Status s =
|
||||||
table()->RetrieveBlock(prefetch_buffer, read_options, fltr_blk_handle,
|
table()->RetrieveBlock(prefetch_buffer, read_options, fltr_blk_handle,
|
||||||
UncompressionDict::GetEmptyDict(), filter_block,
|
UncompressionDict::GetEmptyDict(), filter_block,
|
||||||
BlockType::kFilter, get_context, lookup_context,
|
get_context, lookup_context,
|
||||||
/* for_compaction */ false, /* use_cache */ true,
|
/* for_compaction */ false, /* use_cache */ true,
|
||||||
/* wait_for_cache */ true, /* async_read */ false);
|
/* wait_for_cache */ true, /* async_read */ false);
|
||||||
|
|
||||||
|
@ -325,10 +328,9 @@ bool PartitionedFilterBlockReader::MayMatch(
|
||||||
GetContext* get_context, BlockCacheLookupContext* lookup_context,
|
GetContext* get_context, BlockCacheLookupContext* lookup_context,
|
||||||
Env::IOPriority rate_limiter_priority,
|
Env::IOPriority rate_limiter_priority,
|
||||||
FilterFunction filter_function) const {
|
FilterFunction filter_function) const {
|
||||||
CachableEntry<Block> filter_block;
|
CachableEntry<Block_kFilterPartitionIndex> filter_block;
|
||||||
Status s = GetOrReadFilterBlock(
|
Status s = GetOrReadFilterBlock(no_io, get_context, lookup_context,
|
||||||
no_io, get_context, lookup_context, &filter_block,
|
&filter_block, rate_limiter_priority);
|
||||||
BlockType::kFilterPartitionIndex, rate_limiter_priority);
|
|
||||||
if (UNLIKELY(!s.ok())) {
|
if (UNLIKELY(!s.ok())) {
|
||||||
IGNORE_STATUS_IF_ERROR(s);
|
IGNORE_STATUS_IF_ERROR(s);
|
||||||
return true;
|
return true;
|
||||||
|
@ -364,10 +366,10 @@ void PartitionedFilterBlockReader::MayMatch(
|
||||||
BlockCacheLookupContext* lookup_context,
|
BlockCacheLookupContext* lookup_context,
|
||||||
Env::IOPriority rate_limiter_priority,
|
Env::IOPriority rate_limiter_priority,
|
||||||
FilterManyFunction filter_function) const {
|
FilterManyFunction filter_function) const {
|
||||||
CachableEntry<Block> filter_block;
|
CachableEntry<Block_kFilterPartitionIndex> filter_block;
|
||||||
Status s = GetOrReadFilterBlock(
|
Status s =
|
||||||
no_io, range->begin()->get_context, lookup_context, &filter_block,
|
GetOrReadFilterBlock(no_io, range->begin()->get_context, lookup_context,
|
||||||
BlockType::kFilterPartitionIndex, rate_limiter_priority);
|
&filter_block, rate_limiter_priority);
|
||||||
if (UNLIKELY(!s.ok())) {
|
if (UNLIKELY(!s.ok())) {
|
||||||
IGNORE_STATUS_IF_ERROR(s);
|
IGNORE_STATUS_IF_ERROR(s);
|
||||||
return; // Any/all may match
|
return; // Any/all may match
|
||||||
|
@ -455,11 +457,10 @@ Status PartitionedFilterBlockReader::CacheDependencies(const ReadOptions& ro,
|
||||||
|
|
||||||
BlockCacheLookupContext lookup_context{TableReaderCaller::kPrefetch};
|
BlockCacheLookupContext lookup_context{TableReaderCaller::kPrefetch};
|
||||||
|
|
||||||
CachableEntry<Block> filter_block;
|
CachableEntry<Block_kFilterPartitionIndex> filter_block;
|
||||||
|
|
||||||
Status s = GetOrReadFilterBlock(false /* no_io */, nullptr /* get_context */,
|
Status s = GetOrReadFilterBlock(false /* no_io */, nullptr /* get_context */,
|
||||||
&lookup_context, &filter_block,
|
&lookup_context, &filter_block,
|
||||||
BlockType::kFilterPartitionIndex,
|
|
||||||
ro.rate_limiter_priority);
|
ro.rate_limiter_priority);
|
||||||
if (!s.ok()) {
|
if (!s.ok()) {
|
||||||
ROCKS_LOG_ERROR(rep->ioptions.logger,
|
ROCKS_LOG_ERROR(rep->ioptions.logger,
|
||||||
|
@ -517,7 +518,7 @@ Status PartitionedFilterBlockReader::CacheDependencies(const ReadOptions& ro,
|
||||||
// filter blocks
|
// filter blocks
|
||||||
s = table()->MaybeReadBlockAndLoadToCache(
|
s = table()->MaybeReadBlockAndLoadToCache(
|
||||||
prefetch_buffer.get(), ro, handle, UncompressionDict::GetEmptyDict(),
|
prefetch_buffer.get(), ro, handle, UncompressionDict::GetEmptyDict(),
|
||||||
/* wait */ true, /* for_compaction */ false, &block, BlockType::kFilter,
|
/* wait */ true, /* for_compaction */ false, &block,
|
||||||
nullptr /* get_context */, &lookup_context, nullptr /* contents */,
|
nullptr /* get_context */, &lookup_context, nullptr /* contents */,
|
||||||
false);
|
false);
|
||||||
if (!s.ok()) {
|
if (!s.ok()) {
|
||||||
|
|
|
@ -10,6 +10,7 @@
|
||||||
#include <string>
|
#include <string>
|
||||||
#include <unordered_map>
|
#include <unordered_map>
|
||||||
|
|
||||||
|
#include "block_cache.h"
|
||||||
#include "rocksdb/options.h"
|
#include "rocksdb/options.h"
|
||||||
#include "rocksdb/slice.h"
|
#include "rocksdb/slice.h"
|
||||||
#include "rocksdb/slice_transform.h"
|
#include "rocksdb/slice_transform.h"
|
||||||
|
@ -99,10 +100,12 @@ class PartitionedFilterBlockBuilder : public FullFilterBlockBuilder {
|
||||||
BlockHandle last_encoded_handle_;
|
BlockHandle last_encoded_handle_;
|
||||||
};
|
};
|
||||||
|
|
||||||
class PartitionedFilterBlockReader : public FilterBlockReaderCommon<Block> {
|
class PartitionedFilterBlockReader
|
||||||
|
: public FilterBlockReaderCommon<Block_kFilterPartitionIndex> {
|
||||||
public:
|
public:
|
||||||
PartitionedFilterBlockReader(const BlockBasedTable* t,
|
PartitionedFilterBlockReader(
|
||||||
CachableEntry<Block>&& filter_block);
|
const BlockBasedTable* t,
|
||||||
|
CachableEntry<Block_kFilterPartitionIndex>&& filter_block);
|
||||||
|
|
||||||
static std::unique_ptr<FilterBlockReader> Create(
|
static std::unique_ptr<FilterBlockReader> Create(
|
||||||
const BlockBasedTable* table, const ReadOptions& ro,
|
const BlockBasedTable* table, const ReadOptions& ro,
|
||||||
|
@ -131,8 +134,9 @@ class PartitionedFilterBlockReader : public FilterBlockReaderCommon<Block> {
|
||||||
size_t ApproximateMemoryUsage() const override;
|
size_t ApproximateMemoryUsage() const override;
|
||||||
|
|
||||||
private:
|
private:
|
||||||
BlockHandle GetFilterPartitionHandle(const CachableEntry<Block>& filter_block,
|
BlockHandle GetFilterPartitionHandle(
|
||||||
const Slice& entry) const;
|
const CachableEntry<Block_kFilterPartitionIndex>& filter_block,
|
||||||
|
const Slice& entry) const;
|
||||||
Status GetFilterPartitionBlock(
|
Status GetFilterPartitionBlock(
|
||||||
FilePrefetchBuffer* prefetch_buffer, const BlockHandle& handle,
|
FilePrefetchBuffer* prefetch_buffer, const BlockHandle& handle,
|
||||||
bool no_io, GetContext* get_context,
|
bool no_io, GetContext* get_context,
|
||||||
|
|
|
@ -7,6 +7,7 @@
|
||||||
|
|
||||||
#include <map>
|
#include <map>
|
||||||
|
|
||||||
|
#include "block_cache.h"
|
||||||
#include "index_builder.h"
|
#include "index_builder.h"
|
||||||
#include "rocksdb/filter_policy.h"
|
#include "rocksdb/filter_policy.h"
|
||||||
#include "table/block_based/block_based_table_reader.h"
|
#include "table/block_based/block_based_table_reader.h"
|
||||||
|
@ -35,7 +36,8 @@ class MyPartitionedFilterBlockReader : public PartitionedFilterBlockReader {
|
||||||
public:
|
public:
|
||||||
MyPartitionedFilterBlockReader(BlockBasedTable* t,
|
MyPartitionedFilterBlockReader(BlockBasedTable* t,
|
||||||
CachableEntry<Block>&& filter_block)
|
CachableEntry<Block>&& filter_block)
|
||||||
: PartitionedFilterBlockReader(t, std::move(filter_block)) {
|
: PartitionedFilterBlockReader(
|
||||||
|
t, std::move(filter_block.As<Block_kFilterPartitionIndex>())) {
|
||||||
for (const auto& pair : blooms) {
|
for (const auto& pair : blooms) {
|
||||||
const uint64_t offset = pair.first;
|
const uint64_t offset = pair.first;
|
||||||
const std::string& bloom = pair.second;
|
const std::string& bloom = pair.second;
|
||||||
|
|
|
@ -8,6 +8,7 @@
|
||||||
// found in the LICENSE file. See the AUTHORS file for names of contributors.
|
// found in the LICENSE file. See the AUTHORS file for names of contributors.
|
||||||
#include "table/block_based/partitioned_index_reader.h"
|
#include "table/block_based/partitioned_index_reader.h"
|
||||||
|
|
||||||
|
#include "block_cache.h"
|
||||||
#include "file/random_access_file_reader.h"
|
#include "file/random_access_file_reader.h"
|
||||||
#include "table/block_based/block_based_table_reader.h"
|
#include "table/block_based/block_based_table_reader.h"
|
||||||
#include "table/block_based/partitioned_index_iterator.h"
|
#include "table/block_based/partitioned_index_iterator.h"
|
||||||
|
@ -186,7 +187,7 @@ Status PartitionIndexReader::CacheDependencies(const ReadOptions& ro,
|
||||||
// filter blocks
|
// filter blocks
|
||||||
Status s = table()->MaybeReadBlockAndLoadToCache(
|
Status s = table()->MaybeReadBlockAndLoadToCache(
|
||||||
prefetch_buffer.get(), ro, handle, UncompressionDict::GetEmptyDict(),
|
prefetch_buffer.get(), ro, handle, UncompressionDict::GetEmptyDict(),
|
||||||
/*wait=*/true, /*for_compaction=*/false, &block, BlockType::kIndex,
|
/*wait=*/true, /*for_compaction=*/false, &block.As<Block_kIndex>(),
|
||||||
/*get_context=*/nullptr, &lookup_context, /*contents=*/nullptr,
|
/*get_context=*/nullptr, &lookup_context, /*contents=*/nullptr,
|
||||||
/*async_read=*/false);
|
/*async_read=*/false);
|
||||||
|
|
||||||
|
|
|
@ -60,8 +60,8 @@ Status UncompressionDictReader::ReadUncompressionDictionary(
|
||||||
|
|
||||||
const Status s = table->RetrieveBlock(
|
const Status s = table->RetrieveBlock(
|
||||||
prefetch_buffer, read_options, rep->compression_dict_handle,
|
prefetch_buffer, read_options, rep->compression_dict_handle,
|
||||||
UncompressionDict::GetEmptyDict(), uncompression_dict,
|
UncompressionDict::GetEmptyDict(), uncompression_dict, get_context,
|
||||||
BlockType::kCompressionDictionary, get_context, lookup_context,
|
lookup_context,
|
||||||
/* for_compaction */ false, use_cache, /* wait_for_cache */ true,
|
/* for_compaction */ false, use_cache, /* wait_for_cache */ true,
|
||||||
/* async_read */ false);
|
/* async_read */ false);
|
||||||
|
|
||||||
|
|
|
@ -276,7 +276,7 @@ uint32_t ComputeBuiltinChecksumWithLastByte(ChecksumType type, const char* data,
|
||||||
// decompression function.
|
// decompression function.
|
||||||
// * "Parsed block" - an in-memory form of a block in block cache, as it is
|
// * "Parsed block" - an in-memory form of a block in block cache, as it is
|
||||||
// used by the table reader. Different C++ types are used depending on the
|
// used by the table reader. Different C++ types are used depending on the
|
||||||
// block type (see block_like_traits.h). Only trivially parsable block types
|
// block type (see block_cache.h). Only trivially parsable block types
|
||||||
// use BlockContents as the parsed form.
|
// use BlockContents as the parsed form.
|
||||||
//
|
//
|
||||||
struct BlockContents {
|
struct BlockContents {
|
||||||
|
|
|
@ -23,6 +23,7 @@
|
||||||
#include "memory/memory_allocator.h"
|
#include "memory/memory_allocator.h"
|
||||||
#include "rocksdb/options.h"
|
#include "rocksdb/options.h"
|
||||||
#include "rocksdb/table.h"
|
#include "rocksdb/table.h"
|
||||||
|
#include "table/block_based/block_type.h"
|
||||||
#include "test_util/sync_point.h"
|
#include "test_util/sync_point.h"
|
||||||
#include "util/coding.h"
|
#include "util/coding.h"
|
||||||
#include "util/compression_context_cache.h"
|
#include "util/compression_context_cache.h"
|
||||||
|
@ -321,6 +322,11 @@ struct UncompressionDict {
|
||||||
|
|
||||||
const Slice& GetRawDict() const { return slice_; }
|
const Slice& GetRawDict() const { return slice_; }
|
||||||
|
|
||||||
|
// For TypedCacheInterface
|
||||||
|
const Slice& ContentSlice() const { return slice_; }
|
||||||
|
static constexpr CacheEntryRole kCacheEntryRole = CacheEntryRole::kOtherBlock;
|
||||||
|
static constexpr BlockType kBlockType = BlockType::kCompressionDictionary;
|
||||||
|
|
||||||
#ifdef ROCKSDB_ZSTD_DDICT
|
#ifdef ROCKSDB_ZSTD_DDICT
|
||||||
const ZSTD_DDict* GetDigestedZstdDDict() const { return zstd_ddict_; }
|
const ZSTD_DDict* GetDigestedZstdDDict() const { return zstd_ddict_; }
|
||||||
#endif // ROCKSDB_ZSTD_DDICT
|
#endif // ROCKSDB_ZSTD_DDICT
|
||||||
|
|
|
@ -67,8 +67,7 @@ IOStatus CacheDumperImpl::DumpCacheEntriesToWriter() {
|
||||||
return IOStatus::InvalidArgument("System clock is null");
|
return IOStatus::InvalidArgument("System clock is null");
|
||||||
}
|
}
|
||||||
clock_ = options_.clock;
|
clock_ = options_.clock;
|
||||||
// We copy the Cache Deleter Role Map as its member.
|
|
||||||
role_map_ = CopyCacheDeleterRoleMap();
|
|
||||||
// Set the sequence number
|
// Set the sequence number
|
||||||
sequence_num_ = 0;
|
sequence_num_ = 0;
|
||||||
|
|
||||||
|
@ -80,7 +79,8 @@ IOStatus CacheDumperImpl::DumpCacheEntriesToWriter() {
|
||||||
|
|
||||||
// Then, we iterate the block cache and dump out the blocks that are not
|
// Then, we iterate the block cache and dump out the blocks that are not
|
||||||
// filtered out.
|
// filtered out.
|
||||||
cache_->ApplyToAllEntries(DumpOneBlockCallBack(), {});
|
std::string buf;
|
||||||
|
cache_->ApplyToAllEntries(DumpOneBlockCallBack(buf), {});
|
||||||
|
|
||||||
// Finally, write the footer
|
// Finally, write the footer
|
||||||
io_s = WriteFooter();
|
io_s = WriteFooter();
|
||||||
|
@ -105,77 +105,57 @@ bool CacheDumperImpl::ShouldFilterOut(const Slice& key) {
|
||||||
// This is the callback function which will be applied to
|
// This is the callback function which will be applied to
|
||||||
// Cache::ApplyToAllEntries. In this callback function, we will get the block
|
// Cache::ApplyToAllEntries. In this callback function, we will get the block
|
||||||
// type, decide if the block needs to be dumped based on the filter, and write
|
// type, decide if the block needs to be dumped based on the filter, and write
|
||||||
// the block through the provided writer.
|
// the block through the provided writer. `buf` is passed in for efficient
|
||||||
std::function<void(const Slice&, void*, size_t, Cache::DeleterFn)>
|
// reuse.
|
||||||
CacheDumperImpl::DumpOneBlockCallBack() {
|
std::function<void(const Slice&, Cache::ObjectPtr, size_t,
|
||||||
return [&](const Slice& key, void* value, size_t /*charge*/,
|
const Cache::CacheItemHelper*)>
|
||||||
Cache::DeleterFn deleter) {
|
CacheDumperImpl::DumpOneBlockCallBack(std::string& buf) {
|
||||||
// Step 1: get the type of the block from role_map_
|
return [&](const Slice& key, Cache::ObjectPtr value, size_t /*charge*/,
|
||||||
auto e = role_map_.find(deleter);
|
const Cache::CacheItemHelper* helper) {
|
||||||
CacheEntryRole role;
|
if (helper == nullptr || helper->size_cb == nullptr ||
|
||||||
|
helper->saveto_cb == nullptr) {
|
||||||
|
// Not compatible with dumping. Skip this entry.
|
||||||
|
return;
|
||||||
|
}
|
||||||
|
|
||||||
|
CacheEntryRole role = helper->role;
|
||||||
CacheDumpUnitType type = CacheDumpUnitType::kBlockTypeMax;
|
CacheDumpUnitType type = CacheDumpUnitType::kBlockTypeMax;
|
||||||
if (e == role_map_.end()) {
|
|
||||||
role = CacheEntryRole::kMisc;
|
|
||||||
} else {
|
|
||||||
role = e->second;
|
|
||||||
}
|
|
||||||
bool filter_out = false;
|
|
||||||
|
|
||||||
// Step 2: based on the key prefix, check if the block should be filter out.
|
|
||||||
if (ShouldFilterOut(key)) {
|
|
||||||
filter_out = true;
|
|
||||||
}
|
|
||||||
|
|
||||||
// Step 3: based on the block type, get the block raw pointer and length.
|
|
||||||
const char* block_start = nullptr;
|
|
||||||
size_t block_len = 0;
|
|
||||||
switch (role) {
|
switch (role) {
|
||||||
case CacheEntryRole::kDataBlock:
|
case CacheEntryRole::kDataBlock:
|
||||||
type = CacheDumpUnitType::kData;
|
type = CacheDumpUnitType::kData;
|
||||||
block_start = (static_cast<Block*>(value))->data();
|
|
||||||
block_len = (static_cast<Block*>(value))->size();
|
|
||||||
break;
|
break;
|
||||||
case CacheEntryRole::kFilterBlock:
|
case CacheEntryRole::kFilterBlock:
|
||||||
type = CacheDumpUnitType::kFilter;
|
type = CacheDumpUnitType::kFilter;
|
||||||
block_start = (static_cast<ParsedFullFilterBlock*>(value))
|
|
||||||
->GetBlockContentsData()
|
|
||||||
.data();
|
|
||||||
block_len = (static_cast<ParsedFullFilterBlock*>(value))
|
|
||||||
->GetBlockContentsData()
|
|
||||||
.size();
|
|
||||||
break;
|
break;
|
||||||
case CacheEntryRole::kFilterMetaBlock:
|
case CacheEntryRole::kFilterMetaBlock:
|
||||||
type = CacheDumpUnitType::kFilterMetaBlock;
|
type = CacheDumpUnitType::kFilterMetaBlock;
|
||||||
block_start = (static_cast<Block*>(value))->data();
|
|
||||||
block_len = (static_cast<Block*>(value))->size();
|
|
||||||
break;
|
break;
|
||||||
case CacheEntryRole::kIndexBlock:
|
case CacheEntryRole::kIndexBlock:
|
||||||
type = CacheDumpUnitType::kIndex;
|
type = CacheDumpUnitType::kIndex;
|
||||||
block_start = (static_cast<Block*>(value))->data();
|
|
||||||
block_len = (static_cast<Block*>(value))->size();
|
|
||||||
break;
|
|
||||||
case CacheEntryRole::kDeprecatedFilterBlock:
|
|
||||||
// Obsolete
|
|
||||||
filter_out = true;
|
|
||||||
break;
|
|
||||||
case CacheEntryRole::kMisc:
|
|
||||||
filter_out = true;
|
|
||||||
break;
|
|
||||||
case CacheEntryRole::kOtherBlock:
|
|
||||||
filter_out = true;
|
|
||||||
break;
|
|
||||||
case CacheEntryRole::kWriteBuffer:
|
|
||||||
filter_out = true;
|
|
||||||
break;
|
break;
|
||||||
default:
|
default:
|
||||||
filter_out = true;
|
// Filter out other entries
|
||||||
|
// FIXME? Do we need the CacheDumpUnitTypes? UncompressionDict?
|
||||||
|
return;
|
||||||
}
|
}
|
||||||
|
|
||||||
// Step 4: if the block should not be filter out, write the block to the
|
// Based on the key prefix, check if the block should be filtered out.
|
||||||
// CacheDumpWriter
|
if (ShouldFilterOut(key)) {
|
||||||
if (!filter_out && block_start != nullptr) {
|
return;
|
||||||
WriteBlock(type, key, Slice(block_start, block_len))
|
}
|
||||||
.PermitUncheckedError();
|
|
||||||
|
assert(type != CacheDumpUnitType::kBlockTypeMax);
|
||||||
|
|
||||||
|
// Use cache item helper to get persistable data
|
||||||
|
// FIXME: reduce copying
|
||||||
|
size_t len = helper->size_cb(value);
|
||||||
|
buf.assign(len, '\0');
|
||||||
|
Status s = helper->saveto_cb(value, /*start*/ 0, len, buf.data());
|
||||||
|
|
||||||
|
if (s.ok()) {
|
||||||
|
// Write it out
|
||||||
|
WriteBlock(type, key, buf).PermitUncheckedError();
|
||||||
}
|
}
|
||||||
};
|
};
|
||||||
}
|
}
|
||||||
|
@ -264,8 +244,6 @@ IOStatus CacheDumpedLoaderImpl::RestoreCacheEntriesToSecondaryCache() {
|
||||||
if (reader_ == nullptr) {
|
if (reader_ == nullptr) {
|
||||||
return IOStatus::InvalidArgument("CacheDumpReader is null");
|
return IOStatus::InvalidArgument("CacheDumpReader is null");
|
||||||
}
|
}
|
||||||
// we copy the Cache Deleter Role Map as its member.
|
|
||||||
role_map_ = CopyCacheDeleterRoleMap();
|
|
||||||
|
|
||||||
// Step 2: read the header
|
// Step 2: read the header
|
||||||
// TODO: we need to check the cache dump format version and RocksDB version
|
// TODO: we need to check the cache dump format version and RocksDB version
|
||||||
|
|
|
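The serialization step inside DumpOneBlockCallBack, isolated as a sketch (assumes a helper whose size_cb and saveto_cb are non-null, exactly as checked above):

#include <string>

#include "rocksdb/cache.h"

namespace ROCKSDB_NAMESPACE {
Status SaveEntrySketch(const Cache::CacheItemHelper* helper,
                       Cache::ObjectPtr value, std::string* out) {
  // Ask the helper how large the persistable form of the object is...
  size_t len = helper->size_cb(value);
  out->assign(len, '\0');
  // ...then have it write exactly those bytes, starting at offset 0.
  return helper->saveto_cb(value, /*from_offset=*/0, len, out->data());
}
}  // namespace ROCKSDB_NAMESPACE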
@ -12,11 +12,11 @@
|
||||||
#include "file/writable_file_writer.h"
|
#include "file/writable_file_writer.h"
|
||||||
#include "rocksdb/utilities/cache_dump_load.h"
|
#include "rocksdb/utilities/cache_dump_load.h"
|
||||||
#include "table/block_based/block.h"
|
#include "table/block_based/block.h"
|
||||||
#include "table/block_based/block_like_traits.h"
|
|
||||||
#include "table/block_based/block_type.h"
|
#include "table/block_based/block_type.h"
|
||||||
#include "table/block_based/cachable_entry.h"
|
#include "table/block_based/cachable_entry.h"
|
||||||
#include "table/block_based/parsed_full_filter_block.h"
|
#include "table/block_based/parsed_full_filter_block.h"
|
||||||
#include "table/block_based/reader_common.h"
|
#include "table/block_based/reader_common.h"
|
||||||
|
#include "util/hash_containers.h"
|
||||||
|
|
||||||
namespace ROCKSDB_NAMESPACE {
|
namespace ROCKSDB_NAMESPACE {
|
||||||
|
|
||||||
|
@ -108,13 +108,13 @@ class CacheDumperImpl : public CacheDumper {
|
||||||
IOStatus WriteHeader();
|
IOStatus WriteHeader();
|
||||||
IOStatus WriteFooter();
|
IOStatus WriteFooter();
|
||||||
bool ShouldFilterOut(const Slice& key);
|
bool ShouldFilterOut(const Slice& key);
|
||||||
std::function<void(const Slice&, void*, size_t, Cache::DeleterFn)>
|
std::function<void(const Slice&, Cache::ObjectPtr, size_t,
|
||||||
DumpOneBlockCallBack();
|
const Cache::CacheItemHelper*)>
|
||||||
|
DumpOneBlockCallBack(std::string& buf);
|
||||||
|
|
||||||
CacheDumpOptions options_;
|
CacheDumpOptions options_;
|
||||||
std::shared_ptr<Cache> cache_;
|
std::shared_ptr<Cache> cache_;
|
||||||
std::unique_ptr<CacheDumpWriter> writer_;
|
std::unique_ptr<CacheDumpWriter> writer_;
|
||||||
UnorderedMap<Cache::DeleterFn, CacheEntryRole> role_map_;
|
|
||||||
SystemClock* clock_;
|
SystemClock* clock_;
|
||||||
uint32_t sequence_num_;
|
uint32_t sequence_num_;
|
||||||
// The cache key prefix filter. Currently, we use db_session_id as the prefix,
|
// The cache key prefix filter. Currently, we use db_session_id as the prefix,
|
||||||
|
@ -146,7 +146,6 @@ class CacheDumpedLoaderImpl : public CacheDumpedLoader {
|
||||||
CacheDumpOptions options_;
|
CacheDumpOptions options_;
|
||||||
std::shared_ptr<SecondaryCache> secondary_cache_;
|
std::shared_ptr<SecondaryCache> secondary_cache_;
|
||||||
std::unique_ptr<CacheDumpReader> reader_;
|
std::unique_ptr<CacheDumpReader> reader_;
|
||||||
UnorderedMap<Cache::DeleterFn, CacheEntryRole> role_map_;
|
|
||||||
};
|
};
|
||||||
|
|
||||||
// The default implementation of CacheDumpWriter. We write the blocks to a file
|
// The default implementation of CacheDumpWriter. We write the blocks to a file
|
||||||
|
|
|
@ -36,7 +36,9 @@ void FaultInjectionSecondaryCache::ResultHandle::Wait() {
|
||||||
UpdateHandleValue(this);
|
UpdateHandleValue(this);
|
||||||
}
|
}
|
||||||
|
|
||||||
void* FaultInjectionSecondaryCache::ResultHandle::Value() { return value_; }
|
Cache::ObjectPtr FaultInjectionSecondaryCache::ResultHandle::Value() {
|
||||||
|
return value_;
|
||||||
|
}
|
||||||
|
|
||||||
size_t FaultInjectionSecondaryCache::ResultHandle::Size() { return size_; }
|
size_t FaultInjectionSecondaryCache::ResultHandle::Size() { return size_; }
|
||||||
|
|
||||||
|
@ -75,7 +77,8 @@ FaultInjectionSecondaryCache::GetErrorContext() {
|
||||||
}
|
}
|
||||||
|
|
||||||
Status FaultInjectionSecondaryCache::Insert(
|
Status FaultInjectionSecondaryCache::Insert(
|
||||||
const Slice& key, void* value, const Cache::CacheItemHelper* helper) {
|
const Slice& key, Cache::ObjectPtr value,
|
||||||
|
const Cache::CacheItemHelper* helper) {
|
||||||
ErrorContext* ctx = GetErrorContext();
|
ErrorContext* ctx = GetErrorContext();
|
||||||
if (ctx->rand.OneIn(prob_)) {
|
if (ctx->rand.OneIn(prob_)) {
|
||||||
return Status::IOError();
|
return Status::IOError();
|
||||||
|
@ -86,7 +89,8 @@ Status FaultInjectionSecondaryCache::Insert(
|
||||||
|
|
||||||
std::unique_ptr<SecondaryCacheResultHandle>
|
std::unique_ptr<SecondaryCacheResultHandle>
|
||||||
FaultInjectionSecondaryCache::Lookup(const Slice& key,
|
FaultInjectionSecondaryCache::Lookup(const Slice& key,
|
||||||
const Cache::CreateCallback& create_cb,
|
const Cache::CacheItemHelper* helper,
|
||||||
|
Cache::CreateContext* create_context,
|
||||||
bool wait, bool advise_erase,
|
bool wait, bool advise_erase,
|
||||||
bool& is_in_sec_cache) {
|
bool& is_in_sec_cache) {
|
||||||
ErrorContext* ctx = GetErrorContext();
|
ErrorContext* ctx = GetErrorContext();
|
||||||
|
@ -94,11 +98,12 @@ FaultInjectionSecondaryCache::Lookup(const Slice& key,
|
||||||
if (ctx->rand.OneIn(prob_)) {
|
if (ctx->rand.OneIn(prob_)) {
|
||||||
return nullptr;
|
return nullptr;
|
||||||
} else {
|
} else {
|
||||||
return base_->Lookup(key, create_cb, wait, advise_erase, is_in_sec_cache);
|
return base_->Lookup(key, helper, create_context, wait, advise_erase,
|
||||||
|
is_in_sec_cache);
|
||||||
}
|
}
|
||||||
} else {
|
} else {
|
||||||
std::unique_ptr<SecondaryCacheResultHandle> hdl =
|
std::unique_ptr<SecondaryCacheResultHandle> hdl = base_->Lookup(
|
||||||
base_->Lookup(key, create_cb, wait, advise_erase, is_in_sec_cache);
|
key, helper, create_context, wait, advise_erase, is_in_sec_cache);
|
||||||
if (wait && ctx->rand.OneIn(prob_)) {
|
if (wait && ctx->rand.OneIn(prob_)) {
|
||||||
hdl.reset();
|
hdl.reset();
|
||||||
}
|
}
|
||||||
|
|
|
@ -31,12 +31,13 @@ class FaultInjectionSecondaryCache : public SecondaryCache {
|
||||||
|
|
||||||
const char* Name() const override { return "FaultInjectionSecondaryCache"; }
|
const char* Name() const override { return "FaultInjectionSecondaryCache"; }
|
||||||
|
|
||||||
Status Insert(const Slice& key, void* value,
|
Status Insert(const Slice& key, Cache::ObjectPtr value,
|
||||||
const Cache::CacheItemHelper* helper) override;
|
const Cache::CacheItemHelper* helper) override;
|
||||||
|
|
||||||
std::unique_ptr<SecondaryCacheResultHandle> Lookup(
|
std::unique_ptr<SecondaryCacheResultHandle> Lookup(
|
||||||
const Slice& key, const Cache::CreateCallback& create_cb, bool wait,
|
const Slice& key, const Cache::CacheItemHelper* helper,
|
||||||
bool advise_erase, bool& is_in_sec_cache) override;
|
Cache::CreateContext* create_context, bool wait, bool advise_erase,
|
||||||
|
bool& is_in_sec_cache) override;
|
||||||
|
|
||||||
bool SupportForceErase() const override { return base_->SupportForceErase(); }
|
bool SupportForceErase() const override { return base_->SupportForceErase(); }
|
||||||
|
|
||||||
|
@ -69,7 +70,7 @@ class FaultInjectionSecondaryCache : public SecondaryCache {
|
||||||
|
|
||||||
void Wait() override;
|
void Wait() override;
|
||||||
|
|
||||||
void* Value() override;
|
Cache::ObjectPtr Value() override;
|
||||||
|
|
||||||
size_t Size() override;
|
size_t Size() override;
|
||||||
|
|
||||||
|
@ -81,7 +82,7 @@ class FaultInjectionSecondaryCache : public SecondaryCache {
|
||||||
|
|
||||||
FaultInjectionSecondaryCache* cache_;
|
FaultInjectionSecondaryCache* cache_;
|
||||||
std::unique_ptr<SecondaryCacheResultHandle> base_;
|
std::unique_ptr<SecondaryCacheResultHandle> base_;
|
||||||
void* value_;
|
Cache::ObjectPtr value_;
|
||||||
size_t size_;
|
size_t size_;
|
||||||
};
|
};
|
||||||
|
|
||||||
|
|
|
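A sketch of the updated SecondaryCache::Lookup call shape that the wrapper above forwards (assuming rocksdb/secondary_cache.h as modified by this change):

#include <memory>

#include "rocksdb/secondary_cache.h"

namespace ROCKSDB_NAMESPACE {
std::unique_ptr<SecondaryCacheResultHandle> LookupSketch(
    SecondaryCache* sec, const Slice& key,
    const Cache::CacheItemHelper* helper,
    Cache::CreateContext* create_context) {
  bool is_in_sec_cache = false;
  // The helper (including its create callback) plus an opaque context
  // pointer replace the old per-call std::function create_cb.
  return sec->Lookup(key, helper, create_context, /*wait=*/true,
                     /*advise_erase=*/false, is_in_sec_cache);
}
}  // namespace ROCKSDB_NAMESPACE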
@@ -6,7 +6,6 @@
 #pragma once
 
 #include <atomic>
-
 #include "rocksdb/memory_allocator.h"
 
 namespace ROCKSDB_NAMESPACE {
@@ -26,8 +26,8 @@ bool GhostCache::Admit(const Slice& lookup_key) {
     return true;
   }
   // TODO: Should we check for errors here?
-  auto s = sim_cache_->Insert(lookup_key, /*value=*/nullptr, lookup_key.size(),
-                              /*deleter=*/nullptr);
+  auto s = sim_cache_->Insert(lookup_key, /*obj=*/nullptr,
+                              &kNoopCacheItemHelper, lookup_key.size());
   s.PermitUncheckedError();
   return false;
 }
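From here on, the simulator call sites all follow one pattern: every Insert must now carry a CacheItemHelper, so charge-only entries (object == nullptr, formerly a nullptr deleter or a no-op lambda) pass a shared `&kNoopCacheItemHelper`. A self-contained sketch of why that works, with illustrative shapes rather than the real definitions:

```cpp
#include <cstddef>
#include <iostream>
#include <map>
#include <string>

// Simplified model: every entry carries a helper, and a charge-only entry
// uses a shared helper whose callbacks do nothing.
using ObjectPtr = void*;

struct CacheItemHelper {
  void (*del)(ObjectPtr obj);  // nullptr means "nothing to free"
};

const CacheItemHelper kNoopCacheItemHelper{/*del=*/nullptr};

class ToyCache {
 public:
  void Insert(const std::string& key, ObjectPtr obj,
              const CacheItemHelper* helper, size_t charge) {
    entries_[key] = Entry{obj, helper, charge};
    usage_ += charge;
  }
  void Erase(const std::string& key) {
    auto it = entries_.find(key);
    if (it == entries_.end()) return;
    // Uniform teardown path: no "which entries have deleters?" tracking.
    if (it->second.helper->del != nullptr) {
      it->second.helper->del(it->second.obj);
    }
    usage_ -= it->second.charge;
    entries_.erase(it);
  }
  size_t usage() const { return usage_; }

 private:
  struct Entry {
    ObjectPtr obj;
    const CacheItemHelper* helper;
    size_t charge;
  };
  std::map<std::string, Entry> entries_;
  size_t usage_ = 0;
};

int main() {
  ToyCache cache;
  // A simulator only tracks charge, so the object is null.
  cache.Insert("block1", /*obj=*/nullptr, &kNoopCacheItemHelper, 4096);
  std::cout << "usage=" << cache.usage() << "\n";
  cache.Erase("block1");
  return 0;
}
```

This also matches the summary's point that implementations get simpler and cheaper: there is exactly one Insert path, and nothing has to classify entries by whether they were inserted with a deleter.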
@@ -51,9 +51,8 @@ void CacheSimulator::Access(const BlockCacheTraceRecord& access) {
   } else {
     if (!access.no_insert && admit && access.block_size > 0) {
       // Ignore errors on insert
-      auto s = sim_cache_->Insert(access.block_key, /*value=*/nullptr,
-                                  access.block_size,
-                                  /*deleter=*/nullptr);
+      auto s = sim_cache_->Insert(access.block_key, /*obj=*/nullptr,
+                                  &kNoopCacheItemHelper, access.block_size);
       s.PermitUncheckedError();
     }
   }
@@ -109,8 +108,8 @@ void PrioritizedCacheSimulator::AccessKVPair(
     *is_cache_miss = false;
   } else if (!no_insert && *admitted && value_size > 0) {
     // TODO: Should we check for an error here?
-    auto s = sim_cache_->Insert(key, /*value=*/nullptr, value_size,
-                                /*deleter=*/nullptr,
+    auto s = sim_cache_->Insert(key, /*obj=*/nullptr, &kNoopCacheItemHelper,
+                                value_size,
                                 /*handle=*/nullptr, priority);
     s.PermitUncheckedError();
   }
@@ -188,10 +187,10 @@ void HybridRowBlockCacheSimulator::Access(const BlockCacheTraceRecord& access) {
         /*update_metrics=*/true);
     if (access.referenced_data_size > 0 && inserted == InsertResult::ADMITTED) {
       // TODO: Should we check for an error here?
-      auto s = sim_cache_->Insert(row_key, /*value=*/nullptr,
-                                  access.referenced_data_size,
-                                  /*deleter=*/nullptr,
-                                  /*handle=*/nullptr, Cache::Priority::HIGH);
+      auto s =
+          sim_cache_->Insert(row_key, /*obj=*/nullptr, &kNoopCacheItemHelper,
+                             access.referenced_data_size,
+                             /*handle=*/nullptr, Cache::Priority::HIGH);
       s.PermitUncheckedError();
       status.row_key_status[row_key] = InsertResult::INSERTED;
     }
@@ -165,8 +165,8 @@ class SimCacheImpl : public SimCache {
   }
 
   using Cache::Insert;
-  Status Insert(const Slice& key, void* value, size_t charge,
-                void (*deleter)(const Slice& key, void* value), Handle** handle,
+  Status Insert(const Slice& key, Cache::ObjectPtr value,
+                const CacheItemHelper* helper, size_t charge, Handle** handle,
                 Priority priority) override {
     // The handle and value passed in are for real cache, so we pass nullptr
     // to key_only_cache_ for both instead. Also, the deleter function pointer
@@ -176,9 +176,8 @@ class SimCacheImpl : public SimCache {
     Handle* h = key_only_cache_->Lookup(key);
     if (h == nullptr) {
       // TODO: Check for error here?
-      auto s = key_only_cache_->Insert(
-          key, nullptr, charge, [](const Slice& /*k*/, void* /*v*/) {}, nullptr,
-          priority);
+      auto s = key_only_cache_->Insert(key, nullptr, &kNoopCacheItemHelper,
+                                       charge, nullptr, priority);
       s.PermitUncheckedError();
     } else {
       key_only_cache_->Release(h);
@@ -188,26 +187,18 @@ class SimCacheImpl : public SimCache {
     if (!cache_) {
       return Status::OK();
     }
-    return cache_->Insert(key, value, charge, deleter, handle, priority);
+    return cache_->Insert(key, value, helper, charge, handle, priority);
   }
 
-  using Cache::Lookup;
-  Handle* Lookup(const Slice& key, Statistics* stats) override {
-    Handle* h = key_only_cache_->Lookup(key);
-    if (h != nullptr) {
-      key_only_cache_->Release(h);
-      inc_hit_counter();
-      RecordTick(stats, SIM_BLOCK_CACHE_HIT);
-    } else {
-      inc_miss_counter();
-      RecordTick(stats, SIM_BLOCK_CACHE_MISS);
-    }
-
-    cache_activity_logger_.ReportLookup(key);
+  Handle* Lookup(const Slice& key, const CacheItemHelper* helper,
+                 CreateContext* create_context,
+                 Priority priority = Priority::LOW, bool wait = true,
+                 Statistics* stats = nullptr) override {
+    HandleLookup(key, stats);
     if (!cache_) {
       return nullptr;
     }
-    return cache_->Lookup(key, stats);
+    return cache_->Lookup(key, helper, create_context, priority, wait, stats);
   }
 
   bool Ref(Handle* handle) override { return cache_->Ref(handle); }
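One C++ subtlety in the new override above: the defaulted `priority`/`wait`/`stats` arguments bind to the static type at the call site, because default arguments are not virtual. The defaults only apply when calling through `SimCacheImpl*` (or a base that declares the same defaults). A minimal demonstration of that language rule, using hypothetical types unrelated to RocksDB:

```cpp
#include <iostream>

struct Base {
  virtual ~Base() = default;
  virtual int Get(int x = 1) const { return x; }
};

struct Derived : Base {
  int Get(int x = 2) const override { return x * 10; }
};

int main() {
  Derived d;
  const Base& b = d;
  // Default argument comes from the static type; the body is dispatched
  // dynamically in both cases.
  std::cout << d.Get() << "\n";  // 20: Derived's default (x = 2)
  std::cout << b.Get() << "\n";  // 10: Base's default (x = 1), Derived's body
  return 0;
}
```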
@@ -222,7 +213,9 @@ class SimCacheImpl : public SimCache {
     key_only_cache_->Erase(key);
   }
 
-  void* Value(Handle* handle) override { return cache_->Value(handle); }
+  Cache::ObjectPtr Value(Handle* handle) override {
+    return cache_->Value(handle);
+  }
 
   uint64_t NewId() override { return cache_->NewId(); }
 
@@ -242,8 +235,8 @@ class SimCacheImpl : public SimCache {
     return cache_->GetCharge(handle);
   }
 
-  DeleterFn GetDeleter(Handle* handle) const override {
-    return cache_->GetDeleter(handle);
+  const CacheItemHelper* GetCacheItemHelper(Handle* handle) const override {
+    return cache_->GetCacheItemHelper(handle);
   }
 
   size_t GetPinnedUsage() const override { return cache_->GetPinnedUsage(); }
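`GetDeleter` becoming `GetCacheItemHelper` is what retires the classify-entries-by-their-deleter hack mentioned in the summary: the helper carries an explicit `CacheEntryRole`, so accounting code can ask an entry what it is directly. A sketch of the idea (the enum values mirror RocksDB's `CacheEntryRole`, but the struct shape here is simplified):

```cpp
#include <iostream>

enum class CacheEntryRole { kDataBlock, kFilterBlock, kMisc };

struct CacheItemHelper {
  CacheEntryRole role;
};

const CacheItemHelper kDataBlockHelper{CacheEntryRole::kDataBlock};
const CacheItemHelper kFilterBlockHelper{CacheEntryRole::kFilterBlock};

const char* RoleName(const CacheItemHelper* helper) {
  switch (helper->role) {
    case CacheEntryRole::kDataBlock:
      return "DataBlock";
    case CacheEntryRole::kFilterBlock:
      return "FilterBlock";
    default:
      return "Misc";
  }
}

int main() {
  // Where old code asked "which deleter is this?", new code asks the helper.
  std::cout << RoleName(&kDataBlockHelper) << "\n";    // DataBlock
  std::cout << RoleName(&kFilterBlockHelper) << "\n";  // FilterBlock
  return 0;
}
```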
@@ -253,15 +246,9 @@ class SimCacheImpl : public SimCache {
     key_only_cache_->DisownData();
   }
 
-  void ApplyToAllCacheEntries(void (*callback)(void*, size_t),
-                              bool thread_safe) override {
-    // only apply to _cache since key_only_cache doesn't hold value
-    cache_->ApplyToAllCacheEntries(callback, thread_safe);
-  }
-
   void ApplyToAllEntries(
-      const std::function<void(const Slice& key, void* value, size_t charge,
-                               DeleterFn deleter)>& callback,
+      const std::function<void(const Slice& key, ObjectPtr value, size_t charge,
+                               const CacheItemHelper* helper)>& callback,
       const ApplyToAllEntriesOptions& opts) override {
     cache_->ApplyToAllEntries(callback, opts);
   }
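With the raw-function-pointer `ApplyToAllCacheEntries` overload deleted above, the `std::function`-based `ApplyToAllEntries` is the single iteration API, and its callback now sees the helper instead of a deleter. A simplified sketch of what that callback shape enables, e.g. per-role usage accounting (toy types, not the real interface):

```cpp
#include <functional>
#include <iostream>
#include <map>
#include <string>

using ObjectPtr = void*;
struct CacheItemHelper {
  const char* role_name;
};

struct Entry {
  ObjectPtr obj;
  size_t charge;
  const CacheItemHelper* helper;
};

// Iterates entries, handing each one's helper to the callback.
void ApplyToAllEntries(
    const std::map<std::string, Entry>& entries,
    const std::function<void(const std::string&, ObjectPtr, size_t,
                             const CacheItemHelper*)>& callback) {
  for (const auto& kv : entries) {
    callback(kv.first, kv.second.obj, kv.second.charge, kv.second.helper);
  }
}

int main() {
  const CacheItemHelper kDataHelper{"DataBlock"};
  std::map<std::string, Entry> entries{{"k1", {nullptr, 512, &kDataHelper}}};
  size_t total = 0;
  ApplyToAllEntries(entries, [&](const std::string&, ObjectPtr, size_t charge,
                                 const CacheItemHelper* helper) {
    total += charge;
    std::cout << helper->role_name << " += " << charge << "\n";
  });
  std::cout << "total=" << total << "\n";
  return 0;
}
```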
@@ -338,6 +325,19 @@ class SimCacheImpl : public SimCache {
     miss_times_.fetch_add(1, std::memory_order_relaxed);
   }
   void inc_hit_counter() { hit_times_.fetch_add(1, std::memory_order_relaxed); }
+
+  void HandleLookup(const Slice& key, Statistics* stats) {
+    Handle* h = key_only_cache_->Lookup(key);
+    if (h != nullptr) {
+      key_only_cache_->Release(h);
+      inc_hit_counter();
+      RecordTick(stats, SIM_BLOCK_CACHE_HIT);
+    } else {
+      inc_miss_counter();
+      RecordTick(stats, SIM_BLOCK_CACHE_MISS);
+    }
+    cache_activity_logger_.ReportLookup(key);
+  }
 };
 
 }  // end anonymous namespace