Mirror of https://github.com/facebook/rocksdb.git, synced 2024-11-28 05:43:50 +00:00
Commit 30bc495c03
Summary: Delete range logic is moved from `DBIter` to `MergingIterator`, and `MergingIterator` now seeks to the end of a range deletion when possible, instead of scanning through each key and checking it against `RangeDelAggregator`.

With the invariant that a key in level L (consider the memtable as the first level, and each immutable memtable and L0 file as a separate level) has a larger sequence number than all keys in any level > L, a range tombstone `[start, end)` from level L covers all keys in its range in any level > L. This property motivates two optimizations in the iterator (see the sketch after this summary):
- In `Seek(target)`, if level L has a range tombstone `[start, end)` that covers `target.UserKey`, then for all levels > L, we can `Seek()` on `end` instead of `target` to skip some keys covered by the range tombstone.
- In `Next()/Prev()`, if the current key is covered by a range tombstone `[start, end)` from level L, we can `Seek()` to `end` for all levels > L.

This PR implements the above optimizations in `MergingIterator`. As all range-tombstone-covered keys are now skipped in `MergingIterator`, the range tombstone logic is removed from `DBIter`. The idea in this PR is similar to https://github.com/facebook/rocksdb/issues/7317, but this PR leaves the `InternalIterator` interface mostly unchanged.

**Credit**: the cascading seek optimization and the sentinel key (discussed below) are inspired by [Pebble](https://github.com/cockroachdb/pebble/blob/master/merging_iter.go) and were suggested by ajkr in https://github.com/facebook/rocksdb/issues/7317.

The two optimizations are mostly implemented in `SeekImpl()/SeekForPrevImpl()` and `IsNextDeleted()/IsPrevDeleted()` in `merging_iterator.cc`. See the comments on each method for more detail.

One notable change is that the minHeap/maxHeap used by `MergingIterator` now contains range tombstone end keys in addition to point key iterators. This helps to reduce the number of key comparisons. For example, for a range tombstone `[start, end)`, a `start` and an `end` `HeapItem` are inserted into the heap. When a `HeapItem` for a range tombstone start key is popped from the minHeap, we know this range tombstone becomes "active", in the sense that, until the range tombstone's end key is popped from the minHeap, all keys popped from the heap are covered by the range tombstone's internal key range `[start, end)`.

Another major change, the *delete range sentinel key*, is made to `LevelIterator`. Before this PR, once all point keys in an SST file had been iterated through in `MergingIterator`, the level iterator would advance to the next SST file in its level. When an SST file has a range tombstone that covers keys beyond the file's last point key, advancing to the next SST file would lose this range tombstone, and `MergingIterator` could then return keys that should have been deleted by it. We prevent this by pretending that the file boundaries in each SST file are sentinel keys: a `LevelIterator` now only advances the file iterator once the sentinel key is processed.

Pull Request resolved: https://github.com/facebook/rocksdb/pull/10449
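To make the cascading seek concrete, below is a minimal, self-contained sketch of the idea. `ToyLevel` and `CascadingSeek` are hypothetical names for illustration, not the PR's `MergingIterator` code: when a range tombstone at level L covers the current seek target, every level below L is sought to the tombstone's end key instead.

```
#include <algorithm>
#include <iostream>
#include <string>
#include <vector>

// Toy model of levels: each level is a sorted vector of user keys, and may
// carry one range tombstone [ts_start, ts_end). All names here are
// hypothetical; this is not the RocksDB API.
struct ToyLevel {
  std::vector<std::string> keys;
  bool has_tombstone = false;
  std::string ts_start, ts_end;
};

// Seek every level to the first key >= target. If a level has a tombstone
// covering the current target, all lower levels' targets are redirected to
// the tombstone's end key -- the cascading seek.
std::vector<size_t> CascadingSeek(const std::vector<ToyLevel>& levels,
                                  std::string target) {
  std::vector<size_t> pos(levels.size());
  for (size_t l = 0; l < levels.size(); ++l) {
    const ToyLevel& lv = levels[l];
    pos[l] = static_cast<size_t>(
        std::lower_bound(lv.keys.begin(), lv.keys.end(), target) -
        lv.keys.begin());
    if (lv.has_tombstone && lv.ts_start <= target && target < lv.ts_end) {
      // Keys in [ts_start, ts_end) at levels > l are deleted: seek those
      // levels directly to ts_end instead of target.
      target = lv.ts_end;
    }
  }
  return pos;
}

int main() {
  std::vector<ToyLevel> levels(2);
  levels[0].keys = {"b", "z"};
  levels[0].has_tombstone = true;
  levels[0].ts_start = "c";  // tombstone [c, p) at the upper level
  levels[0].ts_end = "p";
  levels[1].keys = {"a", "d", "g", "m", "q"};

  std::vector<size_t> pos = CascadingSeek(levels, "d");
  // Level 1 is sought once to "p" and lands on "q" (index 4), skipping
  // the covered keys d, g, m in a single seek.
  std::cout << levels[1].keys[pos[1]] << "\n";  // prints: q
}
```

In the PR itself, these redirected seeks are performed in `SeekImpl()/SeekForPrevImpl()`, and covered keys discovered during `Next()/Prev()` trigger the same redirection from `IsNextDeleted()/IsPrevDeleted()`.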
Test Plan:
- Added many unit tests in db_range_del_test.
- Stress test: `./db_stress --readpercent=5 --prefixpercent=19 --writepercent=20 -delpercent=10 --iterpercent=44 --delrangepercent=2`
- An additional iterator stress test is added to verify iterators against the expected state: https://github.com/facebook/rocksdb/issues/10538. This is based on ajkr's previous attempt https://github.com/facebook/rocksdb/pull/5506#issuecomment-506021913.

```
python3 ./tools/db_crashtest.py blackbox --simple --write_buffer_size=524288 --target_file_size_base=524288 --max_bytes_for_level_base=2097152 --compression_type=none --max_background_compactions=8 --value_size_mult=33 --max_key=5000000 --interval=10 --duration=7200 --delrangepercent=3 --delpercent=9 --iterpercent=25 --writepercent=60 --readpercent=3 --prefixpercent=0 --num_iterations=1000 --range_deletion_width=100 --verify_iterator_with_expected_state_one_in=1
```

- Performance benchmark: I used a setup similar to the blog [post](http://rocksdb.org/blog/2018/11/21/delete-range.html) that introduced DeleteRange: "a database with 5 million data keys, and 10000 range tombstones (ignoring those dropped during compaction) that were written in regular intervals after 4.5 million data keys were written". As expected, the performance with this PR depends on the range tombstone width.

```
# Setup:
TEST_TMPDIR=/dev/shm ./db_bench_main --benchmarks=fillrandom --writes=4500000 --num=5000000
TEST_TMPDIR=/dev/shm ./db_bench_main --benchmarks=overwrite --writes=500000 --num=5000000 --use_existing_db=true --writes_per_range_tombstone=50

# Scan entire DB
TEST_TMPDIR=/dev/shm ./db_bench_main --benchmarks=readseq[-X5] --use_existing_db=true --num=5000000 --disable_auto_compactions=true

# Short range scan (10 Next())
TEST_TMPDIR=/dev/shm/width-100/ ./db_bench_main --benchmarks=seekrandom[-X5] --use_existing_db=true --num=500000 --reads=100000 --seek_nexts=10 --disable_auto_compactions=true

# Long range scan (1000 Next())
TEST_TMPDIR=/dev/shm/width-100/ ./db_bench_main --benchmarks=seekrandom[-X5] --use_existing_db=true --num=500000 --reads=2500 --seek_nexts=1000 --disable_auto_compactions=true
```

Average over 10 runs (some slower tests had fewer runs). In the first column, 0 means no range tombstones, 100-10000 is the width of each of the 10k range tombstones, and 1 means a single range tombstone of width 1000 in the entire DB. The single-tombstone case checks for regression when there are very few range tombstones, since a DB with no range tombstones at all likely takes a different code path.
- Scan entire DB

| tombstone width | Pre-PR ops/sec | Post-PR ops/sec | ±% |
| ------------- | ------------- | ------------- | ------------- |
| 0 range tombstone | 2525600 (± 43564) | 2486917 (± 33698) | -1.53% |
| 100 | 1853835 (± 24736) | 2073884 (± 32176) | +11.87% |
| 1000 | 422415 (± 7466) | 1115801 (± 22781) | +164.15% |
| 10000 | 22384 (± 227) | 227919 (± 6647) | +918.22% |
| 1 range tombstone | 2176540 (± 39050) | 2434954 (± 24563) | +11.87% |

- Short range scan

| tombstone width | Pre-PR ops/sec | Post-PR ops/sec | ±% |
| ------------- | ------------- | ------------- | ------------- |
| 0 range tombstone | 35398 (± 533) | 35338 (± 569) | -0.17% |
| 100 | 28276 (± 664) | 31684 (± 331) | +12.05% |
| 1000 | 7637 (± 77) | 25422 (± 277) | +232.88% |
| 10000 | 1367 | 28667 | +1997.07% |
| 1 range tombstone | 32618 (± 581) | 32748 (± 506) | +0.4% |

- Long range scan

| tombstone width | Pre-PR ops/sec | Post-PR ops/sec | ±% |
| ------------- | ------------- | ------------- | ------------- |
| 0 range tombstone | 2262 (± 33) | 2353 (± 20) | +4.02% |
| 100 | 1696 (± 26) | 1926 (± 18) | +13.56% |
| 1000 | 410 (± 6) | 1255 (± 29) | +206.1% |
| 10000 | 25 | 414 | +1556.0% |
| 1 range tombstone | 1957 (± 30) | 2185 (± 44) | +11.65% |

- Microbenchmarks do not show significant regression: https://gist.github.com/cbi42/59f280f85a59b678e7e5d8561e693b61

Reviewed By: ajkr

Differential Revision: D38450331

Pulled By: cbi42

fbshipit-source-id: b5ef12e8d8c289ed2e163ccdf277f5039b511fca
728 lines · 26 KiB · C++
// Copyright (c) 2011-present, Facebook, Inc. All rights reserved.
// This source code is licensed under both the GPLv2 (found in the
// COPYING file in the root directory) and Apache 2.0 License
// (found in the LICENSE.Apache file in the root directory).
//
// Copyright (c) 2011 The LevelDB Authors. All rights reserved.
// Use of this source code is governed by a BSD-style license that can be
// found in the LICENSE file. See the AUTHORS file for names of contributors.

#include "db/table_cache.h"

#include "db/dbformat.h"
#include "db/range_tombstone_fragmenter.h"
#include "db/snapshot_impl.h"
#include "db/version_edit.h"
#include "file/file_util.h"
#include "file/filename.h"
#include "file/random_access_file_reader.h"
#include "monitoring/perf_context_imp.h"
#include "rocksdb/advanced_options.h"
#include "rocksdb/statistics.h"
#include "table/block_based/block_based_table_reader.h"
#include "table/get_context.h"
#include "table/internal_iterator.h"
#include "table/iterator_wrapper.h"
#include "table/multiget_context.h"
#include "table/table_builder.h"
#include "table/table_reader.h"
#include "test_util/sync_point.h"
#include "util/cast_util.h"
#include "util/coding.h"
#include "util/stop_watch.h"

namespace ROCKSDB_NAMESPACE {
namespace {
template <class T>
static void DeleteEntry(const Slice& /*key*/, void* value) {
  T* typed_value = reinterpret_cast<T*>(value);
  delete typed_value;
}
}  // namespace
}  // namespace ROCKSDB_NAMESPACE

// Generate the regular and coroutine versions of some methods by
// including table_cache_sync_and_async.h twice
// Macros in the header will expand differently based on whether
// WITH_COROUTINES or WITHOUT_COROUTINES is defined
// clang-format off
#define WITHOUT_COROUTINES
#include "db/table_cache_sync_and_async.h"
#undef WITHOUT_COROUTINES
#define WITH_COROUTINES
#include "db/table_cache_sync_and_async.h"
#undef WITH_COROUTINES
// clang-format on

namespace ROCKSDB_NAMESPACE {

namespace {

static void UnrefEntry(void* arg1, void* arg2) {
  Cache* cache = reinterpret_cast<Cache*>(arg1);
  Cache::Handle* h = reinterpret_cast<Cache::Handle*>(arg2);
  cache->Release(h);
}
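
// The table cache key for a file is the raw in-memory bytes of its file
// number (sizeof(uint64_t) bytes, native endianness); HasEntry() and Evict()
// below build their lookup keys the same way.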
static Slice GetSliceForFileNumber(const uint64_t* file_number) {
  return Slice(reinterpret_cast<const char*>(file_number),
               sizeof(*file_number));
}

#ifndef ROCKSDB_LITE

void AppendVarint64(IterKey* key, uint64_t v) {
  char buf[10];
  auto ptr = EncodeVarint64(buf, v);
  key->TrimAppend(key->Size(), buf, ptr - buf);
}

#endif  // ROCKSDB_LITE

}  // namespace

const int kLoadConcurency = 128;

TableCache::TableCache(const ImmutableOptions& ioptions,
                       const FileOptions* file_options, Cache* const cache,
                       BlockCacheTracer* const block_cache_tracer,
                       const std::shared_ptr<IOTracer>& io_tracer,
                       const std::string& db_session_id)
    : ioptions_(ioptions),
      file_options_(*file_options),
      cache_(cache),
      immortal_tables_(false),
      block_cache_tracer_(block_cache_tracer),
      loader_mutex_(kLoadConcurency, kGetSliceNPHash64UnseededFnPtr),
      io_tracer_(io_tracer),
      db_session_id_(db_session_id) {
  if (ioptions_.row_cache) {
    // If the same cache is shared by multiple instances, we need to
    // disambiguate its entries.
    PutVarint64(&row_cache_id_, ioptions_.row_cache->NewId());
  }
}

TableCache::~TableCache() {
}

TableReader* TableCache::GetTableReaderFromHandle(Cache::Handle* handle) {
  return reinterpret_cast<TableReader*>(cache_->Value(handle));
}

void TableCache::ReleaseHandle(Cache::Handle* handle) {
  cache_->Release(handle);
}
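
// Opens the table file named by `fd` and builds a TableReader for it. If the
// file is not found under its current name, retries with the legacy LevelDB
// file name (Rocks2LevelTableFileName) before giving up.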
Status TableCache::GetTableReader(
    const ReadOptions& ro, const FileOptions& file_options,
    const InternalKeyComparator& internal_comparator, const FileDescriptor& fd,
    bool sequential_mode, bool record_read_stats, HistogramImpl* file_read_hist,
    std::unique_ptr<TableReader>* table_reader,
    const std::shared_ptr<const SliceTransform>& prefix_extractor,
    bool skip_filters, int level, bool prefetch_index_and_filter_in_cache,
    size_t max_file_size_for_l0_meta_pin, Temperature file_temperature) {
  std::string fname =
      TableFileName(ioptions_.cf_paths, fd.GetNumber(), fd.GetPathId());
  std::unique_ptr<FSRandomAccessFile> file;
  FileOptions fopts = file_options;
  fopts.temperature = file_temperature;
  Status s = PrepareIOFromReadOptions(ro, ioptions_.clock, fopts.io_options);
  if (s.ok()) {
    s = ioptions_.fs->NewRandomAccessFile(fname, fopts, &file, nullptr);
  }
  if (s.ok()) {
    RecordTick(ioptions_.stats, NO_FILE_OPENS);
  } else if (s.IsPathNotFound()) {
    fname = Rocks2LevelTableFileName(fname);
    s = PrepareIOFromReadOptions(ro, ioptions_.clock, fopts.io_options);
    if (s.ok()) {
      s = ioptions_.fs->NewRandomAccessFile(fname, file_options, &file,
                                            nullptr);
    }
    if (s.ok()) {
      RecordTick(ioptions_.stats, NO_FILE_OPENS);
    }
  }

  if (s.ok()) {
    if (!sequential_mode && ioptions_.advise_random_on_open) {
      file->Hint(FSRandomAccessFile::kRandom);
    }
    StopWatch sw(ioptions_.clock, ioptions_.stats, TABLE_OPEN_IO_MICROS);
    std::unique_ptr<RandomAccessFileReader> file_reader(
        new RandomAccessFileReader(
            std::move(file), fname, ioptions_.clock, io_tracer_,
            record_read_stats ? ioptions_.stats : nullptr, SST_READ_MICROS,
            file_read_hist, ioptions_.rate_limiter.get(), ioptions_.listeners,
            file_temperature, level == ioptions_.num_levels - 1));
    s = ioptions_.table_factory->NewTableReader(
        ro,
        TableReaderOptions(
            ioptions_, prefix_extractor, file_options, internal_comparator,
            skip_filters, immortal_tables_, false /* force_direct_prefetch */,
            level, fd.largest_seqno, block_cache_tracer_,
            max_file_size_for_l0_meta_pin, db_session_id_, fd.GetNumber()),
        std::move(file_reader), fd.GetFileSize(), table_reader,
        prefetch_index_and_filter_in_cache);
    TEST_SYNC_POINT("TableCache::GetTableReader:0");
  }
  return s;
}

void TableCache::EraseHandle(const FileDescriptor& fd, Cache::Handle* handle) {
  ReleaseHandle(handle);
  uint64_t number = fd.GetNumber();
  Slice key = GetSliceForFileNumber(&number);
  cache_->Erase(key);
}
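
// Looks up the file in the table cache, opening it and inserting a new
// TableReader on a miss. The miss path re-checks the cache under a mutex
// shard keyed by the cache key, so concurrent readers of the same file do
// not all open it at once.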
Status TableCache::FindTable(
    const ReadOptions& ro, const FileOptions& file_options,
    const InternalKeyComparator& internal_comparator, const FileDescriptor& fd,
    Cache::Handle** handle,
    const std::shared_ptr<const SliceTransform>& prefix_extractor,
    const bool no_io, bool record_read_stats, HistogramImpl* file_read_hist,
    bool skip_filters, int level, bool prefetch_index_and_filter_in_cache,
    size_t max_file_size_for_l0_meta_pin, Temperature file_temperature) {
  PERF_TIMER_GUARD_WITH_CLOCK(find_table_nanos, ioptions_.clock);
  uint64_t number = fd.GetNumber();
  Slice key = GetSliceForFileNumber(&number);
  *handle = cache_->Lookup(key);
  TEST_SYNC_POINT_CALLBACK("TableCache::FindTable:0",
                           const_cast<bool*>(&no_io));

  if (*handle == nullptr) {
    if (no_io) {
      return Status::Incomplete("Table not found in table_cache, no_io is set");
    }
    MutexLock load_lock(loader_mutex_.get(key));
    // We check the cache again under loading mutex
    *handle = cache_->Lookup(key);
    if (*handle != nullptr) {
      return Status::OK();
    }

    std::unique_ptr<TableReader> table_reader;
    Status s = GetTableReader(
        ro, file_options, internal_comparator, fd, false /* sequential mode */,
        record_read_stats, file_read_hist, &table_reader, prefix_extractor,
        skip_filters, level, prefetch_index_and_filter_in_cache,
        max_file_size_for_l0_meta_pin, file_temperature);
    if (!s.ok()) {
      assert(table_reader == nullptr);
      RecordTick(ioptions_.stats, NO_FILE_ERRORS);
      // We do not cache error results so that if the error is transient,
      // or somebody repairs the file, we recover automatically.
    } else {
      s = cache_->Insert(key, table_reader.get(), 1, &DeleteEntry<TableReader>,
                         handle);
      if (s.ok()) {
        // Release ownership of table reader.
        table_reader.release();
      }
    }
    return s;
  }
  return Status::OK();
}

InternalIterator* TableCache::NewIterator(
    const ReadOptions& options, const FileOptions& file_options,
    const InternalKeyComparator& icomparator, const FileMetaData& file_meta,
    RangeDelAggregator* range_del_agg,
    const std::shared_ptr<const SliceTransform>& prefix_extractor,
    TableReader** table_reader_ptr, HistogramImpl* file_read_hist,
    TableReaderCaller caller, Arena* arena, bool skip_filters, int level,
    size_t max_file_size_for_l0_meta_pin,
    const InternalKey* smallest_compaction_key,
    const InternalKey* largest_compaction_key, bool allow_unprepared_value,
    TruncatedRangeDelIterator** range_del_iter) {
  PERF_TIMER_GUARD(new_table_iterator_nanos);

  Status s;
  TableReader* table_reader = nullptr;
  Cache::Handle* handle = nullptr;
  if (table_reader_ptr != nullptr) {
    *table_reader_ptr = nullptr;
  }
  bool for_compaction = caller == TableReaderCaller::kCompaction;
  auto& fd = file_meta.fd;
  table_reader = fd.table_reader;
  if (table_reader == nullptr) {
    s = FindTable(
        options, file_options, icomparator, fd, &handle, prefix_extractor,
        options.read_tier == kBlockCacheTier /* no_io */,
        !for_compaction /* record_read_stats */, file_read_hist, skip_filters,
        level, true /* prefetch_index_and_filter_in_cache */,
        max_file_size_for_l0_meta_pin, file_meta.temperature);
    if (s.ok()) {
      table_reader = GetTableReaderFromHandle(handle);
    }
  }
  InternalIterator* result = nullptr;
  if (s.ok()) {
    if (options.table_filter &&
        !options.table_filter(*table_reader->GetTableProperties())) {
      result = NewEmptyInternalIterator<Slice>(arena);
    } else {
      result = table_reader->NewIterator(
          options, prefix_extractor.get(), arena, skip_filters, caller,
          file_options.compaction_readahead_size, allow_unprepared_value);
    }
    if (handle != nullptr) {
      result->RegisterCleanup(&UnrefEntry, cache_, handle);
      handle = nullptr;  // prevent from releasing below
    }

    if (for_compaction) {
      table_reader->SetupForCompaction();
    }
    if (table_reader_ptr != nullptr) {
      *table_reader_ptr = table_reader;
    }
  }
  if (s.ok() && !options.ignore_range_deletions) {
    if (range_del_iter != nullptr) {
      auto new_range_del_iter =
          table_reader->NewRangeTombstoneIterator(options);
      if (new_range_del_iter == nullptr || new_range_del_iter->empty()) {
        delete new_range_del_iter;
        *range_del_iter = nullptr;
      } else {
        *range_del_iter = new TruncatedRangeDelIterator(
            std::unique_ptr<FragmentedRangeTombstoneIterator>(
                new_range_del_iter),
            &icomparator, &file_meta.smallest, &file_meta.largest);
      }
    }
    if (range_del_agg != nullptr) {
      if (range_del_agg->AddFile(fd.GetNumber())) {
        std::unique_ptr<FragmentedRangeTombstoneIterator> new_range_del_iter(
            static_cast<FragmentedRangeTombstoneIterator*>(
                table_reader->NewRangeTombstoneIterator(options)));
        if (new_range_del_iter != nullptr) {
          s = new_range_del_iter->status();
        }
        if (s.ok()) {
          const InternalKey* smallest = &file_meta.smallest;
          const InternalKey* largest = &file_meta.largest;
          if (smallest_compaction_key != nullptr) {
            smallest = smallest_compaction_key;
          }
          if (largest_compaction_key != nullptr) {
            largest = largest_compaction_key;
          }
          range_del_agg->AddTombstones(std::move(new_range_del_iter), smallest,
                                       largest);
        }
      }
    }
  }

  if (handle != nullptr) {
    ReleaseHandle(handle);
  }
  if (!s.ok()) {
    assert(result == nullptr);
    result = NewErrorInternalIterator<Slice>(s, arena);
  }
  return result;
}

Status TableCache::GetRangeTombstoneIterator(
    const ReadOptions& options,
    const InternalKeyComparator& internal_comparator,
    const FileMetaData& file_meta,
    std::unique_ptr<FragmentedRangeTombstoneIterator>* out_iter) {
  assert(out_iter);
  const FileDescriptor& fd = file_meta.fd;
  Status s;
  TableReader* t = fd.table_reader;
  Cache::Handle* handle = nullptr;
  if (t == nullptr) {
    s = FindTable(options, file_options_, internal_comparator, fd, &handle);
    if (s.ok()) {
      t = GetTableReaderFromHandle(handle);
    }
  }
  if (s.ok()) {
    // Note: NewRangeTombstoneIterator could return nullptr
    out_iter->reset(t->NewRangeTombstoneIterator(options));
  }
  if (handle) {
    if (*out_iter) {
      (*out_iter)->RegisterCleanup(&UnrefEntry, cache_, handle);
    } else {
      ReleaseHandle(handle);
    }
  }
  return s;
}

#ifndef ROCKSDB_LITE
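
// Builds the row cache key prefix: row_cache_id, then the file number and a
// (possibly snapshot-derived) sequence number, both varint64-encoded.
// GetFromRowCache() appends the user key to this prefix to form the full
// lookup key.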
void TableCache::CreateRowCacheKeyPrefix(const ReadOptions& options,
                                         const FileDescriptor& fd,
                                         const Slice& internal_key,
                                         GetContext* get_context,
                                         IterKey& row_cache_key) {
  uint64_t fd_number = fd.GetNumber();
  // We use the user key as cache key instead of the internal key,
  // otherwise the whole cache would be invalidated every time the
  // sequence key increases. However, to support caching snapshot
  // reads, we append the sequence number (incremented by 1 to
  // distinguish from 0) only in this case.
  // If the snapshot is larger than the largest seqno in the file,
  // all data should be exposed to the snapshot, so we treat it
  // the same as there is no snapshot. The exception is that if
  // a seq-checking callback is registered, some internal keys
  // may still be filtered out.
  uint64_t seq_no = 0;
  // Maybe we can include the whole file if snapshot == fd.largest_seqno.
  if (options.snapshot != nullptr &&
      (get_context->has_callback() ||
       static_cast_with_check<const SnapshotImpl>(options.snapshot)
               ->GetSequenceNumber() <= fd.largest_seqno)) {
    // We should consider using options.snapshot->GetSequenceNumber()
    // instead of GetInternalKeySeqno(k), which would make the code
    // easier to understand.
    seq_no = 1 + GetInternalKeySeqno(internal_key);
  }

  // Compute row cache key.
  row_cache_key.TrimAppend(row_cache_key.Size(), row_cache_id_.data(),
                           row_cache_id_.size());
  AppendVarint64(&row_cache_key, fd_number);
  AppendVarint64(&row_cache_key, seq_no);
}

bool TableCache::GetFromRowCache(const Slice& user_key, IterKey& row_cache_key,
                                 size_t prefix_size, GetContext* get_context) {
  bool found = false;

  row_cache_key.TrimAppend(prefix_size, user_key.data(), user_key.size());
  if (auto row_handle =
          ioptions_.row_cache->Lookup(row_cache_key.GetUserKey())) {
    // Cleanable routine to release the cache entry
    Cleanable value_pinner;
    auto release_cache_entry_func = [](void* cache_to_clean,
                                       void* cache_handle) {
      ((Cache*)cache_to_clean)->Release((Cache::Handle*)cache_handle);
    };
    auto found_row_cache_entry =
        static_cast<const std::string*>(ioptions_.row_cache->Value(row_handle));
    // If we get here, the value is located in the cache.
    // found_row_cache_entry points to the value in the cache,
    // and value_pinner has the cleanup procedure for the cached entry.
    // After replayGetContextLog() returns, get_context.pinnable_slice_
    // will point to the cache entry buffer (or a copy based on it) and
    // the cleanup routine under value_pinner will be delegated to
    // get_context.pinnable_slice_. The cache entry is released when
    // get_context.pinnable_slice_ is reset.
    value_pinner.RegisterCleanup(release_cache_entry_func,
                                 ioptions_.row_cache.get(), row_handle);
    replayGetContextLog(*found_row_cache_entry, user_key, get_context,
                        &value_pinner);
    RecordTick(ioptions_.stats, ROW_CACHE_HIT);
    found = true;
  } else {
    RecordTick(ioptions_.stats, ROW_CACHE_MISS);
  }
  return found;
}
#endif  // ROCKSDB_LITE

Status TableCache::Get(
    const ReadOptions& options,
    const InternalKeyComparator& internal_comparator,
    const FileMetaData& file_meta, const Slice& k, GetContext* get_context,
    const std::shared_ptr<const SliceTransform>& prefix_extractor,
    HistogramImpl* file_read_hist, bool skip_filters, int level,
    size_t max_file_size_for_l0_meta_pin) {
  auto& fd = file_meta.fd;
  std::string* row_cache_entry = nullptr;
  bool done = false;
#ifndef ROCKSDB_LITE
  IterKey row_cache_key;
  std::string row_cache_entry_buffer;

  // Check row cache if enabled. Since row cache does not currently store
  // sequence numbers, we cannot use it if we need to fetch the sequence.
  if (ioptions_.row_cache && !get_context->NeedToReadSequence()) {
    auto user_key = ExtractUserKey(k);
    CreateRowCacheKeyPrefix(options, fd, k, get_context, row_cache_key);
    done = GetFromRowCache(user_key, row_cache_key, row_cache_key.Size(),
                           get_context);
    if (!done) {
      row_cache_entry = &row_cache_entry_buffer;
    }
  }
#endif  // ROCKSDB_LITE
  Status s;
  TableReader* t = fd.table_reader;
  Cache::Handle* handle = nullptr;
  if (!done) {
    assert(s.ok());
    if (t == nullptr) {
      s = FindTable(options, file_options_, internal_comparator, fd, &handle,
                    prefix_extractor,
                    options.read_tier == kBlockCacheTier /* no_io */,
                    true /* record_read_stats */, file_read_hist, skip_filters,
                    level, true /* prefetch_index_and_filter_in_cache */,
                    max_file_size_for_l0_meta_pin, file_meta.temperature);
      if (s.ok()) {
        t = GetTableReaderFromHandle(handle);
      }
    }
    SequenceNumber* max_covering_tombstone_seq =
        get_context->max_covering_tombstone_seq();
    if (s.ok() && max_covering_tombstone_seq != nullptr &&
        !options.ignore_range_deletions) {
      std::unique_ptr<FragmentedRangeTombstoneIterator> range_del_iter(
          t->NewRangeTombstoneIterator(options));
      if (range_del_iter != nullptr) {
        *max_covering_tombstone_seq = std::max(
            *max_covering_tombstone_seq,
            range_del_iter->MaxCoveringTombstoneSeqnum(ExtractUserKey(k)));
      }
    }
    if (s.ok()) {
      get_context->SetReplayLog(row_cache_entry);  // nullptr if no cache.
      s = t->Get(options, k, get_context, prefix_extractor.get(), skip_filters);
      get_context->SetReplayLog(nullptr);
    } else if (options.read_tier == kBlockCacheTier && s.IsIncomplete()) {
      // Couldn't find the table in the cache, but treat the key as possibly
      // found (kFound) since no_io is set.
      get_context->MarkKeyMayExist();
      s = Status::OK();
      done = true;
    }
  }

#ifndef ROCKSDB_LITE
  // Put the replay log in the row cache only if something was found.
  if (!done && s.ok() && row_cache_entry && !row_cache_entry->empty()) {
    size_t charge = row_cache_entry->capacity() + sizeof(std::string);
    void* row_ptr = new std::string(std::move(*row_cache_entry));
    // If the row cache is full, it's OK to continue.
    ioptions_.row_cache
        ->Insert(row_cache_key.GetUserKey(), row_ptr, charge,
                 &DeleteEntry<std::string>)
        .PermitUncheckedError();
  }
#endif  // ROCKSDB_LITE

  if (handle != nullptr) {
    ReleaseHandle(handle);
  }
  return s;
}
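
// For each key in `table_range`, raise its max covering tombstone seqno to
// the largest seqno of any range tombstone in this table that covers the key.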
void TableCache::UpdateRangeTombstoneSeqnums(
    const ReadOptions& options, TableReader* t,
    MultiGetContext::Range& table_range) {
  std::unique_ptr<FragmentedRangeTombstoneIterator> range_del_iter(
      t->NewRangeTombstoneIterator(options));
  if (range_del_iter != nullptr) {
    for (auto iter = table_range.begin(); iter != table_range.end(); ++iter) {
      SequenceNumber* max_covering_tombstone_seq =
          iter->get_context->max_covering_tombstone_seq();
      *max_covering_tombstone_seq = std::max(
          *max_covering_tombstone_seq,
          range_del_iter->MaxCoveringTombstoneSeqnum(iter->ukey_with_ts));
    }
  }
}

Status TableCache::MultiGetFilter(
    const ReadOptions& options,
    const InternalKeyComparator& internal_comparator,
    const FileMetaData& file_meta,
    const std::shared_ptr<const SliceTransform>& prefix_extractor,
    HistogramImpl* file_read_hist, int level,
    MultiGetContext::Range* mget_range, Cache::Handle** table_handle) {
  auto& fd = file_meta.fd;
#ifndef ROCKSDB_LITE
  IterKey row_cache_key;
  std::string row_cache_entry_buffer;

  // Check if we need to use the row cache. If yes, then we cannot do the
  // filtering here, since the filtering needs to happen after the row cache
  // lookup.
  KeyContext& first_key = *mget_range->begin();
  if (ioptions_.row_cache && !first_key.get_context->NeedToReadSequence()) {
    return Status::NotSupported();
  }
#endif  // ROCKSDB_LITE
  Status s;
  TableReader* t = fd.table_reader;
  Cache::Handle* handle = nullptr;
  MultiGetContext::Range tombstone_range(*mget_range, mget_range->begin(),
                                         mget_range->end());
  if (t == nullptr) {
    s = FindTable(
        options, file_options_, internal_comparator, fd, &handle,
        prefix_extractor, options.read_tier == kBlockCacheTier /* no_io */,
        true /* record_read_stats */, file_read_hist, /*skip_filters=*/false,
        level, true /* prefetch_index_and_filter_in_cache */,
        /*max_file_size_for_l0_meta_pin=*/0, file_meta.temperature);
    if (s.ok()) {
      t = GetTableReaderFromHandle(handle);
    }
    *table_handle = handle;
  }
  if (s.ok()) {
    s = t->MultiGetFilter(options, prefix_extractor.get(), mget_range);
  }
  if (s.ok() && !options.ignore_range_deletions) {
    // Update the range tombstone sequence numbers for the keys here
    // as TableCache::MultiGet may or may not be called, and even if it
    // is, it may be called with fewer keys in the range due to filtering.
    UpdateRangeTombstoneSeqnums(options, t, tombstone_range);
  }
  if (mget_range->empty() && handle) {
    ReleaseHandle(handle);
    *table_handle = nullptr;
  }

  return s;
}

Status TableCache::GetTableProperties(
    const FileOptions& file_options,
    const InternalKeyComparator& internal_comparator, const FileDescriptor& fd,
    std::shared_ptr<const TableProperties>* properties,
    const std::shared_ptr<const SliceTransform>& prefix_extractor, bool no_io) {
  auto table_reader = fd.table_reader;
  // Has the table already been pre-loaded?
  if (table_reader) {
    *properties = table_reader->GetTableProperties();

    return Status::OK();
  }

  Cache::Handle* table_handle = nullptr;
  Status s = FindTable(ReadOptions(), file_options, internal_comparator, fd,
                       &table_handle, prefix_extractor, no_io);
  if (!s.ok()) {
    return s;
  }
  assert(table_handle);
  auto table = GetTableReaderFromHandle(table_handle);
  *properties = table->GetTableProperties();
  ReleaseHandle(table_handle);
  return s;
}

Status TableCache::ApproximateKeyAnchors(
    const ReadOptions& ro, const InternalKeyComparator& internal_comparator,
    const FileDescriptor& fd, std::vector<TableReader::Anchor>& anchors) {
  Status s;
  TableReader* t = fd.table_reader;
  Cache::Handle* handle = nullptr;
  if (t == nullptr) {
    s = FindTable(ro, file_options_, internal_comparator, fd, &handle);
    if (s.ok()) {
      t = GetTableReaderFromHandle(handle);
    }
  }
  if (s.ok() && t != nullptr) {
    s = t->ApproximateKeyAnchors(ro, anchors);
  }
  if (handle != nullptr) {
    ReleaseHandle(handle);
  }
  return s;
}

size_t TableCache::GetMemoryUsageByTableReader(
    const FileOptions& file_options,
    const InternalKeyComparator& internal_comparator, const FileDescriptor& fd,
    const std::shared_ptr<const SliceTransform>& prefix_extractor) {
  auto table_reader = fd.table_reader;
  // Has the table already been pre-loaded?
  if (table_reader) {
    return table_reader->ApproximateMemoryUsage();
  }

  Cache::Handle* table_handle = nullptr;
  Status s = FindTable(ReadOptions(), file_options, internal_comparator, fd,
                       &table_handle, prefix_extractor, true /* no_io */);
  if (!s.ok()) {
    return 0;
  }
  assert(table_handle);
  auto table = GetTableReaderFromHandle(table_handle);
  auto ret = table->ApproximateMemoryUsage();
  ReleaseHandle(table_handle);
  return ret;
}

bool TableCache::HasEntry(Cache* cache, uint64_t file_number) {
  Cache::Handle* handle = cache->Lookup(GetSliceForFileNumber(&file_number));
  if (handle) {
    cache->Release(handle);
    return true;
  } else {
    return false;
  }
}

void TableCache::Evict(Cache* cache, uint64_t file_number) {
  cache->Erase(GetSliceForFileNumber(&file_number));
}

uint64_t TableCache::ApproximateOffsetOf(
    const Slice& key, const FileDescriptor& fd, TableReaderCaller caller,
    const InternalKeyComparator& internal_comparator,
    const std::shared_ptr<const SliceTransform>& prefix_extractor) {
  uint64_t result = 0;
  TableReader* table_reader = fd.table_reader;
  Cache::Handle* table_handle = nullptr;
  if (table_reader == nullptr) {
    const bool for_compaction = (caller == TableReaderCaller::kCompaction);
    Status s = FindTable(ReadOptions(), file_options_, internal_comparator, fd,
                         &table_handle, prefix_extractor, false /* no_io */,
                         !for_compaction /* record_read_stats */);
    if (s.ok()) {
      table_reader = GetTableReaderFromHandle(table_handle);
    }
  }

  if (table_reader != nullptr) {
    result = table_reader->ApproximateOffsetOf(key, caller);
  }
  if (table_handle != nullptr) {
    ReleaseHandle(table_handle);
  }

  return result;
}

uint64_t TableCache::ApproximateSize(
    const Slice& start, const Slice& end, const FileDescriptor& fd,
    TableReaderCaller caller, const InternalKeyComparator& internal_comparator,
    const std::shared_ptr<const SliceTransform>& prefix_extractor) {
  uint64_t result = 0;
  TableReader* table_reader = fd.table_reader;
  Cache::Handle* table_handle = nullptr;
  if (table_reader == nullptr) {
    const bool for_compaction = (caller == TableReaderCaller::kCompaction);
    Status s = FindTable(ReadOptions(), file_options_, internal_comparator, fd,
                         &table_handle, prefix_extractor, false /* no_io */,
                         !for_compaction /* record_read_stats */);
    if (s.ok()) {
      table_reader = GetTableReaderFromHandle(table_handle);
    }
  }

  if (table_reader != nullptr) {
    result = table_reader->ApproximateSize(start, end, caller);
  }
  if (table_handle != nullptr) {
    ReleaseHandle(table_handle);
  }

  return result;
}
}  // namespace ROCKSDB_NAMESPACE