Mirror of https://github.com/facebook/rocksdb.git
30bc495c03
Summary: Delete range logic is moved from `DBIter` to `MergingIterator`, and `MergingIterator` will seek to the end of a range deletion if possible, instead of scanning through each key and checking with `RangeDelAggregator`.

With the invariant that a key in level L (consider the memtable as the first level, and each immutable memtable and L0 file as a separate level) has a larger sequence number than all keys in any level >L, a range tombstone `[start, end)` from level L covers all keys in its range in any level >L. This property motivates two optimizations in the iterator (see the sketch after the benchmark results below):
- In `Seek(target)`, if level L has a range tombstone `[start, end)` that covers `target.UserKey`, then for all levels >L, we can do Seek() on `end` instead of `target` to skip some range-tombstone-covered keys.
- In `Next()/Prev()`, if the current key is covered by a range tombstone `[start, end)` from level L, we can do `Seek` to `end` for all levels >L.

This PR implements the above optimizations in `MergingIterator`. As all range-tombstone-covered keys are now skipped in `MergingIterator`, the range tombstone logic is removed from `DBIter`. The idea in this PR is similar to https://github.com/facebook/rocksdb/issues/7317, but this PR leaves the `InternalIterator` interface mostly unchanged.

**Credit**: the cascading seek optimization and the sentinel key (discussed below) are inspired by [Pebble](https://github.com/cockroachdb/pebble/blob/master/merging_iter.go) and suggested by ajkr in https://github.com/facebook/rocksdb/issues/7317.

The two optimizations are mostly implemented in `SeekImpl()/SeekForPrevImpl()` and `IsNextDeleted()/IsPrevDeleted()` in `merging_iterator.cc`. See the comments on each method for more detail.

One notable change is that the minHeap/maxHeap used by `MergingIterator` now contains range tombstone end keys besides point key iterators. This helps to reduce the number of key comparisons. For example, for a range tombstone `[start, end)`, a `start` and an `end` `HeapItem` are inserted into the heap. When a `HeapItem` for a range tombstone start key is popped from the minHeap, we know this range tombstone becomes "active" in the sense that, until the range tombstone's end key is popped from the minHeap, all keys popped from this heap are covered by the range tombstone's internal key range `[start, end)`.

Another major change, the *delete range sentinel key*, is made to `LevelIterator`. Before this PR, when all point keys in an SST file had been iterated through in `MergingIterator`, a level iterator would advance to the next SST file in its level. In the case when an SST file has a range tombstone that covers keys beyond the SST file's last point key, advancing to the next SST file would lose this range tombstone. Consequently, `MergingIterator` could return keys that should have been deleted by some range tombstone. We prevent this by pretending that file boundaries in each SST file are sentinel keys. A `LevelIterator` now only advances the file iterator once the sentinel key is processed.

Pull Request resolved: https://github.com/facebook/rocksdb/pull/10449

Test Plan:
- Added many unit tests in db_range_del_test
- Stress test: `./db_stress --readpercent=5 --prefixpercent=19 --writepercent=20 -delpercent=10 --iterpercent=44 --delrangepercent=2`
- An additional iterator stress test is added to verify iterators against expected state: https://github.com/facebook/rocksdb/issues/10538. This is based on ajkr's previous attempt https://github.com/facebook/rocksdb/pull/5506#issuecomment-506021913.
```
python3 ./tools/db_crashtest.py blackbox --simple --write_buffer_size=524288 --target_file_size_base=524288 --max_bytes_for_level_base=2097152 --compression_type=none --max_background_compactions=8 --value_size_mult=33 --max_key=5000000 --interval=10 --duration=7200 --delrangepercent=3 --delpercent=9 --iterpercent=25 --writepercent=60 --readpercent=3 --prefixpercent=0 --num_iterations=1000 --range_deletion_width=100 --verify_iterator_with_expected_state_one_in=1
```
- Performance benchmark: I used a similar setup as in the blog [post](http://rocksdb.org/blog/2018/11/21/delete-range.html) that introduced DeleteRange: "a database with 5 million data keys, and 10000 range tombstones (ignoring those dropped during compaction) that were written in regular intervals after 4.5 million data keys were written". As expected, the performance with this PR depends on the range tombstone width.
```
# Setup:
TEST_TMPDIR=/dev/shm ./db_bench_main --benchmarks=fillrandom --writes=4500000 --num=5000000
TEST_TMPDIR=/dev/shm ./db_bench_main --benchmarks=overwrite --writes=500000 --num=5000000 --use_existing_db=true --writes_per_range_tombstone=50
# Scan entire DB
TEST_TMPDIR=/dev/shm ./db_bench_main --benchmarks=readseq[-X5] --use_existing_db=true --num=5000000 --disable_auto_compactions=true
# Short range scan (10 Next())
TEST_TMPDIR=/dev/shm/width-100/ ./db_bench_main --benchmarks=seekrandom[-X5] --use_existing_db=true --num=500000 --reads=100000 --seek_nexts=10 --disable_auto_compactions=true
# Long range scan (1000 Next())
TEST_TMPDIR=/dev/shm/width-100/ ./db_bench_main --benchmarks=seekrandom[-X5] --use_existing_db=true --num=500000 --reads=2500 --seek_nexts=1000 --disable_auto_compactions=true
```
Averages over 10 runs (some slower tests had fewer runs). For the first column (tombstone width), 0 means no range tombstones, 100-10000 means the width of the 10k range tombstones, and 1 means there is a single range tombstone in the entire DB (width is 1000). The 1-tombstone case tests for regression when there are very few range tombstones in the DB, since having no range tombstones at all likely takes a different code path than having some.
- Scan entire DB

| tombstone width | Pre-PR ops/sec | Post-PR ops/sec | ±% |
| ------------- | ------------- | ------------- | ------------- |
| 0 range tombstone | 2525600 (± 43564) | 2486917 (± 33698) | -1.53% |
| 100 | 1853835 (± 24736) | 2073884 (± 32176) | +11.87% |
| 1000 | 422415 (± 7466) | 1115801 (± 22781) | +164.15% |
| 10000 | 22384 (± 227) | 227919 (± 6647) | +918.22% |
| 1 range tombstone | 2176540 (± 39050) | 2434954 (± 24563) | +11.87% |

- Short range scan

| tombstone width | Pre-PR ops/sec | Post-PR ops/sec | ±% |
| ------------- | ------------- | ------------- | ------------- |
| 0 range tombstone | 35398 (± 533) | 35338 (± 569) | -0.17% |
| 100 | 28276 (± 664) | 31684 (± 331) | +12.05% |
| 1000 | 7637 (± 77) | 25422 (± 277) | +232.88% |
| 10000 | 1367 | 28667 | +1997.07% |
| 1 range tombstone | 32618 (± 581) | 32748 (± 506) | +0.4% |

- Long range scan

| tombstone width | Pre-PR ops/sec | Post-PR ops/sec | ±% |
| ------------- | ------------- | ------------- | ------------- |
| 0 range tombstone | 2262 (± 33) | 2353 (± 20) | +4.02% |
| 100 | 1696 (± 26) | 1926 (± 18) | +13.56% |
| 1000 | 410 (± 6) | 1255 (± 29) | +206.1% |
| 10000 | 25 | 414 | +1556.0% |
| 1 range tombstone | 1957 (± 30) | 2185 (± 44) | +11.65% |

- Microbench does not show significant regression: https://gist.github.com/cbi42/59f280f85a59b678e7e5d8561e693b61

Reviewed By: ajkr

Differential Revision: D38450331

Pulled By: cbi42

fbshipit-source-id: b5ef12e8d8c289ed2e163ccdf277f5039b511fca
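To make the cascading seek concrete, here is a minimal C++ sketch of the idea. This is not the actual `merging_iterator.cc` implementation: `Tombstone`, `LevelIter`, `CoveringTombstone()`, and `CascadingSeek()` are hypothetical stand-ins that assume each level can report the range tombstone, if any, covering a given key.
```
#include <string>
#include <vector>

// Hypothetical stand-in: a range tombstone covering user keys [start, end).
struct Tombstone {
  std::string start, end;
  bool valid = false;
};

// Hypothetical stand-in for one level's iterator state.
struct LevelIter {
  virtual void Seek(const std::string& target) = 0;
  // Returns the tombstone from this level covering `target`, if any.
  virtual Tombstone CoveringTombstone(const std::string& target) = 0;
  virtual ~LevelIter() {}
};

// Levels are ordered newest (memtable) to oldest. A tombstone in level L
// covers all keys in its range in every level > L, so once a covering
// tombstone [start, end) is found, all deeper levels can seek straight to
// `end` instead of `target`, skipping keys that are deleted anyway.
void CascadingSeek(std::vector<LevelIter*>& levels, std::string target) {
  for (LevelIter* level : levels) {
    level->Seek(target);
    Tombstone t = level->CoveringTombstone(target);
    if (t.valid) {
      target = t.end;  // cascade: older levels seek past the covered range
    }
  }
  // A real merging iterator would now rebuild its min-heap from the
  // children's positions (including range tombstone end-key HeapItems).
}
```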
// Copyright (c) 2011-present, Facebook, Inc. All rights reserved.
// This source code is licensed under both the GPLv2 (found in the
// COPYING file in the root directory) and Apache 2.0 License
// (found in the LICENSE.Apache file in the root directory).
//

#pragma once

#include <string>

#include "db/dbformat.h"
#include "file/readahead_file_info.h"
#include "rocksdb/comparator.h"
#include "rocksdb/iterator.h"
#include "rocksdb/status.h"
#include "table/format.h"

namespace ROCKSDB_NAMESPACE {

class PinnedIteratorsManager;

enum class IterBoundCheck : char {
  kUnknown = 0,
  kOutOfBound,
  kInbound,
};

struct IterateResult {
  Slice key;
  IterBoundCheck bound_check_result = IterBoundCheck::kUnknown;
  // If false, PrepareValue() needs to be called before value().
  bool value_prepared = true;
};

template <class TValue>
class InternalIteratorBase : public Cleanable {
 public:
  InternalIteratorBase() {}

  // No copying allowed
  InternalIteratorBase(const InternalIteratorBase&) = delete;
  InternalIteratorBase& operator=(const InternalIteratorBase&) = delete;

  virtual ~InternalIteratorBase() {}

  // An iterator is either positioned at a key/value pair, or
  // not valid. This method returns true iff the iterator is valid.
  // Always returns false if !status().ok().
  virtual bool Valid() const = 0;

  // Position at the first key in the source. The iterator is Valid()
  // after this call iff the source is not empty.
  virtual void SeekToFirst() = 0;

  // Position at the last key in the source. The iterator is
  // Valid() after this call iff the source is not empty.
  virtual void SeekToLast() = 0;

  // Position at the first key in the source that is at or past target.
  // The iterator is Valid() after this call iff the source contains
  // an entry that comes at or past target.
  // All Seek*() methods clear any error status() that the iterator had prior
  // to the call; after the seek, status() indicates only the error (if any)
  // that happened during the seek, not any past errors.
  // 'target' contains the user timestamp if timestamps are enabled.
  virtual void Seek(const Slice& target) = 0;

  // Position at the last key in the source that is at or before target.
  // The iterator is Valid() after this call iff the source contains
  // an entry that comes at or before target.
  virtual void SeekForPrev(const Slice& target) = 0;

  // Moves to the next entry in the source. After this call, Valid() is
  // true iff the iterator was not positioned at the last entry in the source.
  // REQUIRES: Valid()
  virtual void Next() = 0;

  // Moves to the next entry in the source and returns the result. Iterator
  // implementations should override this method to help methods inline
  // better, or when UpperBoundCheckResult() is non-trivial.
  // REQUIRES: Valid()
  virtual bool NextAndGetResult(IterateResult* result) {
    Next();
    bool is_valid = Valid();
    if (is_valid) {
      result->key = key();
      // Default bound_check_result to kUnknown to avoid an unnecessary
      // virtual call. If an implementation has a non-trivial
      // UpperBoundCheckResult(), it should also override NextAndGetResult().
      result->bound_check_result = IterBoundCheck::kUnknown;
      result->value_prepared = false;
      assert(UpperBoundCheckResult() != IterBoundCheck::kOutOfBound);
    }
    return is_valid;
  }
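
  // Illustrative usage (not part of this header): a tight scan loop can use
  // NextAndGetResult() to save virtual calls, then consult
  // UpperBoundCheckResult() once iteration stops. ConsumeKey is a
  // hypothetical callback.
  //
  //   IterateResult result;
  //   while (iter->NextAndGetResult(&result)) {
  //     ConsumeKey(result.key);  // call PrepareValue() first if the value
  //                              // is needed and !result.value_prepared
  //   }
  //   if (iter->UpperBoundCheckResult() == IterBoundCheck::kOutOfBound) {
  //     // Iteration stopped at iterate_upper_bound, not due to an error.
  //   }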

  // Moves to the previous entry in the source. After this call, Valid() is
  // true iff the iterator was not positioned at the first entry in the
  // source.
  // REQUIRES: Valid()
  virtual void Prev() = 0;

  // Return the key for the current entry. The underlying storage for
  // the returned slice is valid only until the next modification of
  // the iterator.
  // REQUIRES: Valid()
  virtual Slice key() const = 0;

  // Return the user key for the current entry.
  // REQUIRES: Valid()
  virtual Slice user_key() const { return ExtractUserKey(key()); }

  // Return the value for the current entry. The underlying storage for
  // the returned slice is valid only until the next modification of
  // the iterator.
  // REQUIRES: Valid()
  // REQUIRES: PrepareValue() has been called if needed (see PrepareValue()).
  virtual TValue value() const = 0;

  // If an error has occurred, return it. Else return an ok status.
  // If non-blocking IO is requested and this operation cannot be
  // satisfied without doing some IO, then this returns Status::Incomplete().
  virtual Status status() const = 0;

  // For some types of iterators, Seek()/Next()/SeekForPrev()/etc. may load
  // the key but not the value (to avoid the IO cost of reading the value
  // from disk if it won't be needed). This method loads the value in such a
  // situation.
  //
  // Needs to be called before value() at least once after each iterator
  // movement (except if IterateResult::value_prepared = true), for iterators
  // created with allow_unprepared_value = true.
  //
  // Returns false if an error occurred; in this case Valid() is also changed
  // to false, and status() is changed to non-ok.
  // REQUIRES: Valid()
  virtual bool PrepareValue() { return true; }
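
  // Illustrative usage (not part of this header): with an iterator created
  // with allow_unprepared_value = true, a scan that reads values might look
  // like the following. ProcessEntry is a hypothetical callback.
  //
  //   for (iter->SeekToFirst(); iter->Valid(); iter->Next()) {
  //     if (!iter->PrepareValue()) {
  //       break;  // IO error: Valid() is now false and status() is non-ok
  //     }
  //     ProcessEntry(iter->key(), iter->value());
  //   }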

  // Keys returned from this iterator can be smaller than iterate_lower_bound.
  virtual bool MayBeOutOfLowerBound() { return true; }

  // If the iterator has checked the key against iterate_upper_bound, returns
  // the result here. The function can be used by the user of the iterator to
  // skip their own checks. If Valid() = true, IterBoundCheck::kUnknown is
  // always a valid value. If Valid() = false, IterBoundCheck::kOutOfBound
  // indicates that the iterator is filtered out by upper bound checks.
  virtual IterBoundCheck UpperBoundCheckResult() {
    return IterBoundCheck::kUnknown;
  }

  // Pass the PinnedIteratorsManager to the Iterator. Most Iterators don't
  // communicate with PinnedIteratorsManager, so the default implementation
  // is a no-op; Iterators that need to communicate with
  // PinnedIteratorsManager implement this function and use the passed
  // pointer to do so.
  virtual void SetPinnedItersMgr(PinnedIteratorsManager* /*pinned_iters_mgr*/) {
  }

  // If true, this means that the Slice returned by key() is valid as long as
  // PinnedIteratorsManager::ReleasePinnedData is not called and the
  // Iterator is not deleted.
  //
  // IsKeyPinned() is guaranteed to always return true if
  // - the Iterator is created with ReadOptions::pin_data = true, and
  // - DB tables were created with BlockBasedTableOptions::use_delta_encoding
  //   set to false.
  virtual bool IsKeyPinned() const { return false; }

  // If true, this means that the Slice returned by value() is valid as long
  // as PinnedIteratorsManager::ReleasePinnedData is not called and the
  // Iterator is not deleted.
  // REQUIRES: Same as for value().
  virtual bool IsValuePinned() const { return false; }

  virtual Status GetProperty(std::string /*prop_name*/, std::string* /*prop*/) {
    return Status::NotSupported("");
  }

  // When the iterator moves from one file to another file at the same level,
  // the new file's readahead state (details of the last block read) is
  // updated with the previous file's readahead state. This way the internal
  // readahead_size of the Prefetch Buffer doesn't start from scratch and can
  // fall back to 8KB with no prefetch if reads are not sequential.
  //
  // The default implementation is a no-op; it is implemented by iterators
  // that support readahead.
  virtual void GetReadaheadState(ReadaheadFileInfo* /*readahead_file_info*/) {}

  // The default implementation is a no-op; it is implemented by iterators
  // that support readahead.
  virtual void SetReadaheadState(ReadaheadFileInfo* /*readahead_file_info*/) {}

  // When used under a merging iterator, LevelIterator treats file boundaries
  // as sentinel keys to prevent it from moving to the next SST file before
  // range tombstones in the current SST file are no longer needed. This
  // method makes it cheap to check if the current key is a sentinel key.
  // This should only be used by MergingIterator and LevelIterator for now.
  virtual bool IsDeleteRangeSentinelKey() const { return false; }
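
  // Illustrative (not part of this header): a merging iterator can skip
  // sentinel keys instead of surfacing them to its caller, e.g.
  //
  //   if (current->IsDeleteRangeSentinelKey()) {
  //     // The child is at a file boundary; advancing it lets the
  //     // LevelIterator move on to the next SST file, now that the current
  //     // file's range tombstones are no longer needed.
  //     current->Next();
  //   }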

 protected:
  // A fallback implementation of SeekForPrev() in terms of Seek(): position
  // at the first key >= target, then step back until the current key is at
  // or before target.
  void SeekForPrevImpl(const Slice& target, const CompareInterface* cmp) {
    Seek(target);
    if (!Valid()) {
      SeekToLast();
    }
    while (Valid() && cmp->Compare(target, key()) < 0) {
      Prev();
    }
  }

  bool is_mutable_;
};

using InternalIterator = InternalIteratorBase<Slice>;

// Return an empty iterator (yields nothing).
template <class TValue = Slice>
extern InternalIteratorBase<TValue>* NewEmptyInternalIterator();

// Return an empty iterator with the specified status.
template <class TValue = Slice>
extern InternalIteratorBase<TValue>* NewErrorInternalIterator(
    const Status& status);

// Return an empty iterator with the specified status, allocated from an
// arena.
template <class TValue = Slice>
extern InternalIteratorBase<TValue>* NewErrorInternalIterator(
    const Status& status, Arena* arena);

}  // namespace ROCKSDB_NAMESPACE