// Copyright (c) 2018-present, Facebook, Inc. All rights reserved.
// This source code is licensed under both the GPLv2 (found in the
// COPYING file in the root directory) and Apache 2.0 License
// (found in the LICENSE.Apache file in the root directory).

#include "db/range_del_aggregator.h"

#include "db/compaction/compaction_iteration_stats.h"
#include "db/dbformat.h"
#include "db/pinned_iterators_manager.h"
#include "db/range_tombstone_fragmenter.h"
#include "db/version_edit.h"
#include "rocksdb/comparator.h"
#include "rocksdb/types.h"
#include "table/internal_iterator.h"
#include "table/scoped_arena_iterator.h"
#include "table/table_builder.h"
#include "util/heap.h"
#include "util/kv_map.h"
#include "util/vector_iterator.h"
namespace ROCKSDB_NAMESPACE {
TruncatedRangeDelIterator::TruncatedRangeDelIterator(
    std::unique_ptr<FragmentedRangeTombstoneIterator> iter,
    const InternalKeyComparator* icmp, const InternalKey* smallest,
    const InternalKey* largest)
    : iter_(std::move(iter)),
      icmp_(icmp),
      smallest_ikey_(smallest),
      largest_ikey_(largest) {
  if (smallest != nullptr) {
    pinned_bounds_.emplace_back();
    auto& parsed_smallest = pinned_bounds_.back();
    if (!ParseInternalKey(smallest->Encode(), &parsed_smallest)) {
      assert(false);
    }
    smallest_ = &parsed_smallest;
  }
  if (largest != nullptr) {
    pinned_bounds_.emplace_back();
    auto& parsed_largest = pinned_bounds_.back();
    if (!ParseInternalKey(largest->Encode(), &parsed_largest)) {
      assert(false);
    }
    if (parsed_largest.type == kTypeRangeDeletion &&
        parsed_largest.sequence == kMaxSequenceNumber) {
      // The file boundary has been artificially extended by a range tombstone.
      // We do not need to adjust largest to properly truncate range
      // tombstones that extend past the boundary.
    } else if (parsed_largest.sequence == 0) {
      // The largest key in the sstable has a sequence number of 0. Since we
      // guarantee that no internal keys with the same user key and sequence
      // number can exist in a DB, we know that the largest key in this sstable
      // cannot exist as the smallest key in the next sstable. This further
      // implies that no range tombstone in this sstable covers largest;
      // otherwise, the file boundary would have been artificially extended.
      //
      // Therefore, we will never truncate a range tombstone at largest, so we
      // can leave it unchanged.
    } else {
      // The same user key may straddle two sstable boundaries. To ensure that
      // the truncated end key can cover the largest key in this sstable, reduce
      // its sequence number by 1.
      parsed_largest.sequence -= 1;
    }
    largest_ = &parsed_largest;
  }
}
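// Illustrative example of the adjustment above: if largest is the point key
// ("k", seq=100, kTypeValue), internal keys with equal user keys sort by
// descending sequence number, so a tombstone end truncated at ("k", 100)
// would not cover ("k", 100) itself (coverage requires the key to compare
// strictly before the end key). Lowering the sequence to 99 yields
// ("k", 100) < ("k", 99) in internal-key order, so tombstones clipped at
// largest_ still cover this file's largest key.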
bool TruncatedRangeDelIterator::Valid() const {
  return iter_->Valid() &&
         (smallest_ == nullptr ||
          icmp_->Compare(*smallest_, iter_->parsed_end_key()) < 0) &&
         (largest_ == nullptr ||
          icmp_->Compare(iter_->parsed_start_key(), *largest_) < 0);
}
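// In other words, the truncated iterator only exposes tombstones that overlap
// the file's effective range: the current tombstone must end after smallest_
// and start before largest_.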
void TruncatedRangeDelIterator::Next() { iter_->TopNext(); }

void TruncatedRangeDelIterator::Prev() { iter_->TopPrev(); }

void TruncatedRangeDelIterator::InternalNext() { iter_->Next(); }
// NOTE: target is a user key
void TruncatedRangeDelIterator::Seek(const Slice& target) {
  if (largest_ != nullptr &&
      icmp_->Compare(*largest_, ParsedInternalKey(target, kMaxSequenceNumber,
                                                  kTypeRangeDeletion)) <= 0) {
    iter_->Invalidate();
    return;
  }
  if (smallest_ != nullptr &&
      icmp_->user_comparator()->Compare(target, smallest_->user_key) < 0) {
    iter_->Seek(smallest_->user_key);
    return;
  }
  iter_->Seek(target);
}
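// Seek() clamps the target to the truncation bounds: a target at or past the
// largest_ boundary invalidates the iterator (no tombstone from this file can
// apply), while a target below smallest_ is bumped up to smallest_->user_key
// so that tombstones are never exposed outside the file's key range.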
// NOTE: target is a user key
void TruncatedRangeDelIterator::SeekForPrev(const Slice& target) {
  if (smallest_ != nullptr &&
      icmp_->Compare(ParsedInternalKey(target, 0, kTypeRangeDeletion),
                     *smallest_) < 0) {
    iter_->Invalidate();
    return;
  }
  if (largest_ != nullptr &&
      icmp_->user_comparator()->Compare(largest_->user_key, target) < 0) {
    iter_->SeekForPrev(largest_->user_key);
    return;
  }
  iter_->SeekForPrev(target);
}
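// SeekForPrev() mirrors Seek(): targets below the smallest_ boundary
// invalidate the iterator, and targets past largest_ are clamped down to
// largest_->user_key.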
void TruncatedRangeDelIterator::SeekToFirst() {
  if (smallest_ != nullptr) {
    iter_->Seek(smallest_->user_key);
    return;
  }
  iter_->SeekToTopFirst();
}

void TruncatedRangeDelIterator::SeekToLast() {
  if (largest_ != nullptr) {
    iter_->SeekForPrev(largest_->user_key);
    return;
  }
  iter_->SeekToTopLast();
}
std::map<SequenceNumber, std::unique_ptr<TruncatedRangeDelIterator>>
TruncatedRangeDelIterator::SplitBySnapshot(
    const std::vector<SequenceNumber>& snapshots) {
  using FragmentedIterPair =
      std::pair<const SequenceNumber,
                std::unique_ptr<FragmentedRangeTombstoneIterator>>;

  auto split_untruncated_iters = iter_->SplitBySnapshot(snapshots);
  std::map<SequenceNumber, std::unique_ptr<TruncatedRangeDelIterator>>
      split_truncated_iters;
  std::for_each(
      split_untruncated_iters.begin(), split_untruncated_iters.end(),
      [&](FragmentedIterPair& iter_pair) {
        std::unique_ptr<TruncatedRangeDelIterator> truncated_iter(
            new TruncatedRangeDelIterator(std::move(iter_pair.second), icmp_,
                                          smallest_ikey_, largest_ikey_));
        split_truncated_iters.emplace(iter_pair.first,
                                      std::move(truncated_iter));
      });
  return split_truncated_iters;
}
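// SplitBySnapshot() lets the underlying fragmented iterator partition its
// tombstones by snapshot stripe and then rewraps each piece with the same
// truncation bounds, so per-stripe processing sees the same clipped key range
// as the original iterator.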
ForwardRangeDelIterator::ForwardRangeDelIterator(
    const InternalKeyComparator* icmp)
    : icmp_(icmp),
      unused_idx_(0),
      active_seqnums_(SeqMaxComparator()),
      active_iters_(EndKeyMinComparator(icmp)),
      inactive_iters_(StartKeyMinComparator(icmp)) {}
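// Heap/set layout used by the forward iterator: active_iters_ orders the
// currently overlapping tombstones by ascending end key so expired ones can
// be popped cheaply, inactive_iters_ orders not-yet-reached tombstones by
// ascending start key, and active_seqnums_ keeps the active tombstones sorted
// by descending sequence number so the newest covering tombstone is at the
// front.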
bool ForwardRangeDelIterator::ShouldDelete(const ParsedInternalKey& parsed) {
  // Move active iterators that end before parsed.
  while (!active_iters_.empty() &&
         icmp_->Compare((*active_iters_.top())->end_key(), parsed) <= 0) {
    TruncatedRangeDelIterator* iter = PopActiveIter();
    do {
      iter->Next();
    } while (iter->Valid() && icmp_->Compare(iter->end_key(), parsed) <= 0);
    PushIter(iter, parsed);
    assert(active_iters_.size() == active_seqnums_.size());
  }

  // Move inactive iterators that start before parsed.
  while (!inactive_iters_.empty() &&
         icmp_->Compare(inactive_iters_.top()->start_key(), parsed) <= 0) {
    TruncatedRangeDelIterator* iter = PopInactiveIter();
    while (iter->Valid() && icmp_->Compare(iter->end_key(), parsed) <= 0) {
      iter->Next();
    }
    PushIter(iter, parsed);
    assert(active_iters_.size() == active_seqnums_.size());
  }

  return active_seqnums_.empty()
             ? false
             : (*active_seqnums_.begin())->seq() > parsed.sequence;
}
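// After the two loops above, every known tombstone whose range contains
// parsed is in active_iters_/active_seqnums_, and the key is dropped iff the
// newest covering tombstone is newer than the key. For example, with an
// active tombstone [a, m)@50, key b@30 is deleted (50 > 30) while b@70 is
// kept.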
void ForwardRangeDelIterator::Invalidate() {
  unused_idx_ = 0;
  active_iters_.clear();
  active_seqnums_.clear();
  inactive_iters_.clear();
}
ReverseRangeDelIterator::ReverseRangeDelIterator(
    const InternalKeyComparator* icmp)
    : icmp_(icmp),
      unused_idx_(0),
      active_seqnums_(SeqMaxComparator()),
      active_iters_(StartKeyMaxComparator(icmp)),
      inactive_iters_(EndKeyMaxComparator(icmp)) {}
bool ReverseRangeDelIterator::ShouldDelete(const ParsedInternalKey& parsed) {
  // Move active iterators that start after parsed.
  while (!active_iters_.empty() &&
         icmp_->Compare(parsed, (*active_iters_.top())->start_key()) < 0) {
    TruncatedRangeDelIterator* iter = PopActiveIter();
    do {
      iter->Prev();
    } while (iter->Valid() && icmp_->Compare(parsed, iter->start_key()) < 0);
    PushIter(iter, parsed);
    assert(active_iters_.size() == active_seqnums_.size());
  }

  // Move inactive iterators that end after parsed.
  while (!inactive_iters_.empty() &&
         icmp_->Compare(parsed, inactive_iters_.top()->end_key()) < 0) {
    TruncatedRangeDelIterator* iter = PopInactiveIter();
    while (iter->Valid() && icmp_->Compare(parsed, iter->start_key()) < 0) {
      iter->Prev();
    }
    PushIter(iter, parsed);
    assert(active_iters_.size() == active_seqnums_.size());
  }

  return active_seqnums_.empty()
             ? false
             : (*active_seqnums_.begin())->seq() > parsed.sequence;
}
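// Mirror image of ForwardRangeDelIterator::ShouldDelete(): the heaps are
// keyed for descending traversal (active tombstones by maximum start key,
// inactive ones by maximum end key), but the final check against the active
// sequence numbers is identical.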
void ReverseRangeDelIterator::Invalidate() {
  unused_idx_ = 0;
  active_iters_.clear();
  active_seqnums_.clear();
  inactive_iters_.clear();
}
bool RangeDelAggregator::StripeRep::ShouldDelete(
    const ParsedInternalKey& parsed, RangeDelPositioningMode mode) {
  if (!InStripe(parsed.sequence) || IsEmpty()) {
    return false;
  }
  switch (mode) {
    case RangeDelPositioningMode::kForwardTraversal:
      InvalidateReverseIter();

      // Pick up previously unseen iterators.
      for (auto it = std::next(iters_.begin(), forward_iter_.UnusedIdx());
           it != iters_.end(); ++it, forward_iter_.IncUnusedIdx()) {
        auto& iter = *it;
        forward_iter_.AddNewIter(iter.get(), parsed);
      }

      return forward_iter_.ShouldDelete(parsed);
    case RangeDelPositioningMode::kBackwardTraversal:
      InvalidateForwardIter();

      // Pick up previously unseen iterators.
      for (auto it = std::next(iters_.begin(), reverse_iter_.UnusedIdx());
           it != iters_.end(); ++it, reverse_iter_.IncUnusedIdx()) {
        auto& iter = *it;
        reverse_iter_.AddNewIter(iter.get(), parsed);
      }

      return reverse_iter_.ShouldDelete(parsed);
    default:
      assert(false);
      return false;
  }
}
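// The positioning mode lets callers that visit keys in order keep the forward
// (or reverse) iterator positioned incrementally instead of re-seeking for
// every key. Switching direction invalidates the opposite iterator, and
// UnusedIdx() tracks how many of iters_ have already been handed to each
// direction so that newly added tombstone iterators are picked up lazily.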
bool RangeDelAggregator::StripeRep::IsRangeOverlapped(const Slice& start,
                                                      const Slice& end) {
  Invalidate();

  // Set the internal start/end keys so that:
  // - if start_ikey has the same user key and sequence number as the
  //   current end key, start_ikey will be considered greater; and
  // - if end_ikey has the same user key and sequence number as the current
  //   start key, end_ikey will be considered greater.
  ParsedInternalKey start_ikey(start, kMaxSequenceNumber,
                               static_cast<ValueType>(0));
  ParsedInternalKey end_ikey(end, 0, static_cast<ValueType>(0));
  for (auto& iter : iters_) {
    bool checked_candidate_tombstones = false;
    for (iter->SeekForPrev(start);
         iter->Valid() && icmp_->Compare(iter->start_key(), end_ikey) <= 0;
         iter->Next()) {
      checked_candidate_tombstones = true;
      if (icmp_->Compare(start_ikey, iter->end_key()) < 0 &&
          icmp_->Compare(iter->start_key(), end_ikey) <= 0) {
        return true;
      }
    }

    if (!checked_candidate_tombstones) {
      // Do an additional check for when the end of the range is the begin
      // key of a tombstone, which we missed earlier since SeekForPrev'ing
      // to the start was invalid.
      iter->SeekForPrev(end);
      if (iter->Valid() && icmp_->Compare(start_ikey, iter->end_key()) < 0 &&
          icmp_->Compare(iter->start_key(), end_ikey) <= 0) {
        return true;
      }
    }
  }
  return false;
}
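// IsRangeOverlapped() reports whether any tombstone in this stripe intersects
// the user-key range [start, end]; the second SeekForPrev(end) pass catches
// tombstones that begin after start (but at or before end) when the initial
// SeekForPrev(start) left the iterator invalid.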
void ReadRangeDelAggregator::AddTombstones(
    std::unique_ptr<FragmentedRangeTombstoneIterator> input_iter,
    const InternalKey* smallest, const InternalKey* largest) {
  if (input_iter == nullptr || input_iter->empty()) {
    return;
  }
  rep_.AddTombstones(
      std::unique_ptr<TruncatedRangeDelIterator>(new TruncatedRangeDelIterator(
          std::move(input_iter), icmp_, smallest, largest)));
}
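
// Returns true if `parsed` is covered by a range tombstone with a higher
// sequence number. `mode` is a hint about how the caller moves through keys
// (sequential iteration, a fresh seek, or an isolated point lookup) so the
// stripe can reuse its cached position instead of re-searching from scratch.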
bool ReadRangeDelAggregator::ShouldDeleteImpl(const ParsedInternalKey& parsed,
                                              RangeDelPositioningMode mode) {
  return rep_.ShouldDelete(parsed, mode);
}
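
// Reports whether any registered tombstone overlaps the user-key range
// [start, end]. Cached positions are invalidated first because this check
// repositions the underlying iterators.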
bool ReadRangeDelAggregator::IsRangeOverlapped(const Slice& start,
                                               const Slice& end) {
  InvalidateRangeDelMapPositions();
  return rep_.IsRangeOverlapped(start, end);
}
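
// Registers one file's fragmented range tombstones for compaction. The
// iterator is truncated to the file's boundaries and then partitioned by
// snapshot stripe, since a tombstone may only drop keys within its own
// stripe.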
void CompactionRangeDelAggregator::AddTombstones(
    std::unique_ptr<FragmentedRangeTombstoneIterator> input_iter,
    const InternalKey* smallest, const InternalKey* largest) {
  if (input_iter == nullptr || input_iter->empty()) {
    return;
  }
  assert(input_iter->lower_bound() == 0);
  assert(input_iter->upper_bound() == kMaxSequenceNumber);
  parent_iters_.emplace_back(new TruncatedRangeDelIterator(
      std::move(input_iter), icmp_, smallest, largest));
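
  // Split the truncated tombstones by snapshot stripe and hand each piece to
  // the StripeRep covering that sequence-number range, creating the rep on
  // first use.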
  auto split_iters = parent_iters_.back()->SplitBySnapshot(*snapshots_);
  for (auto& split_iter : split_iters) {
    auto it = reps_.find(split_iter.first);
    if (it == reps_.end()) {
      bool inserted;
      SequenceNumber upper_bound = split_iter.second->upper_bound();
      SequenceNumber lower_bound = split_iter.second->lower_bound();
      std::tie(it, inserted) = reps_.emplace(
          split_iter.first, StripeRep(icmp_, upper_bound, lower_bound));
      assert(inserted);
    }
    assert(it != reps_.end());
    it->second.AddTombstones(std::move(split_iter.second));
  }
}
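
// A key can only be dropped by a tombstone in its own snapshot stripe, so
// find the stripe whose upper bound is at or above the key's sequence number
// and delegate the covering check to it.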
bool CompactionRangeDelAggregator::ShouldDelete(const ParsedInternalKey& parsed,
                                                RangeDelPositioningMode mode) {
  auto it = reps_.lower_bound(parsed.sequence);
  if (it == reps_.end()) {
    return false;
  }
  return it->second.ShouldDelete(parsed, mode);
}

namespace {
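
// Merges the per-file truncated range tombstone iterators into a single
// InternalIterator ordered by start key. Each entry's key is the tombstone's
// start key encoded as an internal key and its value is the end user key;
// the merged stream is consumed when building the compaction-wide
// FragmentedRangeTombstoneList in NewIterator().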
class TruncatedRangeDelMergingIter : public InternalIterator {
 public:
  TruncatedRangeDelMergingIter(
      const InternalKeyComparator* icmp, const Slice* lower_bound,
      const Slice* upper_bound, bool upper_bound_inclusive,
      const std::vector<std::unique_ptr<TruncatedRangeDelIterator>>& children)
      : icmp_(icmp),
        lower_bound_(lower_bound),
        upper_bound_(upper_bound),
        upper_bound_inclusive_(upper_bound_inclusive),
        heap_(StartKeyMinComparator(icmp)) {
    for (auto& child : children) {
      if (child != nullptr) {
        assert(child->lower_bound() == 0);
        assert(child->upper_bound() == kMaxSequenceNumber);
        children_.push_back(child.get());
      }
    }
  }

  bool Valid() const override {
    return !heap_.empty() && BeforeEndKey(heap_.top());
  }
  Status status() const override { return Status::OK(); }
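
  // Positions every child at the aggregator's lower bound (or at its first
  // tombstone when no bound was given) and rebuilds the min-heap from the
  // children that are still valid.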
  void SeekToFirst() override {
    heap_.clear();
    for (auto& child : children_) {
      if (lower_bound_ != nullptr) {
        child->Seek(*lower_bound_);
      } else {
        child->SeekToFirst();
      }
      if (child->Valid()) {
        heap_.push(child);
      }
    }
  }
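
  // Advances the child at the top of the heap to its next tombstone fragment
  // and re-heapifies, dropping the child once it is exhausted.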
  void Next() override {
    auto* top = heap_.top();
    top->InternalNext();
    if (top->Valid()) {
      heap_.replace_top(top);
    } else {
      heap_.pop();
    }
  }
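
  // Encodes the top child's current start key and sequence number as an
  // internal key of type kTypeRangeDeletion.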
  Slice key() const override {
    auto* top = heap_.top();
    cur_start_key_.Set(top->start_key().user_key, top->seq(),
                       kTypeRangeDeletion);
    return cur_start_key_.Encode();
  }
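
  // Returns the tombstone's end user key. Truncated iterators expose end keys
  // with kMaxSequenceNumber, so the user key alone marks the fragment's
  // exclusive upper bound.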
  Slice value() const override {
    auto* top = heap_.top();
    assert(top->end_key().sequence == kMaxSequenceNumber);
    return top->end_key().user_key;
  }

  // Unused InternalIterator methods
  void Prev() override { assert(false); }
  void Seek(const Slice& /* target */) override { assert(false); }
  void SeekForPrev(const Slice& /* target */) override { assert(false); }
  void SeekToLast() override { assert(false); }

 private:
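  // Returns true if `iter`'s current start key is at or before the upper
  // bound (strictly before when the bound is exclusive), i.e. the fragment
  // still falls within the requested output range.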
  bool BeforeEndKey(const TruncatedRangeDelIterator* iter) const {
    if (upper_bound_ == nullptr) {
      return true;
    }
    int cmp = icmp_->user_comparator()->Compare(iter->start_key().user_key,
                                                *upper_bound_);
    return upper_bound_inclusive_ ? cmp <= 0 : cmp < 0;
  }

  const InternalKeyComparator* icmp_;
  const Slice* lower_bound_;
  const Slice* upper_bound_;
  bool upper_bound_inclusive_;
  BinaryHeap<TruncatedRangeDelIterator*, StartKeyMinComparator> heap_;
  std::vector<TruncatedRangeDelIterator*> children_;

  mutable InternalKey cur_start_key_;
};

}  // namespace
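
// Builds a single fragmented tombstone iterator covering every file added via
// AddTombstones(), restricted to [*lower_bound, *upper_bound] with the upper
// bound treated as inclusive or exclusive per `upper_bound_inclusive`.
// Callers typically use the result to populate an output file's range
// tombstone meta-block.
//
// A minimal usage sketch (assuming a populated aggregator `range_del_agg` and
// output-file user-key bounds `start` and `end`; the names are illustrative
// only):
//
//   auto it = range_del_agg->NewIterator(&start, &end,
//                                        true /* upper_bound_inclusive */);
//   for (it->SeekToFirst(); it->Valid(); it->Next()) {
//     // it->start_key(), it->end_key() and it->seq() describe one fragment.
//   }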
std::unique_ptr<FragmentedRangeTombstoneIterator>
CompactionRangeDelAggregator::NewIterator(const Slice* lower_bound,
                                          const Slice* upper_bound,
                                          bool upper_bound_inclusive) {
  InvalidateRangeDelMapPositions();
  std::unique_ptr<TruncatedRangeDelMergingIter> merging_iter(
      new TruncatedRangeDelMergingIter(icmp_, lower_bound, upper_bound,
                                       upper_bound_inclusive, parent_iters_));
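
  // Re-fragment the merged tombstones so overlapping ranges from different
  // files become non-overlapping pieces; in for_compaction mode only the
  // newest tombstone per snapshot stripe is kept for each piece.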
  auto fragmented_tombstone_list =
      std::make_shared<FragmentedRangeTombstoneList>(
          std::move(merging_iter), *icmp_, true /* for_compaction */,
          *snapshots_);

  return std::unique_ptr<FragmentedRangeTombstoneIterator>(
      new FragmentedRangeTombstoneIterator(
          fragmented_tombstone_list, *icmp_,
          kMaxSequenceNumber /* upper_bound */));
}

}  // namespace ROCKSDB_NAMESPACE