// Copyright (c) 2011-present, Facebook, Inc. All rights reserved.
// This source code is licensed under both the GPLv2 (found in the
// COPYING file in the root directory) and Apache 2.0 License
// (found in the LICENSE.Apache file in the root directory).
//
// Copyright (c) 2011 The LevelDB Authors. All rights reserved.
// Use of this source code is governed by a BSD-style license that can be
// found in the LICENSE file. See the AUTHORS file for names of contributors.

#include "db/version_edit_handler.h"

#include <cinttypes>
#include <sstream>

#include "db/blob/blob_file_reader.h"
#include "db/blob/blob_source.h"
#include "db/version_edit.h"
#include "logging/logging.h"
#include "monitoring/persistent_stats_history.h"
#include "util/udt_util.h"

namespace ROCKSDB_NAMESPACE {

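// Reads MANIFEST records from `reader` one by one, decodes each into a
// VersionEdit, and applies it. Edits that belong to an atomic group are
// buffered in `read_buffer_` and replayed together once the group is
// complete. After the loop, CheckIterationResult() finalizes recovery, and a
// Corruption status is extended with the MANIFEST file name.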
void VersionEditHandlerBase::Iterate(log::Reader& reader,
                                     Status* log_read_status) {
  Slice record;
  std::string scratch;
  assert(log_read_status);
  assert(log_read_status->ok());

  [[maybe_unused]] size_t recovered_edits = 0;
  Status s = Initialize();
  while (reader.LastRecordEnd() < max_manifest_read_size_ && s.ok() &&
         reader.ReadRecord(&record, &scratch) && log_read_status->ok()) {
    VersionEdit edit;
    s = edit.DecodeFrom(record);
    if (!s.ok()) {
      break;
    }

    s = read_buffer_.AddEdit(&edit);
    if (!s.ok()) {
      break;
    }
    ColumnFamilyData* cfd = nullptr;
    if (edit.IsInAtomicGroup()) {
      if (read_buffer_.IsFull()) {
        for (auto& e : read_buffer_.replay_buffer()) {
          s = ApplyVersionEdit(e, &cfd);
          if (!s.ok()) {
            break;
          }
          ++recovered_edits;
        }
        if (!s.ok()) {
          break;
        }
        read_buffer_.Clear();
      }
    } else {
      s = ApplyVersionEdit(edit, &cfd);
      if (s.ok()) {
        ++recovered_edits;
      }
    }
  }
  if (!log_read_status->ok()) {
    s = *log_read_status;
  }

  CheckIterationResult(reader, &s);

  if (!s.ok()) {
    if (s.IsCorruption()) {
      // When we find a Corruption error, something is wrong with the
      // underlying file. In this case we want to report the filename, so we
      // append the filename to the Corruption message.
      assert(reader.file());

      // Build a new error message.
      std::stringstream message;
      // Append the previous dynamic state message.
      const char* state = s.getState();
      if (state != nullptr) {
        message << state;
        message << ' ';
      }
      // Append the filename to the corruption message.
      message << " The file " << reader.file()->file_name()
              << " may be corrupted.";
      // Overwrite the status with the extended status.
      s = Status(s.code(), s.subcode(), s.severity(), message.str());
    }
    status_ = s;
  }
  TEST_SYNC_POINT_CALLBACK("VersionEditHandlerBase::Iterate:Finish",
                           &recovered_edits);
}

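// Tracks the set of live column families by name: a column-family-add edit
// records the name under its ID, and a column-family-drop edit removes it.
// Duplicate adds or drops of unknown families are reported as Corruption.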
Status ListColumnFamiliesHandler::ApplyVersionEdit(
    VersionEdit& edit, ColumnFamilyData** /*unused*/) {
  Status s;
  uint32_t cf_id = edit.GetColumnFamily();
  if (edit.IsColumnFamilyAdd()) {
    if (column_family_names_.find(cf_id) != column_family_names_.end()) {
      s = Status::Corruption("Manifest adding the same column family twice");
    } else {
      column_family_names_.insert({cf_id, edit.GetColumnFamilyName()});
    }
  } else if (edit.IsColumnFamilyDrop()) {
    if (column_family_names_.find(cf_id) == column_family_names_.end()) {
      s = Status::Corruption("Manifest - dropping non-existing column family");
    } else {
      column_family_names_.erase(cf_id);
    }
  }
  return s;
}

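// Maintains the file checksum list as edits are replayed: checksums of
// deleted files are removed, and checksums of newly added table files and
// blob files are inserted. Blob files without a recorded checksum are
// registered with the "unknown" checksum placeholders.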
Status FileChecksumRetriever::ApplyVersionEdit(VersionEdit& edit,
                                               ColumnFamilyData** /*unused*/) {
  for (const auto& deleted_file : edit.GetDeletedFiles()) {
    Status s = file_checksum_list_.RemoveOneFileChecksum(deleted_file.second);
    if (!s.ok()) {
      return s;
    }
  }
  for (const auto& new_file : edit.GetNewFiles()) {
    Status s = file_checksum_list_.InsertOneFileChecksum(
        new_file.second.fd.GetNumber(), new_file.second.file_checksum,
        new_file.second.file_checksum_func_name);
    if (!s.ok()) {
      return s;
    }
  }
  for (const auto& new_blob_file : edit.GetBlobFileAdditions()) {
    std::string checksum_value = new_blob_file.GetChecksumValue();
    std::string checksum_method = new_blob_file.GetChecksumMethod();
    assert(checksum_value.empty() == checksum_method.empty());
    if (checksum_method.empty()) {
      checksum_value = kUnknownFileChecksum;
      checksum_method = kUnknownFileChecksumFuncName;
    }
    Status s = file_checksum_list_.InsertOneFileChecksum(
        new_blob_file.GetBlobFileNumber(), checksum_value, checksum_method);
    if (!s.ok()) {
      return s;
    }
  }
  return Status::OK();
}

VersionEditHandler::VersionEditHandler(
    bool read_only, std::vector<ColumnFamilyDescriptor> column_families,
    VersionSet* version_set, bool track_missing_files,
    bool no_error_if_files_missing, const std::shared_ptr<IOTracer>& io_tracer,
    const ReadOptions& read_options, bool skip_load_table_files,
    EpochNumberRequirement epoch_number_requirement)
    : VersionEditHandlerBase(read_options),
      read_only_(read_only),
      column_families_(std::move(column_families)),
      version_set_(version_set),
      track_missing_files_(track_missing_files),
      no_error_if_files_missing_(no_error_if_files_missing),
      io_tracer_(io_tracer),
      skip_load_table_files_(skip_load_table_files),
      initialized_(false),
      epoch_number_requirement_(epoch_number_requirement) {
  assert(version_set_ != nullptr);
}

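// Lazily sets up the handler on the first call: builds the name-to-options
// map from the supplied column family descriptors, requires that the default
// column family is among them, and creates the default column family so that
// subsequent edits can be applied.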
Status VersionEditHandler::Initialize() {
  Status s;
  if (!initialized_) {
    for (const auto& cf_desc : column_families_) {
      name_to_options_.emplace(cf_desc.name, cf_desc.options);
    }
    auto default_cf_iter = name_to_options_.find(kDefaultColumnFamilyName);
    if (default_cf_iter == name_to_options_.end()) {
      s = Status::InvalidArgument("Default column family not specified");
    }
    if (s.ok()) {
      VersionEdit default_cf_edit;
      default_cf_edit.AddColumnFamily(kDefaultColumnFamilyName);
      default_cf_edit.SetColumnFamily(0);
      ColumnFamilyData* cfd =
          CreateCfAndInit(default_cf_iter->second, default_cf_edit);
      assert(cfd != nullptr);
#ifdef NDEBUG
      (void)cfd;
#endif
      initialized_ = true;
    }
  }
  return s;
}

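// Dispatches a single VersionEdit to the handler for its type (column family
// add/drop, WAL addition/deletion, or a regular file-level edit) and, on
// success, records bookkeeping information from the edit via
// ExtractInfoFromVersionEdit().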
Status VersionEditHandler::ApplyVersionEdit(VersionEdit& edit,
                                            ColumnFamilyData** cfd) {
  Status s;
  if (edit.IsColumnFamilyAdd()) {
    s = OnColumnFamilyAdd(edit, cfd);
  } else if (edit.IsColumnFamilyDrop()) {
    s = OnColumnFamilyDrop(edit, cfd);
  } else if (edit.IsWalAddition()) {
    s = OnWalAddition(edit);
  } else if (edit.IsWalDeletion()) {
    s = OnWalDeletion(edit);
  } else {
    s = OnNonCfOperation(edit, cfd);
  }
  if (s.ok()) {
    assert(cfd != nullptr);
    s = ExtractInfoFromVersionEdit(*cfd, edit);
  }
  return s;
}

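// Handles a column-family-add record. A duplicate add is Corruption. If the
// user did not supply options for the column family (and it is not the
// implicitly created persistent stats column family), it is remembered in
// column_families_not_found_; otherwise the column family is created and
// initialized.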
Status VersionEditHandler::OnColumnFamilyAdd(VersionEdit& edit,
                                             ColumnFamilyData** cfd) {
  bool cf_in_not_found = false;
  bool cf_in_builders = false;
  CheckColumnFamilyId(edit, &cf_in_not_found, &cf_in_builders);

  assert(cfd != nullptr);
  *cfd = nullptr;
  const std::string& cf_name = edit.GetColumnFamilyName();
  Status s;
  if (cf_in_builders || cf_in_not_found) {
    s = Status::Corruption("MANIFEST adding the same column family twice: " +
                           cf_name);
  }
  if (s.ok()) {
    auto cf_options = name_to_options_.find(cf_name);
    // implicitly add persistent_stats column family without requiring user
    // to specify
    ColumnFamilyData* tmp_cfd = nullptr;
    bool is_persistent_stats_column_family =
        cf_name.compare(kPersistentStatsColumnFamilyName) == 0;
    if (cf_options == name_to_options_.end() &&
        !is_persistent_stats_column_family) {
      column_families_not_found_.emplace(edit.GetColumnFamily(), cf_name);
    } else {
      if (is_persistent_stats_column_family) {
        ColumnFamilyOptions cfo;
        OptimizeForPersistentStats(&cfo);
        tmp_cfd = CreateCfAndInit(cfo, edit);
      } else {
        tmp_cfd = CreateCfAndInit(cf_options->second, edit);
      }
      *cfd = tmp_cfd;
    }
  }
  return s;
}

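// Handles a column-family-drop record: tears down the column family if it
// was created by this handler, forgets it if it was only recorded as "not
// found", and reports Corruption if it was never added.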
Status VersionEditHandler::OnColumnFamilyDrop(VersionEdit& edit,
                                              ColumnFamilyData** cfd) {
  bool cf_in_not_found = false;
  bool cf_in_builders = false;
  CheckColumnFamilyId(edit, &cf_in_not_found, &cf_in_builders);

  assert(cfd != nullptr);
  *cfd = nullptr;
  ColumnFamilyData* tmp_cfd = nullptr;
  Status s;
  if (cf_in_builders) {
    tmp_cfd = DestroyCfAndCleanup(edit);
  } else if (cf_in_not_found) {
    column_families_not_found_.erase(edit.GetColumnFamily());
  } else {
    s = Status::Corruption("MANIFEST - dropping non-existing column family");
  }
  *cfd = tmp_cfd;
  return s;
}

Status VersionEditHandler::OnWalAddition(VersionEdit& edit) {
  assert(edit.IsWalAddition());
  return version_set_->wals_.AddWals(edit.GetWalAdditions());
}

Status VersionEditHandler::OnWalDeletion(VersionEdit& edit) {
  assert(edit.IsWalDeletion());
  return version_set_->wals_.DeleteWalsBefore(
      edit.GetWalDeletion().GetLogNumber());
}

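// Handles edits that modify files of an existing column family. Edits for
// column families the user did not open are skipped; edits referencing a
// completely unknown column family are Corruption. File boundaries are
// handled before MaybeCreateVersion() (see the comment in the body), and the
// edit is then applied to the column family's version builder.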
Status VersionEditHandler::OnNonCfOperation(VersionEdit& edit,
                                            ColumnFamilyData** cfd) {
  bool cf_in_not_found = false;
  bool cf_in_builders = false;
  CheckColumnFamilyId(edit, &cf_in_not_found, &cf_in_builders);

  assert(cfd != nullptr);
  *cfd = nullptr;
  Status s;
  if (!cf_in_not_found) {
    if (!cf_in_builders) {
      s = Status::Corruption(
          "MANIFEST record referencing unknown column family");
    }
    ColumnFamilyData* tmp_cfd = nullptr;
    if (s.ok()) {
      auto builder_iter = builders_.find(edit.GetColumnFamily());
      assert(builder_iter != builders_.end());
      tmp_cfd = version_set_->GetColumnFamilySet()->GetColumnFamily(
          edit.GetColumnFamily());
      assert(tmp_cfd != nullptr);
      // It's important to handle file boundaries before `MaybeCreateVersion`
      // because `VersionEditHandlerPointInTime::MaybeCreateVersion` does
      // `FileMetaData` verification that involves the file boundaries.
      // All `VersionEditHandlerBase` subclasses that need to deal with
      // `FileMetaData` for new files are also subclasses of
      // `VersionEditHandler`, so it's sufficient to do the file boundaries
      // handling in this method.
      s = MaybeHandleFileBoundariesForNewFiles(edit, tmp_cfd);
      if (!s.ok()) {
        return s;
      }
      s = MaybeCreateVersion(edit, tmp_cfd, /*force_create_version=*/false);
      if (s.ok()) {
        s = builder_iter->second->version_builder()->Apply(&edit);
      }
    }
    *cfd = tmp_cfd;
  }
  return s;
}

// TODO maybe cache the computation result
bool VersionEditHandler::HasMissingFiles() const {
  bool ret = false;
  for (const auto& elem : cf_to_missing_files_) {
    const auto& missing_files = elem.second;
    if (!missing_files.empty()) {
      ret = true;
      break;
    }
  }
  if (!ret) {
    for (const auto& elem : cf_to_missing_blob_files_high_) {
      if (elem.second != kInvalidBlobFileNumber) {
        ret = true;
        break;
      }
    }
  }
  return ret;
}

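// Classifies the column family referenced by `edit`: whether it was added in
// the MANIFEST without user-supplied options (`cf_in_not_found`) or is being
// tracked with a version builder (`cf_in_builders`). The two are mutually
// exclusive.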
void VersionEditHandler::CheckColumnFamilyId(const VersionEdit& edit,
                                             bool* cf_in_not_found,
                                             bool* cf_in_builders) const {
  assert(cf_in_not_found != nullptr);
  assert(cf_in_builders != nullptr);
  // Not found means that user didn't supply that column
  // family option AND we encountered column family add
  // record. Once we encounter column family drop record,
  // we will delete the column family from
  // column_families_not_found.
  uint32_t cf_id = edit.GetColumnFamily();
  bool in_not_found = column_families_not_found_.find(cf_id) !=
                      column_families_not_found_.end();
  // in builders means that user supplied that column family
  // option AND that we encountered column family add record
  bool in_builders = builders_.find(cf_id) != builders_.end();
  // They cannot both be true
  assert(!(in_not_found && in_builders));
  *cf_in_not_found = in_not_found;
  *cf_in_builders = in_builders;
}

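// Finalizes recovery after all MANIFEST records have been iterated: verifies
// that required entries (log number, next file number, last sequence) were
// seen, reports column families present in the MANIFEST but not opened (when
// all column families must be opened), updates the VersionSet's counters,
// loads table files, and creates a final Version for each remaining column
// family.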
void VersionEditHandler::CheckIterationResult(const log::Reader& reader,
                                              Status* s) {
  assert(s != nullptr);
  if (!s->ok()) {
    // Do nothing here.
  } else if (!version_edit_params_.HasLogNumber() ||
             !version_edit_params_.HasNextFile() ||
             !version_edit_params_.HasLastSequence()) {
    std::string msg("no ");
    if (!version_edit_params_.HasLogNumber()) {
      msg.append("log_file_number, ");
    }
    if (!version_edit_params_.HasNextFile()) {
      msg.append("next_file_number, ");
    }
    if (!version_edit_params_.HasLastSequence()) {
      msg.append("last_sequence, ");
    }
    msg = msg.substr(0, msg.size() - 2);
    msg.append(" entry in MANIFEST");
    *s = Status::Corruption(msg);
  }
  // There were some column families in the MANIFEST that weren't specified
  // in the argument. This is OK in read_only mode
  if (s->ok() && MustOpenAllColumnFamilies() &&
      !column_families_not_found_.empty()) {
    std::string msg;
    for (const auto& cf : column_families_not_found_) {
      msg.append(", ");
      msg.append(cf.second);
    }
    msg = msg.substr(2);
    *s = Status::InvalidArgument("Column families not opened: " + msg);
  }
  if (s->ok()) {
    version_set_->GetColumnFamilySet()->UpdateMaxColumnFamily(
        version_edit_params_.GetMaxColumnFamily());
    version_set_->MarkMinLogNumberToKeep(
        version_edit_params_.GetMinLogNumberToKeep());
    version_set_->MarkFileNumberUsed(version_edit_params_.GetPrevLogNumber());
    version_set_->MarkFileNumberUsed(version_edit_params_.GetLogNumber());
    for (auto* cfd : *(version_set_->GetColumnFamilySet())) {
      if (cfd->IsDropped()) {
        continue;
      }
      auto builder_iter = builders_.find(cfd->GetID());
      assert(builder_iter != builders_.end());
      auto* builder = builder_iter->second->version_builder();
      if (!builder->CheckConsistencyForNumLevels()) {
        *s = Status::InvalidArgument(
            "db has more levels than options.num_levels");
        break;
      }
    }
  }
  if (s->ok()) {
    for (auto* cfd : *(version_set_->GetColumnFamilySet())) {
      if (cfd->IsDropped()) {
        continue;
      }
      if (read_only_) {
        cfd->table_cache()->SetTablesAreImmortal();
      }
      *s = LoadTables(cfd, /*prefetch_index_and_filter_in_cache=*/false,
                      /*is_initial_load=*/true);
      if (!s->ok()) {
        // If s is IOError::PathNotFound, then we mark the db as corrupted.
        if (s->IsPathNotFound()) {
          *s = Status::Corruption("Corruption: " + s->ToString());
        }
        break;
      }
    }
  }

  if (s->ok()) {
    for (auto* cfd : *(version_set_->column_family_set_)) {
      if (cfd->IsDropped()) {
        continue;
      }
      assert(cfd->initialized());
      VersionEdit edit;
      *s = MaybeCreateVersion(edit, cfd, /*force_create_version=*/true);
      if (!s->ok()) {
        break;
      }
    }
  }
  if (s->ok()) {
    version_set_->manifest_file_size_ = reader.GetReadOffset();
    assert(version_set_->manifest_file_size_ > 0);
    version_set_->next_file_number_.store(version_edit_params_.GetNextFile() +
                                          1);
    SequenceNumber last_seq = version_edit_params_.GetLastSequence();
    assert(last_seq != kMaxSequenceNumber);
    if (last_seq != kMaxSequenceNumber &&
        last_seq > version_set_->last_allocated_sequence_.load()) {
      version_set_->last_allocated_sequence_.store(last_seq);
    }
    if (last_seq != kMaxSequenceNumber &&
        last_seq > version_set_->last_published_sequence_.load()) {
      version_set_->last_published_sequence_.store(last_seq);
    }
    if (last_seq != kMaxSequenceNumber &&
        last_seq > version_set_->last_sequence_.load()) {
      version_set_->last_sequence_.store(last_seq);
    }
    if (last_seq != kMaxSequenceNumber &&
        last_seq > version_set_->descriptor_last_sequence_) {
      // This is the maximum last sequence of all `VersionEdit`s iterated. It
      // may be greater than the maximum `largest_seqno` of all files in case
      // the newest data referred to by the MANIFEST has been dropped or had
      // its sequence number zeroed through compaction.
      version_set_->descriptor_last_sequence_ = last_seq;
    }
    version_set_->prev_log_number_ = version_edit_params_.GetPrevLogNumber();
  }
}

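// Creates a ColumnFamilyData for the column family described by `edit`,
// marks it initialized, and registers a version builder for it. When missing
// files are tracked, per-CF bookkeeping for missing table and blob files is
// also initialized.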
ColumnFamilyData* VersionEditHandler::CreateCfAndInit(
    const ColumnFamilyOptions& cf_options, const VersionEdit& edit) {
  uint32_t cf_id = edit.GetColumnFamily();
  ColumnFamilyData* cfd =
      version_set_->CreateColumnFamily(cf_options, read_options_, &edit);
  assert(cfd != nullptr);
  cfd->set_initialized();
  assert(builders_.find(cf_id) == builders_.end());
  builders_.emplace(cf_id,
                    VersionBuilderUPtr(new BaseReferencedVersionBuilder(cfd)));
  if (track_missing_files_) {
    cf_to_missing_files_.emplace(cf_id, std::unordered_set<uint64_t>());
    cf_to_missing_blob_files_high_.emplace(cf_id, kInvalidBlobFileNumber);
  }
  return cfd;
}

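// Undoes CreateCfAndInit() for a dropped column family: removes its version
// builder and missing-file bookkeeping, marks the ColumnFamilyData dropped,
// and releases this handler's reference to it. Always returns nullptr.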
ColumnFamilyData* VersionEditHandler::DestroyCfAndCleanup(
    const VersionEdit& edit) {
  uint32_t cf_id = edit.GetColumnFamily();
  auto builder_iter = builders_.find(cf_id);
  assert(builder_iter != builders_.end());
  builders_.erase(builder_iter);
  if (track_missing_files_) {
    auto missing_files_iter = cf_to_missing_files_.find(cf_id);
    assert(missing_files_iter != cf_to_missing_files_.end());
    cf_to_missing_files_.erase(missing_files_iter);

    auto missing_blob_files_high_iter =
        cf_to_missing_blob_files_high_.find(cf_id);
    assert(missing_blob_files_high_iter !=
           cf_to_missing_blob_files_high_.end());
    cf_to_missing_blob_files_high_.erase(missing_blob_files_high_iter);
  }
  ColumnFamilyData* ret =
      version_set_->GetColumnFamilySet()->GetColumnFamily(cf_id);
  assert(ret != nullptr);
  ret->SetDropped();
  ret->UnrefAndTryDelete();
  ret = nullptr;
  return ret;
}

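// In the base handler a new Version is only built when `force_create_version`
// is true (i.e. at the end of recovery): the column family's accumulated
// version builder state is saved into a fresh Version, which is then prepared
// and appended to the VersionSet. Subclasses such as
// VersionEditHandlerPointInTime override this with different behavior.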
Status VersionEditHandler::MaybeCreateVersion(const VersionEdit& /*edit*/,
                                              ColumnFamilyData* cfd,
                                              bool force_create_version) {
  assert(cfd->initialized());
  Status s;
  if (force_create_version) {
    auto builder_iter = builders_.find(cfd->GetID());
    assert(builder_iter != builders_.end());
    auto* builder = builder_iter->second->version_builder();
    auto* v = new Version(cfd, version_set_, version_set_->file_options_,
                          *cfd->GetLatestMutableCFOptions(), io_tracer_,
                          version_set_->current_version_number_++,
                          epoch_number_requirement_);
    s = builder->SaveTo(v->storage_info());
    if (s.ok()) {
      // Install new version
      v->PrepareAppend(
          *cfd->GetLatestMutableCFOptions(), read_options_,
          !(version_set_->db_options_->skip_stats_update_on_db_open));
      version_set_->AppendVersion(cfd, v);
    } else {
      delete v;
    }
  }
  return s;
}
Status VersionEditHandler::LoadTables(ColumnFamilyData* cfd,
                                      bool prefetch_index_and_filter_in_cache,
                                      bool is_initial_load) {
  bool skip_load_table_files = skip_load_table_files_;
  TEST_SYNC_POINT_CALLBACK(
      "VersionEditHandler::LoadTables:skip_load_table_files",
      &skip_load_table_files);
  if (skip_load_table_files) {
    return Status::OK();
  }
  assert(cfd != nullptr);
  assert(!cfd->IsDropped());
  auto builder_iter = builders_.find(cfd->GetID());
  assert(builder_iter != builders_.end());
  assert(builder_iter->second != nullptr);
  VersionBuilder* builder = builder_iter->second->version_builder();
  assert(builder);
  const MutableCFOptions* moptions = cfd->GetLatestMutableCFOptions();
  Status s = builder->LoadTableHandlers(
      cfd->internal_stats(),
      version_set_->db_options_->max_file_opening_threads,
      prefetch_index_and_filter_in_cache, is_initial_load,
      moptions->prefix_extractor, MaxFileSizeForL0MetaPin(*moptions),
      read_options_, moptions->block_protection_bytes_per_key);
  if ((s.IsPathNotFound() || s.IsCorruption()) && no_error_if_files_missing_) {
    s = Status::OK();
  }
  if (!s.ok() && !version_set_->db_options_->paranoid_checks) {
    s = Status::OK();
  }
  return s;
}
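// Illustrative note (an editorial sketch, not part of the original source):
// the error handling above amounts to the following policy, shown with
// hypothetical parameter names:
//
//   Status Tolerated(const Status& s, bool ignore_missing_files,
//                    bool paranoid_checks) {
//     if ((s.IsPathNotFound() || s.IsCorruption()) && ignore_missing_files) {
//       return Status::OK();  // best-effort recovery: skip missing/corrupt files
//     }
//     if (!s.ok() && !paranoid_checks) {
//       return Status::OK();  // non-paranoid mode swallows remaining errors
//     }
//     return s;
//   }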
Status VersionEditHandler::ExtractInfoFromVersionEdit(ColumnFamilyData* cfd,
                                                      const VersionEdit& edit) {
  Status s;
  if (edit.HasDbId()) {
    version_set_->db_id_ = edit.GetDbId();
    version_edit_params_.SetDBId(edit.GetDbId());
  }
  if (cfd != nullptr) {
    if (edit.HasLogNumber()) {
      if (cfd->GetLogNumber() > edit.GetLogNumber()) {
        ROCKS_LOG_WARN(
            version_set_->db_options()->info_log,
            "MANIFEST corruption detected, but ignored - Log numbers in "
            "records NOT monotonically increasing");
      } else {
        cfd->SetLogNumber(edit.GetLogNumber());
        version_edit_params_.SetLogNumber(edit.GetLogNumber());
      }
    }
    if (edit.HasComparatorName()) {
      bool mark_sst_files_has_no_udt = false;
      // If `persist_user_defined_timestamps` flag is recorded in manifest, it
      // is guaranteed to be in the same VersionEdit as comparator. Otherwise,
      // it's not recorded and it should have default value true.
      s = ValidateUserDefinedTimestampsOptions(
          cfd->user_comparator(), edit.GetComparatorName(),
          cfd->ioptions()->persist_user_defined_timestamps,
          edit.GetPersistUserDefinedTimestamps(), &mark_sst_files_has_no_udt);
      if (!s.ok() && cf_to_cmp_names_) {
        cf_to_cmp_names_->emplace(cfd->GetID(), edit.GetComparatorName());
      }
      if (mark_sst_files_has_no_udt) {
        cfds_to_mark_no_udt_.insert(cfd->GetID());
      }
    }
    if (edit.HasFullHistoryTsLow()) {
      const std::string& new_ts = edit.GetFullHistoryTsLow();
      cfd->SetFullHistoryTsLow(new_ts);
    }
  }

  if (s.ok()) {
    if (edit.HasPrevLogNumber()) {
      version_edit_params_.SetPrevLogNumber(edit.GetPrevLogNumber());
    }
    if (edit.HasNextFile()) {
      version_edit_params_.SetNextFile(edit.GetNextFile());
    }
    if (edit.HasMaxColumnFamily()) {
      version_edit_params_.SetMaxColumnFamily(edit.GetMaxColumnFamily());
    }
    if (edit.HasMinLogNumberToKeep()) {
      version_edit_params_.SetMinLogNumberToKeep(
          std::max(version_edit_params_.GetMinLogNumberToKeep(),
                   edit.GetMinLogNumberToKeep()));
    }
    if (edit.HasLastSequence()) {
      // `VersionEdit::last_sequence_`s are assumed to be non-decreasing. This
      // is legacy behavior that cannot change without breaking downgrade
      // compatibility.
      assert(!version_edit_params_.HasLastSequence() ||
             version_edit_params_.GetLastSequence() <= edit.GetLastSequence());
      version_edit_params_.SetLastSequence(edit.GetLastSequence());
    }
    if (!version_edit_params_.HasPrevLogNumber()) {
      version_edit_params_.SetPrevLogNumber(0);
    }
  }
  return s;
}
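// Illustrative summary (editorial, not part of the original source) of how the
// fields above accumulate across edits into version_edit_params_:
//   - db_id, prev_log_number, next_file_number and max_column_family: the most
//     recently recorded value wins.
//   - log_number: taken from the edit unless it would move backwards for the
//     column family, in which case the edit's value is logged and ignored.
//   - min_log_number_to_keep: only ever grows; it is folded in with
//     std::max(existing, edit.GetMinLogNumberToKeep()).
//   - last_sequence: assumed non-decreasing across edits (asserted above), so
//     keeping the latest value is equivalent to keeping the maximum.
//   - prev_log_number: defaults to 0 if no edit ever recorded one.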
Status VersionEditHandler::MaybeHandleFileBoundariesForNewFiles(
    VersionEdit& edit, const ColumnFamilyData* cfd) {
  if (edit.GetNewFiles().empty()) {
    return Status::OK();
  }
  auto ucmp = cfd->user_comparator();
  assert(ucmp);
  size_t ts_sz = ucmp->timestamp_size();
  if (ts_sz == 0) {
    return Status::OK();
  }

  VersionEdit::NewFiles& new_files = edit.GetMutableNewFiles();
  assert(!new_files.empty());
  // If true, enabling user-defined timestamp is detected for this column
  // family. All its existing SST files need to have the file boundaries handled
  // and their `persist_user_defined_timestamps` flag set to false regardless of
  // its existing value.
  bool mark_existing_ssts_with_no_udt =
      cfds_to_mark_no_udt_.find(cfd->GetID()) != cfds_to_mark_no_udt_.end();
  bool file_boundaries_need_handling = false;
  for (auto& new_file : new_files) {
    FileMetaData& meta = new_file.second;
    if (meta.user_defined_timestamps_persisted &&
        !mark_existing_ssts_with_no_udt) {
      // `FileMetaData.user_defined_timestamps_persisted` field is the value of
      // the flag `AdvancedColumnFamilyOptions.persist_user_defined_timestamps`
      // at the time when the SST file was created. As a result, all added SST
      // files in one `VersionEdit` should have the same value for it.
      if (file_boundaries_need_handling) {
        return Status::Corruption(
            "New files in one VersionEdit have different "
            "user_defined_timestamps_persisted values.");
      }
      break;
    }
    file_boundaries_need_handling = true;
    assert(!meta.user_defined_timestamps_persisted ||
           mark_existing_ssts_with_no_udt);
    if (mark_existing_ssts_with_no_udt) {
      meta.user_defined_timestamps_persisted = false;
    }
    std::string smallest_buf;
    std::string largest_buf;
    PadInternalKeyWithMinTimestamp(&smallest_buf, meta.smallest.Encode(),
                                   ts_sz);
    PadInternalKeyWithMinTimestamp(&largest_buf, meta.largest.Encode(), ts_sz);
    meta.smallest.DecodeFrom(smallest_buf);
    meta.largest.DecodeFrom(largest_buf);
  }
  return Status::OK();
}
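// Illustrative sketch (editorial, hypothetical helper; not RocksDB's real
// encoding): conceptually, the boundary handling above turns a key recorded
// without a timestamp into one that carries the minimum timestamp, so that a
// comparator expecting user-defined timestamps can still order the recovered
// file boundaries.
//
//   // Toy user-key layout: <user_key_bytes><timestamp_bytes>; in this toy
//   // example ts_sz zero bytes stand in for the minimum timestamp.
//   std::string PadUserKeyWithMinTimestamp(const std::string& user_key,
//                                          size_t ts_sz) {
//     std::string padded = user_key;
//     padded.append(ts_sz, '\0');
//     return padded;
//   }
//
// The real PadInternalKeyWithMinTimestamp() operates on internal keys, which
// end with an 8-byte sequence-number/type footer, so the minimum timestamp is
// spliced in ahead of that footer rather than appended at the very end.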
VersionEditHandlerPointInTime::VersionEditHandlerPointInTime(
    bool read_only, std::vector<ColumnFamilyDescriptor> column_families,
    VersionSet* version_set, const std::shared_ptr<IOTracer>& io_tracer,
    const ReadOptions& read_options,
    EpochNumberRequirement epoch_number_requirement)
    : VersionEditHandler(read_only, column_families, version_set,
                         /*track_missing_files=*/true,
                         /*no_error_if_files_missing=*/true, io_tracer,
                         read_options, epoch_number_requirement) {}
VersionEditHandlerPointInTime::~VersionEditHandlerPointInTime() {
  for (const auto& elem : versions_) {
    delete elem.second;
  }
  versions_.clear();
}
void VersionEditHandlerPointInTime::CheckIterationResult(
    const log::Reader& reader, Status* s) {
  VersionEditHandler::CheckIterationResult(reader, s);
  assert(s != nullptr);
  if (s->ok()) {
    for (auto* cfd : *(version_set_->column_family_set_)) {
      if (cfd->IsDropped()) {
        continue;
      }
      assert(cfd->initialized());
      auto v_iter = versions_.find(cfd->GetID());
      if (v_iter != versions_.end()) {
        assert(v_iter->second != nullptr);

        version_set_->AppendVersion(cfd, v_iter->second);
        versions_.erase(v_iter);
      }
    }
  } else {
    for (const auto& elem : versions_) {
      delete elem.second;
    }
    versions_.clear();
  }
}
ColumnFamilyData* VersionEditHandlerPointInTime::DestroyCfAndCleanup(
    const VersionEdit& edit) {
  ColumnFamilyData* cfd = VersionEditHandler::DestroyCfAndCleanup(edit);
  auto v_iter = versions_.find(edit.GetColumnFamily());
  if (v_iter != versions_.end()) {
    delete v_iter->second;
    versions_.erase(v_iter);
  }
  return cfd;
}
Status VersionEditHandlerPointInTime::MaybeCreateVersion(
    const VersionEdit& edit, ColumnFamilyData* cfd, bool force_create_version) {
  assert(cfd != nullptr);
  if (!force_create_version) {
    assert(edit.GetColumnFamily() == cfd->GetID());
  }
  auto missing_files_iter = cf_to_missing_files_.find(cfd->GetID());
  assert(missing_files_iter != cf_to_missing_files_.end());
  std::unordered_set<uint64_t>& missing_files = missing_files_iter->second;

  auto missing_blob_files_high_iter =
      cf_to_missing_blob_files_high_.find(cfd->GetID());
  assert(missing_blob_files_high_iter != cf_to_missing_blob_files_high_.end());
  const uint64_t prev_missing_blob_file_high =
      missing_blob_files_high_iter->second;

  VersionBuilder* builder = nullptr;

  if (prev_missing_blob_file_high != kInvalidBlobFileNumber) {
    auto builder_iter = builders_.find(cfd->GetID());
    assert(builder_iter != builders_.end());
    builder = builder_iter->second->version_builder();
    assert(builder != nullptr);
  }

  // At this point, we have not yet applied the new version edits read from the
  // MANIFEST. We check whether we have any missing table and blob files.
  const bool prev_has_missing_files =
      !missing_files.empty() ||
      (prev_missing_blob_file_high != kInvalidBlobFileNumber &&
       prev_missing_blob_file_high >= builder->GetMinOldestBlobFileNumber());

  for (const auto& file : edit.GetDeletedFiles()) {
    uint64_t file_num = file.second;
    auto fiter = missing_files.find(file_num);
    if (fiter != missing_files.end()) {
      missing_files.erase(fiter);
    }
  }

  assert(!cfd->ioptions()->cf_paths.empty());
  Status s;
  for (const auto& elem : edit.GetNewFiles()) {
    int level = elem.first;
    const FileMetaData& meta = elem.second;
    const FileDescriptor& fd = meta.fd;
    uint64_t file_num = fd.GetNumber();
    const std::string fpath =
        MakeTableFileName(cfd->ioptions()->cf_paths[0].path, file_num);
    s = VerifyFile(cfd, fpath, level, meta);
    if (s.IsPathNotFound() || s.IsNotFound() || s.IsCorruption()) {
      missing_files.insert(file_num);
      s = Status::OK();
    } else if (!s.ok()) {
      break;
    }
  }

  uint64_t missing_blob_file_num = prev_missing_blob_file_high;
  for (const auto& elem : edit.GetBlobFileAdditions()) {
    uint64_t file_num = elem.GetBlobFileNumber();
    s = VerifyBlobFile(cfd, file_num, elem);
    if (s.IsPathNotFound() || s.IsNotFound() || s.IsCorruption()) {
      missing_blob_file_num = std::max(missing_blob_file_num, file_num);
      s = Status::OK();
    } else if (!s.ok()) {
      break;
    }
  }

  bool has_missing_blob_files = false;
  if (missing_blob_file_num != kInvalidBlobFileNumber &&
      missing_blob_file_num >= prev_missing_blob_file_high) {
    missing_blob_files_high_iter->second = missing_blob_file_num;
    has_missing_blob_files = true;
  } else if (missing_blob_file_num < prev_missing_blob_file_high) {
    assert(false);
  }

  // We still have not applied the new version edit, but have tried to add new
  // table and blob files after verifying their presence and consistency.
  // Therefore, we know whether we will see new missing table and blob files
  // later after actually applying the version edit. We perform the check here
  // and record the result.
  const bool has_missing_files =
      !missing_files.empty() || has_missing_blob_files;

  bool missing_info = !version_edit_params_.HasLogNumber() ||
                      !version_edit_params_.HasNextFile() ||
                      !version_edit_params_.HasLastSequence();

  // Create a version before applying the edit. The version will represent the
  // state before applying the version edit.
  // A new version will be created if:
  // 1) no error has occurred so far, and
  // 2) log_number_, next_file_number_ and last_sequence_ are known, and
  // 3) any of the following:
  //   a) no missing file before, but will have missing file(s) after applying
  //      this version edit.
  //   b) no missing file after applying the version edit, and the caller
  //      explicitly requests that a new version be created.
  if (s.ok() && !missing_info &&
      ((has_missing_files && !prev_has_missing_files) ||
       (!has_missing_files && force_create_version))) {
    if (!builder) {
      auto builder_iter = builders_.find(cfd->GetID());
      assert(builder_iter != builders_.end());
      builder = builder_iter->second->version_builder();
      assert(builder);
    }

    const MutableCFOptions* cf_opts_ptr = cfd->GetLatestMutableCFOptions();
    auto* version = new Version(cfd, version_set_, version_set_->file_options_,
                                *cf_opts_ptr, io_tracer_,
                                version_set_->current_version_number_++,
                                epoch_number_requirement_);
    s = builder->LoadTableHandlers(
        cfd->internal_stats(),
        version_set_->db_options_->max_file_opening_threads, false, true,
        cf_opts_ptr->prefix_extractor, MaxFileSizeForL0MetaPin(*cf_opts_ptr),
        read_options_, cf_opts_ptr->block_protection_bytes_per_key);
    if (!s.ok()) {
      delete version;
      if (s.IsCorruption()) {
        s = Status::OK();
      }
      return s;
    }
    s = builder->SaveTo(version->storage_info());
    if (s.ok()) {
      version->PrepareAppend(
          *cfd->GetLatestMutableCFOptions(), read_options_,
          !version_set_->db_options_->skip_stats_update_on_db_open);
      auto v_iter = versions_.find(cfd->GetID());
      if (v_iter != versions_.end()) {
        delete v_iter->second;
        v_iter->second = version;
      } else {
        versions_.emplace(cfd->GetID(), version);
      }
    } else {
      delete version;
    }
  }
  return s;
}
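// Illustrative restatement (editorial, not part of the original source) of the
// condition under which the point-in-time handler materializes a Version above:
//
//   bool ShouldCreateVersion(bool ok, bool missing_info,
//                            bool prev_has_missing_files,
//                            bool has_missing_files,
//                            bool force_create_version) {
//     return ok && !missing_info &&
//            ((has_missing_files && !prev_has_missing_files) ||
//             (!has_missing_files && force_create_version));
//   }
//
// In words: a version is cut either right before the first edit that would
// reference a missing file, or when the caller explicitly asks for one and
// nothing is missing.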
Status VersionEditHandlerPointInTime::VerifyFile(ColumnFamilyData* cfd,
                                                 const std::string& fpath,
                                                 int level,
                                                 const FileMetaData& fmeta) {
  return version_set_->VerifyFileMetadata(read_options_, cfd, fpath, level,
                                          fmeta);
}
Status VersionEditHandlerPointInTime::VerifyBlobFile(
    ColumnFamilyData* cfd, uint64_t blob_file_num,
    const BlobFileAddition& blob_addition) {
  BlobSource* blob_source = cfd->blob_source();
  assert(blob_source);
  CacheHandleGuard<BlobFileReader> blob_file_reader;

  Status s = blob_source->GetBlobFileReader(read_options_, blob_file_num,
                                            &blob_file_reader);
  if (!s.ok()) {
    return s;
  }
  // TODO: verify checksum
  (void)blob_addition;
  return s;
}
Status VersionEditHandlerPointInTime::LoadTables(
    ColumnFamilyData* /*cfd*/, bool /*prefetch_index_and_filter_in_cache*/,
    bool /*is_initial_load*/) {
  return Status::OK();
}
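// Note (editorial): this override intentionally skips eagerly opening table
// files. For the point-in-time handler, table handlers are loaded in
// MaybeCreateVersion() via VersionBuilder::LoadTableHandlers() just before a
// version is installed, after the files' presence has been verified.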
Status ManifestTailer::Initialize() {
  if (Mode::kRecovery == mode_) {
    return VersionEditHandler::Initialize();
  }
  assert(Mode::kCatchUp == mode_);
  Status s;
  if (!initialized_) {
    ColumnFamilySet* cfd_set = version_set_->GetColumnFamilySet();
    assert(cfd_set);
    ColumnFamilyData* default_cfd = cfd_set->GetDefault();
    assert(default_cfd);
    auto builder_iter = builders_.find(default_cfd->GetID());
    assert(builder_iter != builders_.end());

    Version* dummy_version = default_cfd->dummy_versions();
    assert(dummy_version);
    Version* base_version = dummy_version->Next();
    assert(base_version);
    base_version->Ref();
    VersionBuilderUPtr new_builder(
        new BaseReferencedVersionBuilder(default_cfd, base_version));
    builder_iter->second = std::move(new_builder);

    initialized_ = true;
  }
  return s;
}
Status ManifestTailer::ApplyVersionEdit(VersionEdit& edit,
                                        ColumnFamilyData** cfd) {
  Status s = VersionEditHandler::ApplyVersionEdit(edit, cfd);
  if (s.ok()) {
    assert(cfd);
    if (*cfd) {
      cfds_changed_.insert(*cfd);
    }
  }
  return s;
}
Status ManifestTailer::OnColumnFamilyAdd(VersionEdit& edit,
                                         ColumnFamilyData** cfd) {
  if (Mode::kRecovery == mode_) {
    return VersionEditHandler::OnColumnFamilyAdd(edit, cfd);
  }
  assert(Mode::kCatchUp == mode_);
  ColumnFamilySet* cfd_set = version_set_->GetColumnFamilySet();
  assert(cfd_set);
  ColumnFamilyData* tmp_cfd = cfd_set->GetColumnFamily(edit.GetColumnFamily());
  assert(cfd);
  *cfd = tmp_cfd;
  if (!tmp_cfd) {
    // For now, ignore new column families created after Recover() succeeds.
    return Status::OK();
  }
  auto builder_iter = builders_.find(edit.GetColumnFamily());
  assert(builder_iter != builders_.end());

  Version* dummy_version = tmp_cfd->dummy_versions();
  assert(dummy_version);
  Version* base_version = dummy_version->Next();
  assert(base_version);
  base_version->Ref();
  VersionBuilderUPtr new_builder(
      new BaseReferencedVersionBuilder(tmp_cfd, base_version));
  builder_iter->second = std::move(new_builder);

#ifndef NDEBUG
  auto version_iter = versions_.find(edit.GetColumnFamily());
  assert(version_iter == versions_.end());
#endif  // !NDEBUG
  return Status::OK();
}
void ManifestTailer::CheckIterationResult(const log::Reader& reader,
                                          Status* s) {
  VersionEditHandlerPointInTime::CheckIterationResult(reader, s);
  assert(s);
  if (s->ok()) {
    if (Mode::kRecovery == mode_) {
      mode_ = Mode::kCatchUp;
    } else {
      assert(Mode::kCatchUp == mode_);
    }
  }
}
Status ManifestTailer::VerifyFile(ColumnFamilyData* cfd,
                                  const std::string& fpath, int level,
                                  const FileMetaData& fmeta) {
  Status s =
      VersionEditHandlerPointInTime::VerifyFile(cfd, fpath, level, fmeta);
  // TODO: Open file or create hard link to prevent the file from being
  // deleted.
  return s;
}
void DumpManifestHandler::CheckIterationResult(const log::Reader& reader,
                                               Status* s) {
  VersionEditHandler::CheckIterationResult(reader, s);
  if (!s->ok()) {
    fprintf(stdout, "%s\n", s->ToString().c_str());
    return;
  }
  assert(cf_to_cmp_names_);
  for (auto* cfd : *(version_set_->column_family_set_)) {
    fprintf(stdout,
            "--------------- Column family \"%s\" (ID %" PRIu32
            ") --------------\n",
            cfd->GetName().c_str(), cfd->GetID());
    fprintf(stdout, "log number: %" PRIu64 "\n", cfd->GetLogNumber());
    auto it = cf_to_cmp_names_->find(cfd->GetID());
    if (it != cf_to_cmp_names_->end()) {
      fprintf(stdout,
              "comparator: <%s>, but the comparator object is not available.\n",
              it->second.c_str());
    } else {
      fprintf(stdout, "comparator: %s\n", cfd->user_comparator()->Name());
    }
    assert(cfd->current());

    // Print out DebugStrings. Can include non-terminating null characters.
    fwrite(cfd->current()->DebugString(hex_).data(), sizeof(char),
           cfd->current()->DebugString(hex_).size(), stdout);
  }
  fprintf(stdout,
          "next_file_number %" PRIu64 " last_sequence %" PRIu64
          " prev_log_number %" PRIu64 " max_column_family %" PRIu32
|
|
|
" min_log_number_to_keep %" PRIu64 "\n",
|
2020-11-11 15:58:15 +00:00
|
|
|
version_set_->current_next_file_number(),
|
|
|
|
version_set_->LastSequence(), version_set_->prev_log_number(),
|
|
|
|
version_set_->column_family_set_->GetMaxColumnFamily(),
|
          version_set_->min_log_number_to_keep());
}

}  // namespace ROCKSDB_NAMESPACE

Commit annotation for the `min_log_number_to_keep` lines above:

Fix a race condition in WAL tracking causing DB open failure (#9715)
Summary:
There is a race condition if WAL tracking in the MANIFEST is enabled in a database that disables 2PC.
The race condition is between two background flush threads trying to install flush results in the MANIFEST.
Consider an example database with two column families: "default" (cfd0) and "cf1" (cfd1). Initially,
both column families have one mutable (active) memtable whose data is backed by 6.log.
1. Trigger a manual flush for "cf1", creating 7.log
2. Insert another key into "default" and trigger a flush for "default", creating 8.log
3. BgFlushThread1 finishes writing 9.sst
4. BgFlushThread2 finishes writing 10.sst
```
Time  BgFlushThread1                            BgFlushThread2
 |    mutex_.Lock()
 |    precompute min_wal_to_keep as 6
 |    mutex_.Unlock()
 |                                              mutex_.Lock()
 |                                              precompute min_wal_to_keep as 6
 |                                              join MANIFEST write queue and mutex_.Unlock()
 |    write to MANIFEST
 |    mutex_.Lock()
 |    cfd1->log_number = 7
 |    Signal bg_flush_2 and mutex_.Unlock()
 |                                              wake up and mutex_.Lock()
 |                                              cfd0->log_number = 8
 |                                              FindObsoleteFiles() with job_context->log_number == 7
 |                                              mutex_.Unlock()
 |                                              PurgeObsoleteFiles() deletes 6.log
 V
```
As shown above, BgFlushThread2 thinks that the min WAL to keep is 6.log because "cf1" has unflushed data in 6.log (cf1.log_number=6).
Similarly, BgFlushThread1 thinks that the min WAL to keep is also 6.log because "default" has unflushed data (default.log_number=6).
No WAL deletion will be written to the MANIFEST because 6 is equal to `versions_->wals_.min_wal_number_to_keep`
(see https://github.com/facebook/rocksdb/blob/7.1.fb/db/memtable_list.cc#L513:L514).
The bg flush thread that finishes last performs file purging. `job_context.log_number` evaluates to 7, i.e.
the min WAL that contains unflushed data, causing 6.log to be deleted. However, the MANIFEST still thinks 6.log should exist.
If you close the db at this point, you won't be able to re-open it if `track_and_verify_wal_in_manifest` is true.
We must handle the case of multiple bg flush threads, and it is difficult for one bg flush thread to know
the correct min WAL number until the other bg flush threads have finished committing to the MANIFEST and updating
`cfd::log_number`.
To fix this issue, we rename an existing variable `min_log_number_to_keep_2pc` to `min_log_number_to_keep`,
and use it to track WAL file deletion in non-2pc mode as well.
This variable is updated only 1) during recovery with the mutex held, or 2) in the MANIFEST write thread.
RocksDB will delete WALs below `min_log_number_to_keep`, although there may also be obsolete WALs above it.
Formally, we have the range [min_wal_to_keep, max_obsolete_wal]. During recovery, we
make sure that only WALs above max_obsolete_wal are checked and added back to `alive_log_files_`.
Pull Request resolved: https://github.com/facebook/rocksdb/pull/9715
Test Plan:
```
make check
```
Also ran the stress test below (with ASan) to make sure it completes successfully.
```
TEST_TMPDIR=/dev/shm/rocksdb OPT=-g ASAN_OPTIONS=disable_coredump=0 \
CRASH_TEST_EXT_ARGS=--compression_type=zstd SKIP_FORMAT_BUCK_CHECKS=1 \
make J=52 -j52 blackbox_asan_crash_test
```
Reviewed By: ltamasi
Differential Revision: D34984412
Pulled By: riversand963
fbshipit-source-id: c7b21a8d84751bb55ea79c9f387103d21b231005
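The following is a self-contained sketch, not RocksDB's actual implementation, of the core idea in the commit message above: a single, monotonically advanced minimum WAL number that is only moved forward by the MANIFEST write path, and that every purge decision consults instead of each flush thread trusting its own precomputed minimum. The names `WalTracker`, `AdvanceMinWalToKeep`, and `Deletable` are illustrative.

```
#include <atomic>
#include <cinttypes>
#include <cstdint>
#include <cstdio>
#include <vector>

class WalTracker {
 public:
  // Advance the shared minimum monotonically. In the scheme described above,
  // this is done only by the MANIFEST write path (or during recovery), so
  // updates are already serialized; the CAS loop just keeps it monotonic.
  void AdvanceMinWalToKeep(uint64_t candidate) {
    uint64_t cur = min_wal_to_keep_.load(std::memory_order_relaxed);
    while (candidate > cur &&
           !min_wal_to_keep_.compare_exchange_weak(cur, candidate)) {
      // compare_exchange_weak refreshes `cur` on failure; retry.
    }
  }

  // Any background thread deciding what to purge consults the shared minimum
  // instead of relying on its own locally precomputed value.
  std::vector<uint64_t> Deletable(const std::vector<uint64_t>& wal_numbers) const {
    const uint64_t keep_from = min_wal_to_keep_.load(std::memory_order_relaxed);
    std::vector<uint64_t> result;
    for (uint64_t n : wal_numbers) {
      if (n < keep_from) {
        result.push_back(n);
      }
    }
    return result;
  }

 private:
  std::atomic<uint64_t> min_wal_to_keep_{0};
};

int main() {
  WalTracker tracker;
  // The MANIFEST write path has only established that data up to 6.log is
  // still unflushed, so the shared minimum is 6: 6.log must be kept.
  tracker.AdvanceMinWalToKeep(6);
  // A flush thread whose local computation says "7 is the minimum WAL with
  // unflushed data" still may not delete 6.log.
  for (uint64_t n : tracker.Deletable({4, 5, 6, 7})) {
    std::printf("deletable: %" PRIu64 ".log\n", n);
  }
  return 0;
}
```

Compiled as a standalone program, this prints only 4.log and 5.log as deletable: 6.log survives even though a flush thread's local computation might claim 7 is the minimum WAL with unflushed data.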