Fix flush not being committed while writing manifest
Summary:
Fix a flush not being committed while writing the manifest, a recent bug introduced by D60075.
The issue:
# Options.max_background_flushes > 1
# Background thread A picks up a flush job, flushes, then commits to the manifest. (Note that the mutex is released before writing the manifest.)
# Background thread B picks up another flush job and flushes. When it gets to `MemTableList::InstallMemtableFlushResults`, it notices another thread is committing, so it quits.
# After the first commit, thread A doesn't double-check whether there are more flush results that need to be committed, leaving the second flush uncommitted.
Test Plan: run the test. Also verify that the new test hits a deadlock without the fix.
Reviewers: sdong, igor, lightmark
Reviewed By: lightmark
Subscribers: andrewkr, omegaga, dhruba, leveldb
Differential Revision: https://reviews.facebook.net/D60969
2016-07-21 17:10:41 +00:00
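A minimal sketch of the re-check loop the fix implies, using hypothetical helper names (this is not the actual `MemTableList::InstallMemtableFlushResults` code), assuming the manifest write happens with the mutex released:
```
// Illustrative only: the committing thread loops until no finished flush is
// left uncommitted, so results produced while it was writing the manifest
// (by a thread that bailed out) are still picked up.
while (true) {
  auto batch = PickFlushedMemtablesToCommit();  // hypothetical helper
  if (batch.empty()) {
    break;  // nothing left to commit
  }
  mutex->Unlock();
  Status s = WriteFlushResultsToManifest(batch);  // hypothetical helper
  mutex->Lock();
  if (!s.ok()) {
    break;
  }
  // Loop again: another flush may have finished meanwhile, and its thread may
  // have skipped committing because this thread was already doing so.
}
```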
// Copyright (c) 2011-present, Facebook, Inc. All rights reserved.
// This source code is licensed under both the GPLv2 (found in the
// COPYING file in the root directory) and Apache 2.0 License
// (found in the LICENSE.Apache file in the root directory).
//
// Copyright (c) 2011 The LevelDB Authors. All rights reserved.
// Use of this source code is governed by a BSD-style license that can be
// found in the LICENSE file. See the AUTHORS file for names of contributors.

#include <atomic>
#include <limits>

#include "db/db_impl/db_impl.h"
#include "db/db_test_util.h"
#include "env/mock_env.h"
#include "file/filename.h"
#include "port/port.h"
#include "port/stack_trace.h"
#include "rocksdb/utilities/transaction_db.h"
#include "test_util/sync_point.h"
#include "test_util/testutil.h"
#include "util/cast_util.h"
#include "util/mutexlock.h"
#include "utilities/fault_injection_env.h"
#include "utilities/fault_injection_fs.h"

namespace ROCKSDB_NAMESPACE {

Memtable "MemPurge" prototype (#8454)
Summary:
Implement an experimental feature called "MemPurge", which consists in purging "garbage" bytes out of a memtable and reuse the memtable struct instead of making it immutable and eventually flushing its content to storage.
The prototype is by default deactivated and is not intended for use. It is intended for correctness and validation testing. At the moment, the "MemPurge" feature can be switched on by using the `options.experimental_allow_mempurge` flag. For this early stage, when the allow_mempurge flag is set to `true`, all the flush operations will be rerouted to perform a MemPurge. This is a temporary design decision that will give us the time to explore meaningful heuristics to use MemPurge at the right time for relevant workloads . Moreover, the current MemPurge operation only supports `Puts`, `Deletes`, `DeleteRange` operations, and handles `Iterators` as well as `CompactionFilter`s that are invoked at flush time .
Three unit tests are added to `db_flush_test.cc` to test if MemPurge works correctly (and checks that the previously mentioned operations are fully supported thoroughly tested).
One noticeable design decision is the timing of the MemPurge operation in the memtable workflow: for this prototype, the mempurge happens when the memtable is switched (and usually made immutable). This is an inefficient process because it implies that the entirety of the MemPurge operation happens while holding the db_mutex. Future commits will make the MemPurge operation a background task (akin to the regular flush operation) and aim at drastically enhancing the performance of this operation. The MemPurge is also not fully "WAL-compatible" yet, but when the WAL is full, or when the regular MemPurge operation fails (or when the purged memtable still needs to be flushed), a regular flush operation takes place. Later commits will also correct these behaviors.
Pull Request resolved: https://github.com/facebook/rocksdb/pull/8454
Reviewed By: anand1976
Differential Revision: D29433971
Pulled By: bjlemaire
fbshipit-source-id: 6af48213554e35048a7e03816955100a80a26dc5
2021-07-02 12:22:03 +00:00
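// A minimal usage sketch for the flag described in the commit message above
// (illustrative only; the prototype is off by default and not meant for
// production use):
//
//   Options options;
//   options.create_if_missing = true;
//   options.experimental_allow_mempurge = true;  // reroute flushes to MemPurge
//   DB* db = nullptr;
//   Status s = DB::Open(options, "/tmp/mempurge_demo", &db);  // placeholder path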
// This is a static filter used for filtering
// kvs during the compaction process.
static std::string NEW_VALUE = "NewValue";

class DBFlushTest : public DBTestBase {
 public:
  DBFlushTest() : DBTestBase("db_flush_test", /*env_do_fsync=*/true) {}
};

class DBFlushDirectIOTest : public DBFlushTest,
                            public ::testing::WithParamInterface<bool> {
 public:
  DBFlushDirectIOTest() : DBFlushTest() {}
};

class DBAtomicFlushTest : public DBFlushTest,
                          public ::testing::WithParamInterface<bool> {
 public:
  DBAtomicFlushTest() : DBFlushTest() {}
};

// We had an issue where, when two background threads tried to flush at the
// same time, only one of them got committed. The test verifies that the
// issue is fixed.
TEST_F(DBFlushTest, FlushWhileWritingManifest) {
  Options options;
  options.disable_auto_compactions = true;
  options.max_background_flushes = 2;
  options.env = env_;
  Reopen(options);
  FlushOptions no_wait;
  no_wait.wait = false;
  no_wait.allow_write_stall = true;

  SyncPoint::GetInstance()->LoadDependency(
      {{"VersionSet::LogAndApply:WriteManifest",
        "DBFlushTest::FlushWhileWritingManifest:1"},
       {"MemTableList::TryInstallMemtableFlushResults:InProgress",
        "VersionSet::LogAndApply:WriteManifestDone"}});
  SyncPoint::GetInstance()->EnableProcessing();

  ASSERT_OK(Put("foo", "v"));
  ASSERT_OK(dbfull()->Flush(no_wait));
  TEST_SYNC_POINT("DBFlushTest::FlushWhileWritingManifest:1");
  ASSERT_OK(Put("bar", "v"));
  ASSERT_OK(dbfull()->Flush(no_wait));
  // If the issue is hit we will wait here forever.
  ASSERT_OK(dbfull()->TEST_WaitForFlushMemTable());
  ASSERT_EQ(2, TotalTableFiles());
}

// Disable this test temporarily on Travis as it fails intermittently.
// Github issue: #4151
TEST_F(DBFlushTest, SyncFail) {
  std::unique_ptr<FaultInjectionTestEnv> fault_injection_env(
      new FaultInjectionTestEnv(env_));
  Options options;
  options.disable_auto_compactions = true;
  options.env = fault_injection_env.get();

  SyncPoint::GetInstance()->LoadDependency(
      {{"DBFlushTest::SyncFail:1", "DBImpl::SyncClosedWals:Start"},
       {"DBImpl::SyncClosedWals:Failed", "DBFlushTest::SyncFail:2"}});
  SyncPoint::GetInstance()->EnableProcessing();

  CreateAndReopenWithCF({"pikachu"}, options);
  ASSERT_OK(Put("key", "value"));
  FlushOptions flush_options;
  flush_options.wait = false;
  ASSERT_OK(dbfull()->Flush(flush_options));
Skip deleted WALs during recovery
Summary:
This patch records the min log number to keep in the manifest while flushing SST files, so that it and any WAL older than it can be ignored during recovery. This is to avoid scenarios where there is a gap between the WAL files fed to the recovery procedure. The gap could happen due to, for example, out-of-order WAL deletion. Such a gap could cause problems in 2PC recovery, where the prepare and commit entries are placed into two separate WALs; a gap in the WALs could result in not processing the WAL with the commit entry and hence breaking the 2PC recovery logic.
Before this commit, for the 2PC case, we determined which log number to keep in FindObsoleteFiles(). We looked at the earliest logs with outstanding prepare entries, or prepare entries whose respective commit or abort are in the memtable. With this commit, the same calculation is done while we apply the SST flush. Just before installing the flushed file, we precompute the earliest log file to keep after the flush finishes using the same logic (but skipping the memtables just flushed), and record this information in the manifest entry for the new flushed SST file. This pre-computed value is also remembered in memory, and will later be used to determine whether a log file can be deleted. This value is unlikely to change until the next flush because the commit entry will stay in the memtable. (In WritePrepared, we could have removed the older log files as soon as all prepared entries are committed. That is not done yet anyway. Even if we did it, the only thing we lose with this new approach is earlier log deletion between two flushes, which is not guaranteed to happen anyway because the obsolete-file clean-up function is only executed after a flush or compaction.)
This min log number to keep is stored in the manifest using the safely-ignore customized field of the AddFile entry, in order to guarantee that a DB generated using a newer release can be opened by previous releases no older than 4.2.
Closes https://github.com/facebook/rocksdb/pull/3765
Differential Revision: D7747618
Pulled By: siying
fbshipit-source-id: d00c92105b4f83852e9754a1b70d6b64cb590729
2018-05-03 22:35:11 +00:00
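  // A conceptual sketch of the "min log number to keep" idea described above,
  // with hypothetical names (not the RocksDB internals): after excluding the
  // memtables being flushed, take the minimum WAL number still referenced by
  // the remaining unflushed memtables and record it with the flush edit.
  //
  //   uint64_t MinLogNumberToKeep(const std::vector<MemTableStub>& unflushed) {
  //     uint64_t min_log = std::numeric_limits<uint64_t>::max();
  //     for (const auto& m : unflushed) {
  //       min_log = std::min(min_log, m.earliest_log_number);  // hypothetical field
  //     }
  //     return min_log;  // a value like this is persisted to the MANIFEST
  //   }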
  // Flush installs a new super-version. Get the ref count after that.
  fault_injection_env->SetFilesystemActive(false);
  TEST_SYNC_POINT("DBFlushTest::SyncFail:1");
  TEST_SYNC_POINT("DBFlushTest::SyncFail:2");
  fault_injection_env->SetFilesystemActive(true);
  // Now the background job will do the flush; wait for it.
  // Returns the IO error that happened during flush.
  ASSERT_NOK(dbfull()->TEST_WaitForFlushMemTable());
  ASSERT_EQ("", FilesPerLevel());  // flush failed.
  Destroy(options);
}

TEST_F(DBFlushTest, SyncSkip) {
  Options options = CurrentOptions();

  SyncPoint::GetInstance()->LoadDependency(
      {{"DBFlushTest::SyncSkip:1", "DBImpl::SyncClosedWals:Skip"},
       {"DBImpl::SyncClosedWals:Skip", "DBFlushTest::SyncSkip:2"}});
  SyncPoint::GetInstance()->EnableProcessing();

  Reopen(options);
  ASSERT_OK(Put("key", "value"));

  FlushOptions flush_options;
  flush_options.wait = false;
  ASSERT_OK(dbfull()->Flush(flush_options));

  TEST_SYNC_POINT("DBFlushTest::SyncSkip:1");
  TEST_SYNC_POINT("DBFlushTest::SyncSkip:2");

  // Now the background job will do the flush; wait for it.
  ASSERT_OK(dbfull()->TEST_WaitForFlushMemTable());

  Destroy(options);
}

TEST_F(DBFlushTest, FlushInLowPriThreadPool) {
  // Verify setting an empty high-pri (flush) thread pool causes flushes to be
  // scheduled in the low-pri (compaction) thread pool.
  Options options = CurrentOptions();
  options.level0_file_num_compaction_trigger = 4;
  options.memtable_factory.reset(test::NewSpecialSkipListFactory(1));
  Reopen(options);
  env_->SetBackgroundThreads(0, Env::HIGH);

  std::thread::id tid;
  int num_flushes = 0, num_compactions = 0;
  SyncPoint::GetInstance()->SetCallBack(
      "DBImpl::BGWorkFlush", [&](void* /*arg*/) {
        if (tid == std::thread::id()) {
          tid = std::this_thread::get_id();
        } else {
          ASSERT_EQ(tid, std::this_thread::get_id());
        }
        ++num_flushes;
      });
  SyncPoint::GetInstance()->SetCallBack(
      "DBImpl::BGWorkCompaction", [&](void* /*arg*/) {
        ASSERT_EQ(tid, std::this_thread::get_id());
        ++num_compactions;
      });
  SyncPoint::GetInstance()->EnableProcessing();

  ASSERT_OK(Put("key", "val"));
  for (int i = 0; i < 4; ++i) {
    ASSERT_OK(Put("key", "val"));
    ASSERT_OK(dbfull()->TEST_WaitForFlushMemTable());
  }
  ASSERT_OK(dbfull()->TEST_WaitForCompact());
  ASSERT_EQ(4, num_flushes);
  ASSERT_EQ(1, num_compactions);
}
Fix possible hang issue in ~DBImpl() when flush is scheduled in LOW pool (#8125)
Summary:
In DBImpl::CloseHelper, we wait for bg_compaction_scheduled_
and bg_flush_scheduled_ to drop to 0. Unschedule is called beforehand
to cancel any unscheduled flushes/compactions. It is assumed that
anything in the high-priority pool is a flush, and anything in the
low-priority pool is a compaction. This assumption, however, is broken when
the high-pri pool is full.
As a result, bg_compaction_scheduled_ can go < 0 while bg_flush_scheduled_
remains > 0, and the DB can end up in a hung state.
The fix is to decrement the `bg_{flush,compaction,bottom_compaction}_scheduled_`
counters inside the `Unschedule{Flush,Compaction,BottomCompaction}Callback()`s. The DB
`mutex_` makes the counts atomic in `Unschedule`.
Related discussion: https://github.com/facebook/rocksdb/issues/7928
Pull Request resolved: https://github.com/facebook/rocksdb/pull/8125
Test Plan: Added new test case which hangs without the fix.
Reviewed By: jay-zhuang
Differential Revision: D27390043
Pulled By: ajkr
fbshipit-source-id: 78a367fba9a59ac5607ad24bd1c46dc16d5ec110
2021-03-31 01:34:11 +00:00
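// A minimal sketch of the counter bookkeeping described above, with
// hypothetical names (not the DBImpl implementation): each unschedule
// callback decrements the counter that matches the cancelled job, so the
// close path waiting for both counters to reach zero cannot get stuck.
//
//   void UnscheduleFlushCallback() {
//     std::lock_guard<std::mutex> lock(mu);
//     --bg_flush_scheduled_;
//   }
//   void UnscheduleCompactionCallback() {
//     std::lock_guard<std::mutex> lock(mu);
//     --bg_compaction_scheduled_;
//   }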
// Test that when a flush job is submitted to the low-priority thread pool and
// the DB is closed in the meanwhile, CloseHelper doesn't hang.
TEST_F(DBFlushTest, CloseDBWhenFlushInLowPri) {
  Options options = CurrentOptions();
  options.max_background_flushes = 1;
  options.max_total_wal_size = 8192;

  DestroyAndReopen(options);
  CreateColumnFamilies({"cf1", "cf2"}, options);

  env_->SetBackgroundThreads(0, Env::HIGH);
  env_->SetBackgroundThreads(1, Env::LOW);
  test::SleepingBackgroundTask sleeping_task_low;
  int num_flushes = 0;

  SyncPoint::GetInstance()->SetCallBack("DBImpl::BGWorkFlush",
                                        [&](void* /*arg*/) { ++num_flushes; });

  int num_low_flush_unscheduled = 0;
  SyncPoint::GetInstance()->SetCallBack(
      "DBImpl::UnscheduleLowFlushCallback", [&](void* /*arg*/) {
        num_low_flush_unscheduled++;
        // There should be one flush job in the low pool that needs to be
        // unscheduled.
        ASSERT_EQ(num_low_flush_unscheduled, 1);
      });

  int num_high_flush_unscheduled = 0;
  SyncPoint::GetInstance()->SetCallBack(
      "DBImpl::UnscheduleHighFlushCallback", [&](void* /*arg*/) {
        num_high_flush_unscheduled++;
        // There should be no flush job in the high pool.
        ASSERT_EQ(num_high_flush_unscheduled, 0);
      });

  SyncPoint::GetInstance()->EnableProcessing();

  ASSERT_OK(Put(0, "key1", DummyString(8192)));
  // Block the thread so that the flush cannot run and can be removed from the
  // queue when Unschedule is called.
  env_->Schedule(&test::SleepingBackgroundTask::DoSleepTask, &sleeping_task_low,
                 Env::Priority::LOW);
  sleeping_task_low.WaitUntilSleeping();

  // Trigger a flush; the flush job will be scheduled to a LOW priority thread.
  ASSERT_OK(Put(0, "key2", DummyString(8192)));

  // Close the DB; the flush job in the low priority queue will be removed
  // without running.
  Close();
  sleeping_task_low.WakeUp();
  sleeping_task_low.WaitUntilDone();
  ASSERT_EQ(0, num_flushes);

  ASSERT_OK(TryReopenWithColumnFamilies({"default", "cf1", "cf2"}, options));
ASSERT_OK(Put(0, "key3", DummyString(8192)));
|
|
|
|
ASSERT_OK(Flush(0));
|
|
|
|
ASSERT_EQ(1, num_flushes);
|
|
|
|
}
|
|
|
|
|
2018-01-19 01:32:50 +00:00
|
|
|
TEST_F(DBFlushTest, ManualFlushWithMinWriteBufferNumberToMerge) {
|
|
|
|
Options options = CurrentOptions();
|
|
|
|
options.write_buffer_size = 100;
|
|
|
|
options.max_write_buffer_number = 4;
|
|
|
|
options.min_write_buffer_number_to_merge = 3;
|
|
|
|
Reopen(options);
|
|
|
|
|
|
|
|
SyncPoint::GetInstance()->LoadDependency(
|
|
|
|
{{"DBImpl::BGWorkFlush",
|
|
|
|
"DBFlushTest::ManualFlushWithMinWriteBufferNumberToMerge:1"},
|
|
|
|
{"DBFlushTest::ManualFlushWithMinWriteBufferNumberToMerge:2",
|
2018-01-30 02:44:01 +00:00
|
|
|
"FlushJob::WriteLevel0Table"}});
|
2018-01-19 01:32:50 +00:00
|
|
|
SyncPoint::GetInstance()->EnableProcessing();
|
|
|
|
|
|
|
|
ASSERT_OK(Put("key1", "value1"));
|
|
|
|
|
|
|
|
port::Thread t([&]() {
|
|
|
|
// The call wait for flush to finish, i.e. with flush_options.wait = true.
|
|
|
|
ASSERT_OK(Flush());
|
|
|
|
});
|
|
|
|
|
|
|
|
// Wait for flush start.
|
|
|
|
TEST_SYNC_POINT("DBFlushTest::ManualFlushWithMinWriteBufferNumberToMerge:1");
|
|
|
|
// Insert a second memtable before the manual flush finish.
|
|
|
|
// At the end of the manual flush job, it will check if further flush
|
|
|
|
// is needed, but it will not trigger flush of the second memtable because
|
|
|
|
// min_write_buffer_number_to_merge is not reached.
|
|
|
|
ASSERT_OK(Put("key2", "value2"));
|
|
|
|
ASSERT_OK(dbfull()->TEST_SwitchMemtable());
|
|
|
|
TEST_SYNC_POINT("DBFlushTest::ManualFlushWithMinWriteBufferNumberToMerge:2");
|
|
|
|
|
|
|
|
// Manual flush should return, without waiting for flush indefinitely.
|
|
|
|
t.join();
|
|
|
|
}
|
|
|
|
|
2019-11-27 22:46:38 +00:00
|
|
|
TEST_F(DBFlushTest, ScheduleOnlyOneBgThread) {
|
|
|
|
Options options = CurrentOptions();
|
|
|
|
Reopen(options);
|
|
|
|
SyncPoint::GetInstance()->DisableProcessing();
|
|
|
|
SyncPoint::GetInstance()->ClearAllCallBacks();
|
|
|
|
int called = 0;
|
|
|
|
SyncPoint::GetInstance()->SetCallBack(
|
|
|
|
"DBImpl::MaybeScheduleFlushOrCompaction:AfterSchedule:0", [&](void* arg) {
|
|
|
|
ASSERT_NE(nullptr, arg);
|
Prefer static_cast in place of most reinterpret_cast (#12308)
Summary:
The following are risks associated with pointer-to-pointer reinterpret_cast:
* Can produce the "wrong result" (crash or memory corruption). IIRC, in theory this can happen for any up-cast or down-cast for a non-standard-layout type, though in practice would only happen for multiple inheritance cases (where the base class pointer might be "inside" the derived object). We don't use multiple inheritance a lot, but we do.
* Can mask useful compiler errors upon code change, including converting between unrelated pointer types that you are expecting to be related, and converting between pointer and scalar types unintentionally.
I can only think of some obscure cases where static_cast could be troublesome when it compiles as a replacement:
* Going through `void*` could plausibly cause unnecessary or broken pointer arithmetic. Suppose we have
`struct Derived: public Base1, public Base2`. If we have `Derived*` -> `void*` -> `Base2*` -> `Derived*` through reinterpret casts, this could plausibly work (though technical UB) assuming the `Base2*` is not dereferenced. Changing to static cast could introduce breaking pointer arithmetic.
* Unnecessary (but safe) pointer arithmetic could arise in a case like `Derived*` -> `Base2*` -> `Derived*` where before the Base2 pointer might not have been dereferenced. This could potentially affect performance.
With some light scripting, I tried replacing pointer-to-pointer reinterpret_casts with static_cast and kept the cases that still compile. Most occurrences of reinterpret_cast have successfully been changed (except for java/ and third-party/). 294 changed, 257 remain.
A couple of related interventions included here:
* Previously Cache::Handle was not actually derived from in the implementations and just used as a `void*` stand-in with reinterpret_cast. Now there is a relationship to allow static_cast. In theory, this could introduce pointer arithmetic (as described above) but is unlikely without multiple inheritance AND non-empty Cache::Handle.
* Remove some unnecessary casts to void* as this is allowed to be implicit (for better or worse).
Most of the remaining reinterpret_casts are for converting to/from raw bytes of objects. We could consider better idioms for these patterns in follow-up work.
I wish there were a way to implement a template variant of static_cast that would only compile if no pointer arithmetic is generated, but best I can tell, this is not possible. AFAIK the best you could do is a dynamic check that the void* conversion after the static cast is unchanged.
Pull Request resolved: https://github.com/facebook/rocksdb/pull/12308
Test Plan: existing tests, CI
Reviewed By: ltamasi
Differential Revision: D53204947
Pulled By: pdillinger
fbshipit-source-id: 9de23e618263b0d5b9820f4e15966876888a16e2
2024-02-07 18:44:11 +00:00
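        // A small illustration of the multiple-inheritance hazard mentioned in
        // the commit message above (illustrative; not code from this file):
        //
        //   struct B1 { int a = 1; };
        //   struct B2 { int b = 2; };
        //   struct D : B1, B2 {};
        //   D d;
        //   B2* good = static_cast<B2*>(&d);      // pointer adjusted to the B2 subobject
        //   B2* bad = reinterpret_cast<B2*>(&d);  // same address as &d: wrong subobject (UB)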
        auto unscheduled_flushes = *static_cast<int*>(arg);
        ASSERT_EQ(0, unscheduled_flushes);
        ++called;
      });
  SyncPoint::GetInstance()->EnableProcessing();

  ASSERT_OK(Put("a", "foo"));
  FlushOptions flush_opts;
  ASSERT_OK(dbfull()->Flush(flush_opts));
  ASSERT_EQ(1, called);

  SyncPoint::GetInstance()->DisableProcessing();
  SyncPoint::GetInstance()->ClearAllCallBacks();
}

// The following 3 tests are designed for testing garbage statistics at flush
// time.
//
// ======= General Information ======= (from GitHub Wiki).
// There are three scenarios where a memtable flush can be triggered:
//
// 1 - Memtable size exceeds ColumnFamilyOptions::write_buffer_size
//     after a write.
// 2 - Total memtable size across all column families exceeds
//     DBOptions::db_write_buffer_size,
//     or DBOptions::write_buffer_manager signals a flush. In this scenario
//     the largest memtable will be flushed.
// 3 - Total WAL file size exceeds DBOptions::max_total_wal_size.
//     In this scenario the memtable with the oldest data will be flushed,
//     in order to allow the WAL file with data from this memtable to be
//     purged.
//
// As a result, a memtable can be flushed before it is full. This is one
// reason the generated SST file can be smaller than the corresponding
// memtable. Compression is another factor that makes the SST file smaller
// than the corresponding memtable, since data in the memtable is
// uncompressed.
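// A configuration sketch (illustrative values, not defaults) of how the three
// triggers above map onto options; not used by the tests below:
//
//   Options opts;
//   opts.write_buffer_size = 64 << 20;      // trigger 1: per-memtable limit
//   opts.db_write_buffer_size = 256 << 20;  // trigger 2: limit across all CFs
//   opts.max_total_wal_size = 512 << 20;    // trigger 3: total WAL size limit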
TEST_F(DBFlushTest, StatisticsGarbageBasic) {
  Options options = CurrentOptions();

  // The following options are used to enforce several values that
  // may already exist as default values to make this test resilient
  // to default value updates in the future.
  options.statistics = CreateDBStatistics();

  // Record all statistics.
  options.statistics->set_stats_level(StatsLevel::kAll);

  // create the DB if it's not already present
  options.create_if_missing = true;

  // Useful for now as we are trying to compare uncompressed data savings on
  // flush().
  options.compression = kNoCompression;

  // Prevent memtable in place updates. Should already be disabled
  // (from Wiki:
  //  In place updates can be enabled by toggling on the bool
  //  inplace_update_support flag. However, this flag is by default set to
  //  false
  //  because this thread-safe in-place update support is not compatible
  //  with concurrent memtable writes. Note that the bool
  //  allow_concurrent_memtable_write is set to true by default )
  options.inplace_update_support = false;
  options.allow_concurrent_memtable_write = true;

  // Enforce size of a single MemTable to 64MB (64MB = 67108864 bytes).
  options.write_buffer_size = 64 << 20;

  ASSERT_OK(TryReopen(options));

  // Put the same key-values multiple times.
  // The encoded length of a db entry in the memtable is defined in
  // db/memtable.cc (MemTable::Add) as the variable:
  //   encoded_len = VarintLength(internal_key_size)  --> = log_256(internal_key_size):
  //                                                        min # of bytes necessary
  //                                                        to store internal_key_size
  //               + internal_key_size                --> = actual key string
  //                                                        (size key_size: w/o
  //                                                        terminating null char)
  //                                                        + 8 bytes for fixed
  //                                                        uint64 "seq number +
  //                                                        insertion type"
  //               + VarintLength(val_size)           --> = min # of bytes to
  //                                                        store val_size
  //               + val_size                         --> = actual value string
  // For example, in our situation, "key1" : size 4, "value1" : size 6
  // (the terminating null characters are not copied over to the memtable).
  // And therefore encoded_len = 1 + (4+8) + 1 + 6 = 20 bytes per entry.
  // However, in terms of raw data contained in the memtable, and written
  // over to the SSTable, we only count internal_key_size and val_size,
  // because this is the only raw chunk of bytes that contains everything
  // necessary to reconstruct a user entry: sequence number, insertion type,
  // key, and value.

  // To test the relevance of our Memtable garbage statistics,
  // namely MEMTABLE_PAYLOAD_BYTES_AT_FLUSH and MEMTABLE_GARBAGE_BYTES_AT_FLUSH,
  // we insert K-V pairs with 3 distinct keys (of length 4),
  // and random values of arbitrary length RAND_VALUES_LENGTH,
  // and we repeat this step NUM_REPEAT times total.
  // At the end, we insert 3 final K-V pairs with the same 3 keys
  // and known values (these will be the final values, of length 6).
  // I chose NUM_REPEAT=2,000 such that no automatic flush is
  // triggered (the number of bytes in the memtable is therefore
  // well below any meaningful heuristic for a memtable of size 64MB).
  // As a result, since each K-V pair is inserted as a payload
  // of N meaningful bytes (sequence number, insertion type,
  // key, and value = 8 + 4 + RAND_VALUES_LENGTH),
  // MEMTABLE_GARBAGE_BYTES_AT_FLUSH should be equal to 2,000 * N bytes
  // and MEMTABLE_PAYLOAD_BYTES_AT_FLUSH = MEMTABLE_GARBAGE_BYTES_AT_FLUSH +
  // (3*(8 + 4 + 6)) bytes. For RAND_VALUES_LENGTH = 172 (arbitrary value), we
  // expect:
  //   N = 8 + 4 + 172 = 184 bytes
  //   MEMTABLE_GARBAGE_BYTES_AT_FLUSH = 2,000 * 184 = 368,000 bytes.
  //   MEMTABLE_PAYLOAD_BYTES_AT_FLUSH = 368,000 + 3*18 = 368,054 bytes.
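  // Sanity-check of the arithmetic in the comment above (literal values only;
  // the actual expectations are accumulated below):
  static_assert(8 + 4 + 172 == 184, "per-entry bytes N");
  static_assert(2000 * 184 == 368000, "expected garbage bytes");
  static_assert(368000 + 3 * (8 + 4 + 6) == 368054, "expected payload bytes");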
  const size_t NUM_REPEAT = 2000;
  const size_t RAND_VALUES_LENGTH = 172;
  const std::string KEY1 = "key1";
  const std::string KEY2 = "key2";
  const std::string KEY3 = "key3";
  const std::string VALUE1 = "value1";
  const std::string VALUE2 = "value2";
  const std::string VALUE3 = "value3";
  uint64_t EXPECTED_MEMTABLE_PAYLOAD_BYTES_AT_FLUSH = 0;
  uint64_t EXPECTED_MEMTABLE_GARBAGE_BYTES_AT_FLUSH = 0;

  Random rnd(301);
  // Insertion of K-V pairs, multiple times.
  for (size_t i = 0; i < NUM_REPEAT; i++) {
    // Create value strings of arbitrary length RAND_VALUES_LENGTH bytes.
    std::string p_v1 = rnd.RandomString(RAND_VALUES_LENGTH);
    std::string p_v2 = rnd.RandomString(RAND_VALUES_LENGTH);
    std::string p_v3 = rnd.RandomString(RAND_VALUES_LENGTH);
    ASSERT_OK(Put(KEY1, p_v1));
    ASSERT_OK(Put(KEY2, p_v2));
    ASSERT_OK(Put(KEY3, p_v3));
    EXPECTED_MEMTABLE_GARBAGE_BYTES_AT_FLUSH +=
        KEY1.size() + p_v1.size() + sizeof(uint64_t);
    EXPECTED_MEMTABLE_GARBAGE_BYTES_AT_FLUSH +=
        KEY2.size() + p_v2.size() + sizeof(uint64_t);
    EXPECTED_MEMTABLE_GARBAGE_BYTES_AT_FLUSH +=
        KEY3.size() + p_v3.size() + sizeof(uint64_t);
  }

  // The memtable data bytes include the "garbage"
  // bytes along with the useful payload.
  EXPECTED_MEMTABLE_PAYLOAD_BYTES_AT_FLUSH =
      EXPECTED_MEMTABLE_GARBAGE_BYTES_AT_FLUSH;

  ASSERT_OK(Put(KEY1, VALUE1));
  ASSERT_OK(Put(KEY2, VALUE2));
  ASSERT_OK(Put(KEY3, VALUE3));

  // Add useful payload to the memtable data bytes:
  EXPECTED_MEMTABLE_PAYLOAD_BYTES_AT_FLUSH +=
      KEY1.size() + VALUE1.size() + KEY2.size() + VALUE2.size() + KEY3.size() +
      VALUE3.size() + 3 * sizeof(uint64_t);

  // We assert that the last K-V pairs have been successfully inserted,
  // and that the valid values are VALUE1, VALUE2, VALUE3.
  PinnableSlice value;
  ASSERT_OK(Get(KEY1, &value));
  ASSERT_EQ(value.ToString(), VALUE1);
  ASSERT_OK(Get(KEY2, &value));
  ASSERT_EQ(value.ToString(), VALUE2);
  ASSERT_OK(Get(KEY3, &value));
  ASSERT_EQ(value.ToString(), VALUE3);

  // Force flush to SST. Increments the statistics counter.
  ASSERT_OK(Flush());

  // Collect statistics.
  uint64_t mem_data_bytes =
      TestGetTickerCount(options, MEMTABLE_PAYLOAD_BYTES_AT_FLUSH);
  uint64_t mem_garbage_bytes =
      TestGetTickerCount(options, MEMTABLE_GARBAGE_BYTES_AT_FLUSH);

  EXPECT_EQ(mem_data_bytes, EXPECTED_MEMTABLE_PAYLOAD_BYTES_AT_FLUSH);
  EXPECT_EQ(mem_garbage_bytes, EXPECTED_MEMTABLE_GARBAGE_BYTES_AT_FLUSH);

  Close();
}

TEST_F(DBFlushTest, StatisticsGarbageInsertAndDeletes) {
  Options options = CurrentOptions();
  options.statistics = CreateDBStatistics();
  options.statistics->set_stats_level(StatsLevel::kAll);
  options.create_if_missing = true;
  options.compression = kNoCompression;
  options.inplace_update_support = false;
  options.allow_concurrent_memtable_write = true;
  options.write_buffer_size = 67108864;

  ASSERT_OK(TryReopen(options));

  const size_t NUM_REPEAT = 2000;
  const size_t RAND_VALUES_LENGTH = 37;
  const std::string KEY1 = "key1";
  const std::string KEY2 = "key2";
  const std::string KEY3 = "key3";
  const std::string KEY4 = "key4";
  const std::string KEY5 = "key5";
  const std::string KEY6 = "key6";

  uint64_t EXPECTED_MEMTABLE_PAYLOAD_BYTES_AT_FLUSH = 0;
  uint64_t EXPECTED_MEMTABLE_GARBAGE_BYTES_AT_FLUSH = 0;

  WriteBatch batch;

  Random rnd(301);
  // Insertion of K-V pairs, multiple times.
  for (size_t i = 0; i < NUM_REPEAT; i++) {
    // Create value strings of arbitrary length RAND_VALUES_LENGTH bytes.
    std::string p_v1 = rnd.RandomString(RAND_VALUES_LENGTH);
    std::string p_v2 = rnd.RandomString(RAND_VALUES_LENGTH);
    std::string p_v3 = rnd.RandomString(RAND_VALUES_LENGTH);
    ASSERT_OK(Put(KEY1, p_v1));
    ASSERT_OK(Put(KEY2, p_v2));
    ASSERT_OK(Put(KEY3, p_v3));
    EXPECTED_MEMTABLE_GARBAGE_BYTES_AT_FLUSH +=
        KEY1.size() + p_v1.size() + sizeof(uint64_t);
    EXPECTED_MEMTABLE_GARBAGE_BYTES_AT_FLUSH +=
        KEY2.size() + p_v2.size() + sizeof(uint64_t);
    EXPECTED_MEMTABLE_GARBAGE_BYTES_AT_FLUSH +=
        KEY3.size() + p_v3.size() + sizeof(uint64_t);
    ASSERT_OK(Delete(KEY1));
    ASSERT_OK(Delete(KEY2));
    ASSERT_OK(Delete(KEY3));
    EXPECTED_MEMTABLE_GARBAGE_BYTES_AT_FLUSH +=
        KEY1.size() + KEY2.size() + KEY3.size() + 3 * sizeof(uint64_t);
  }

  // The memtable data bytes include the "garbage"
  // bytes along with the useful payload.
  EXPECTED_MEMTABLE_PAYLOAD_BYTES_AT_FLUSH =
      EXPECTED_MEMTABLE_GARBAGE_BYTES_AT_FLUSH;

  // Note : one set of deletes for KEY1, KEY2, KEY3 is written to the
  // SSTable to propagate the delete operations to K-V pairs
  // that could have been inserted into the database during past Flush
  // operations.
  EXPECTED_MEMTABLE_GARBAGE_BYTES_AT_FLUSH -=
      KEY1.size() + KEY2.size() + KEY3.size() + 3 * sizeof(uint64_t);

  // Additional useful payload.
  ASSERT_OK(Delete(KEY4));
  ASSERT_OK(Delete(KEY5));
  ASSERT_OK(Delete(KEY6));

  // Add useful payload to the memtable data bytes:
  EXPECTED_MEMTABLE_PAYLOAD_BYTES_AT_FLUSH +=
      KEY4.size() + KEY5.size() + KEY6.size() + 3 * sizeof(uint64_t);

  // We assert that the K-V pairs have been successfully deleted.
  PinnableSlice value;
  ASSERT_NOK(Get(KEY1, &value));
  ASSERT_NOK(Get(KEY2, &value));
  ASSERT_NOK(Get(KEY3, &value));

  // Force flush to SST. Increments the statistics counter.
  ASSERT_OK(Flush());

  // Collect statistics.
  uint64_t mem_data_bytes =
      TestGetTickerCount(options, MEMTABLE_PAYLOAD_BYTES_AT_FLUSH);
  uint64_t mem_garbage_bytes =
      TestGetTickerCount(options, MEMTABLE_GARBAGE_BYTES_AT_FLUSH);

  EXPECT_EQ(mem_data_bytes, EXPECTED_MEMTABLE_PAYLOAD_BYTES_AT_FLUSH);
  EXPECT_EQ(mem_garbage_bytes, EXPECTED_MEMTABLE_GARBAGE_BYTES_AT_FLUSH);

  Close();
}

TEST_F(DBFlushTest, StatisticsGarbageRangeDeletes) {
  Options options = CurrentOptions();
  options.statistics = CreateDBStatistics();
  options.statistics->set_stats_level(StatsLevel::kAll);
  options.create_if_missing = true;
  options.compression = kNoCompression;
  options.inplace_update_support = false;
  options.allow_concurrent_memtable_write = true;
  options.write_buffer_size = 67108864;

  ASSERT_OK(TryReopen(options));

  const size_t NUM_REPEAT = 1000;
  const size_t RAND_VALUES_LENGTH = 42;
  const std::string KEY1 = "key1";
  const std::string KEY2 = "key2";
  const std::string KEY3 = "key3";
  const std::string KEY4 = "key4";
  const std::string KEY5 = "key5";
  const std::string KEY6 = "key6";
  const std::string VALUE3 = "value3";

  uint64_t EXPECTED_MEMTABLE_PAYLOAD_BYTES_AT_FLUSH = 0;
  uint64_t EXPECTED_MEMTABLE_GARBAGE_BYTES_AT_FLUSH = 0;

  Random rnd(301);
  // Insertion of K-V pairs, multiple times.
  // Also insert DeleteRanges.
  for (size_t i = 0; i < NUM_REPEAT; i++) {
    // Create value strings of arbitrary length RAND_VALUES_LENGTH bytes.
    std::string p_v1 = rnd.RandomString(RAND_VALUES_LENGTH);
    std::string p_v2 = rnd.RandomString(RAND_VALUES_LENGTH);
    std::string p_v3 = rnd.RandomString(RAND_VALUES_LENGTH);
    ASSERT_OK(Put(KEY1, p_v1));
    ASSERT_OK(Put(KEY2, p_v2));
    ASSERT_OK(Put(KEY3, p_v3));
    EXPECTED_MEMTABLE_GARBAGE_BYTES_AT_FLUSH +=
        KEY1.size() + p_v1.size() + sizeof(uint64_t);
    EXPECTED_MEMTABLE_GARBAGE_BYTES_AT_FLUSH +=
        KEY2.size() + p_v2.size() + sizeof(uint64_t);
    EXPECTED_MEMTABLE_GARBAGE_BYTES_AT_FLUSH +=
        KEY3.size() + p_v3.size() + sizeof(uint64_t);
    ASSERT_OK(db_->DeleteRange(WriteOptions(), db_->DefaultColumnFamily(), KEY1,
                               KEY2));
    // Note: DeleteRange has an exclusive upper bound, e.g. here: [KEY2,KEY3)
    // is deleted.
    ASSERT_OK(db_->DeleteRange(WriteOptions(), db_->DefaultColumnFamily(), KEY2,
                               KEY3));
    // Delete ranges are stored as a regular K-V pair, with key=STARTKEY,
    // value=ENDKEY.
    EXPECTED_MEMTABLE_GARBAGE_BYTES_AT_FLUSH +=
        (KEY1.size() + KEY2.size() + sizeof(uint64_t)) +
        (KEY2.size() + KEY3.size() + sizeof(uint64_t));
  }

  // The memtable data bytes include the "garbage"
  // bytes along with the useful payload.
  EXPECTED_MEMTABLE_PAYLOAD_BYTES_AT_FLUSH =
      EXPECTED_MEMTABLE_GARBAGE_BYTES_AT_FLUSH;

  // Note : one set of DeleteRanges for (KEY1, KEY2) and (KEY2, KEY3) is written
  // to the SSTable to propagate the DeleteRange operations to K-V pairs that
  // could have been inserted into the database during past Flush operations.
  EXPECTED_MEMTABLE_GARBAGE_BYTES_AT_FLUSH -=
      (KEY1.size() + KEY2.size() + sizeof(uint64_t)) +
      (KEY2.size() + KEY3.size() + sizeof(uint64_t));

  // Overwrite KEY3 with a known value (VALUE3).
  // Note that during the whole time KEY3 has never been deleted
  // by the RangeDeletes.
  ASSERT_OK(Put(KEY3, VALUE3));
  EXPECTED_MEMTABLE_PAYLOAD_BYTES_AT_FLUSH +=
      KEY3.size() + VALUE3.size() + sizeof(uint64_t);

  // Additional useful payload.
  ASSERT_OK(
      db_->DeleteRange(WriteOptions(), db_->DefaultColumnFamily(), KEY4, KEY5));
  ASSERT_OK(
      db_->DeleteRange(WriteOptions(), db_->DefaultColumnFamily(), KEY5, KEY6));

  // Add useful payload to the memtable data bytes:
  EXPECTED_MEMTABLE_PAYLOAD_BYTES_AT_FLUSH +=
      (KEY4.size() + KEY5.size() + sizeof(uint64_t)) +
      (KEY5.size() + KEY6.size() + sizeof(uint64_t));

  // We assert that the K-V pairs have been successfully deleted.
  PinnableSlice value;
  ASSERT_NOK(Get(KEY1, &value));
  ASSERT_NOK(Get(KEY2, &value));
  // And that KEY3's value is correct.
  ASSERT_OK(Get(KEY3, &value));
  ASSERT_EQ(value, VALUE3);

  // Force flush to SST. Increments the statistics counter.
  ASSERT_OK(Flush());

  // Collect statistics.
  uint64_t mem_data_bytes =
      TestGetTickerCount(options, MEMTABLE_PAYLOAD_BYTES_AT_FLUSH);
  uint64_t mem_garbage_bytes =
      TestGetTickerCount(options, MEMTABLE_GARBAGE_BYTES_AT_FLUSH);

  EXPECT_EQ(mem_data_bytes, EXPECTED_MEMTABLE_PAYLOAD_BYTES_AT_FLUSH);
  EXPECT_EQ(mem_garbage_bytes, EXPECTED_MEMTABLE_GARBAGE_BYTES_AT_FLUSH);

  Close();
}

// This simple Listener can only handle one flush at a time.
class TestFlushListener : public EventListener {
 public:
  TestFlushListener(Env* env, DBFlushTest* test)
      : slowdown_count(0), stop_count(0), db_closed(), env_(env), test_(test) {
    db_closed = false;
  }

  ~TestFlushListener() override {
    prev_fc_info_.status.PermitUncheckedError();  // Ignore the status
  }

  void OnTableFileCreated(const TableFileCreationInfo& info) override {
    // remember the info for later checking the FlushJobInfo.
    prev_fc_info_ = info;
    ASSERT_GT(info.db_name.size(), 0U);
    ASSERT_GT(info.cf_name.size(), 0U);
    ASSERT_GT(info.file_path.size(), 0U);
    ASSERT_GT(info.job_id, 0);
    ASSERT_GT(info.table_properties.data_size, 0U);
    ASSERT_GT(info.table_properties.raw_key_size, 0U);
    ASSERT_GT(info.table_properties.raw_value_size, 0U);
    ASSERT_GT(info.table_properties.num_data_blocks, 0U);
    ASSERT_GT(info.table_properties.num_entries, 0U);
    ASSERT_EQ(info.file_checksum, kUnknownFileChecksum);
    ASSERT_EQ(info.file_checksum_func_name, kUnknownFileChecksumFuncName);
  }

  void OnFlushCompleted(DB* db, const FlushJobInfo& info) override {
    flushed_dbs_.push_back(db);
    flushed_column_family_names_.push_back(info.cf_name);
    if (info.triggered_writes_slowdown) {
      slowdown_count++;
    }
    if (info.triggered_writes_stop) {
      stop_count++;
    }
    // verify whether the previously created file matches the flushed file.
    ASSERT_EQ(prev_fc_info_.db_name, db->GetName());
    ASSERT_EQ(prev_fc_info_.cf_name, info.cf_name);
    ASSERT_EQ(prev_fc_info_.job_id, info.job_id);
    ASSERT_EQ(prev_fc_info_.file_path, info.file_path);
    ASSERT_EQ(TableFileNameToNumber(info.file_path), info.file_number);

    // Note: the following chunk relies on the notification pertaining to the
    // database pointed to by DBTestBase::db_, and is thus bypassed when
    // that assumption does not hold (see the test case MultiDBMultiListeners
    // below).
    ASSERT_TRUE(test_);
    if (db == test_->db_) {
      std::vector<std::vector<FileMetaData>> files_by_level;
      test_->dbfull()->TEST_GetFilesMetaData(db->DefaultColumnFamily(),
                                             &files_by_level);

      ASSERT_FALSE(files_by_level.empty());
      auto it = std::find_if(files_by_level[0].begin(), files_by_level[0].end(),
                             [&](const FileMetaData& meta) {
                               return meta.fd.GetNumber() == info.file_number;
                             });
      ASSERT_NE(it, files_by_level[0].end());
      ASSERT_EQ(info.oldest_blob_file_number, it->oldest_blob_file_number);
    }

    ASSERT_EQ(db->GetEnv()->GetThreadID(), info.thread_id);
    ASSERT_GT(info.thread_id, 0U);
  }

  std::vector<std::string> flushed_column_family_names_;
  std::vector<DB*> flushed_dbs_;
  int slowdown_count;
  int stop_count;
  bool db_closing;
  std::atomic_bool db_closed;
  TableFileCreationInfo prev_fc_info_;

 protected:
  Env* env_;
  DBFlushTest* test_;
};
|
|
|
|
|
Fix bug of prematurely excluded CF in atomic flush contains unflushed data that should've been included in the atomic flush (#11148)
Summary:
**Context:**
Atomic flush should guarantee recoverability of all data of seqno up to the max seqno of the flush. It achieves this by ensuring all such data are flushed by the time the atomic flush finishes, through `SelectColumnFamiliesForAtomicFlush()`. However, our crash test exposed the following case, where a CF excluded from an atomic flush contains unflushed data of seqno less than the max seqno of that atomic flush, and loses that data with `WriteOptions::DisableWAL=true` in the face of a crash right after the atomic flush finishes.
```
./db_stress --preserve_unverified_changes=1 --reopen=0 --acquire_snapshot_one_in=0 --adaptive_readahead=1 --allow_data_in_errors=True --async_io=1 --atomic_flush=1 --avoid_flush_during_recovery=0 --avoid_unnecessary_blocking_io=0 --backup_max_size=104857600 --backup_one_in=0 --batch_protection_bytes_per_key=0 --block_size=16384 --bloom_bits=15 --bottommost_compression_type=none --bytes_per_sync=262144 --cache_index_and_filter_blocks=0 --cache_size=8388608 --cache_type=lru_cache --charge_compression_dictionary_building_buffer=0 --charge_file_metadata=1 --charge_filter_construction=0 --charge_table_reader=0 --checkpoint_one_in=0 --checksum_type=kXXH3 --clear_column_family_one_in=0 --compact_files_one_in=0 --compact_range_one_in=0 --compaction_pri=1 --compaction_ttl=100 --compression_max_dict_buffer_bytes=134217727 --compression_max_dict_bytes=16384 --compression_parallel_threads=1 --compression_type=lz4hc --compression_use_zstd_dict_trainer=0 --compression_zstd_max_train_bytes=0 --continuous_verification_interval=0 --data_block_index_type=0 --db=$db --db_write_buffer_size=1048576 --delpercent=4 --delrangepercent=1 --destroy_db_initially=0 --detect_filter_construct_corruption=0 --disable_wal=1 --enable_compaction_filter=0 --enable_pipelined_write=0 --expected_values_dir=$exp --fail_if_options_file_error=0 --fifo_allow_compaction=0 --file_checksum_impl=none --flush_one_in=0 --format_version=5 --get_current_wal_file_one_in=0 --get_live_files_one_in=100 --get_property_one_in=0 --get_sorted_wal_files_one_in=0 --index_block_restart_interval=2 --index_type=0 --ingest_external_file_one_in=0 --initial_auto_readahead_size=524288 --iterpercent=10 --key_len_percent_dist=1,30,69 --level_compaction_dynamic_level_bytes=True --long_running_snapshots=1 --manual_wal_flush_one_in=100 --mark_for_compaction_one_file_in=0 --max_auto_readahead_size=0 --max_background_compactions=20 --max_bytes_for_level_base=10485760 --max_key=10000 --max_key_len=3 --max_manifest_file_size=1073741824 --max_write_batch_group_size_bytes=64 --max_write_buffer_number=3 --max_write_buffer_size_to_maintain=0 --memtable_prefix_bloom_size_ratio=0.01 --memtable_protection_bytes_per_key=4 --memtable_whole_key_filtering=0 --memtablerep=skip_list --min_write_buffer_number_to_merge=2 --mmap_read=1 --mock_direct_io=False --nooverwritepercent=1 --num_file_reads_for_auto_readahead=0 --open_files=-1 --open_metadata_write_fault_one_in=0 --open_read_fault_one_in=0 --open_write_fault_one_in=0 --ops_per_thread=100000000 --optimize_filters_for_memory=1 --paranoid_file_checks=1 --partition_filters=0 --partition_pinning=3 --pause_background_one_in=0 --periodic_compaction_seconds=100 --prefix_size=8 --prefixpercent=5 --prepopulate_block_cache=0 --preserve_internal_time_seconds=3600 --progress_reports=0 --read_fault_one_in=32 --readahead_size=16384 --readpercent=50 --recycle_log_file_num=0 --ribbon_starting_level=6 --secondary_cache_fault_one_in=0 --set_options_one_in=10000 --snapshot_hold_ops=100000 --sst_file_manager_bytes_per_sec=104857600 --sst_file_manager_bytes_per_truncate=1048576 --stats_dump_period_sec=10 --subcompactions=1 --sync=0 --sync_fault_injection=0 --target_file_size_base=524288 --target_file_size_multiplier=2 --test_batches_snapshots=0 --top_level_index_pinning=0 --unpartitioned_pinning=1 --use_direct_io_for_flush_and_compaction=0 --use_direct_reads=0 --use_full_merge_v1=0 --use_merge=0 --use_multiget=1 --use_put_entity_one_in=0 --user_timestamp_size=0 --value_size_mult=32 --verify_checksum=1 --verify_checksum_one_in=0 
--verify_db_one_in=1000 --verify_sst_unique_id_in_manifest=1 --wal_bytes_per_sync=524288 --wal_compression=none --write_buffer_size=524288 --write_dbid_to_manifest=1 --write_fault_one_in=0 --writepercent=30 &
pid=$!
sleep 0.2
sleep 10
kill $pid
sleep 0.2
./db_stress --ops_per_thread=1 --preserve_unverified_changes=1 --reopen=0 --acquire_snapshot_one_in=0 --adaptive_readahead=1 --allow_data_in_errors=True --async_io=1 --atomic_flush=1 --avoid_flush_during_recovery=0 --avoid_unnecessary_blocking_io=0 --backup_max_size=104857600 --backup_one_in=0 --batch_protection_bytes_per_key=0 --block_size=16384 --bloom_bits=15 --bottommost_compression_type=none --bytes_per_sync=262144 --cache_index_and_filter_blocks=0 --cache_size=8388608 --cache_type=lru_cache --charge_compression_dictionary_building_buffer=0 --charge_file_metadata=1 --charge_filter_construction=0 --charge_table_reader=0 --checkpoint_one_in=0 --checksum_type=kXXH3 --clear_column_family_one_in=0 --compact_files_one_in=0 --compact_range_one_in=0 --compaction_pri=1 --compaction_ttl=100 --compression_max_dict_buffer_bytes=134217727 --compression_max_dict_bytes=16384 --compression_parallel_threads=1 --compression_type=lz4hc --compression_use_zstd_dict_trainer=0 --compression_zstd_max_train_bytes=0 --continuous_verification_interval=0 --data_block_index_type=0 --db=$db --db_write_buffer_size=1048576 --delpercent=4 --delrangepercent=1 --destroy_db_initially=0 --detect_filter_construct_corruption=0 --disable_wal=1 --enable_compaction_filter=0 --enable_pipelined_write=0 --expected_values_dir=$exp --fail_if_options_file_error=0 --fifo_allow_compaction=0 --file_checksum_impl=none --flush_one_in=0 --format_version=5 --get_current_wal_file_one_in=0 --get_live_files_one_in=100 --get_property_one_in=0 --get_sorted_wal_files_one_in=0 --index_block_restart_interval=2 --index_type=0 --ingest_external_file_one_in=0 --initial_auto_readahead_size=524288 --iterpercent=10 --key_len_percent_dist=1,30,69 --level_compaction_dynamic_level_bytes=True --long_running_snapshots=1 --manual_wal_flush_one_in=100 --mark_for_compaction_one_file_in=0 --max_auto_readahead_size=0 --max_background_compactions=20 --max_bytes_for_level_base=10485760 --max_key=10000 --max_key_len=3 --max_manifest_file_size=1073741824 --max_write_batch_group_size_bytes=64 --max_write_buffer_number=3 --max_write_buffer_size_to_maintain=0 --memtable_prefix_bloom_size_ratio=0.01 --memtable_protection_bytes_per_key=4 --memtable_whole_key_filtering=0 --memtablerep=skip_list --min_write_buffer_number_to_merge=2 --mmap_read=1 --mock_direct_io=False --nooverwritepercent=1 --num_file_reads_for_auto_readahead=0 --open_files=-1 --open_metadata_write_fault_one_in=0 --open_read_fault_one_in=0 --open_write_fault_one_in=0 --ops_per_thread=100000000 --optimize_filters_for_memory=1 --paranoid_file_checks=1 --partition_filters=0 --partition_pinning=3 --pause_background_one_in=0 --periodic_compaction_seconds=100 --prefix_size=8 --prefixpercent=5 --prepopulate_block_cache=0 --preserve_internal_time_seconds=3600 --progress_reports=0 --read_fault_one_in=32 --readahead_size=16384 --readpercent=50 --recycle_log_file_num=0 --ribbon_starting_level=6 --secondary_cache_fault_one_in=0 --set_options_one_in=10000 --snapshot_hold_ops=100000 --sst_file_manager_bytes_per_sec=104857600 --sst_file_manager_bytes_per_truncate=1048576 --stats_dump_period_sec=10 --subcompactions=1 --sync=0 --sync_fault_injection=0 --target_file_size_base=524288 --target_file_size_multiplier=2 --test_batches_snapshots=0 --top_level_index_pinning=0 --unpartitioned_pinning=1 --use_direct_io_for_flush_and_compaction=0 --use_direct_reads=0 --use_full_merge_v1=0 --use_merge=0 --use_multiget=1 --use_put_entity_one_in=0 --user_timestamp_size=0 --value_size_mult=32 --verify_checksum=1 --verify_checksum_one_in=0 
--verify_db_one_in=1000 --verify_sst_unique_id_in_manifest=1 --wal_bytes_per_sync=524288 --wal_compression=none --write_buffer_size=524288 --write_dbid_to_manifest=1 --write_fault_one_in=0 --writepercent=30 &
pid=$!
sleep 0.2
sleep 40
kill $pid
sleep 0.2
Verification failed for column family 6 key 0000000000000239000000000000012B0000000000000138 (56622): value_from_db: , value_from_expected: 4A6331754E4F4C4D42434041464744455A5B58595E5F5C5D5253505156575455, msg: Value not found: NotFound:
Crash-recovery verification failed :(
No writes or ops?
Verification failed :(
```
The bug is due to the following:
- When atomic flush is used, an empty CF is legally [excluded](https://github.com/facebook/rocksdb/blob/7.10.fb/db/db_filesnapshot.cc#L39) in `SelectColumnFamiliesForAtomicFlush` as the first step of `DBImpl::FlushForGetLiveFiles` before [passing](https://github.com/facebook/rocksdb/blob/7.10.fb/db/db_filesnapshot.cc#L42) the included CFDs to `AtomicFlushMemTables`.
- But [later](https://github.com/facebook/rocksdb/blob/7.10.fb/db/db_impl/db_impl_compaction_flush.cc#L2133) in `AtomicFlushMemTables`, `WaitUntilFlushWouldNotStallWrites` will [release the db mutex](https://github.com/facebook/rocksdb/blob/7.10.fb/db/db_impl/db_impl_compaction_flush.cc#L2403), during which data@seqno N can be inserted into the excluded CF and data@seqno M can be inserted into one of the included CFs, where M > N.
- However, data@seqno N in the already-excluded CF is thus left out of this atomic flush, even though seqno N is less than seqno M.
**Summary:**
- Replace `SelectColumnFamiliesForAtomicFlush()`-before-`AtomicFlushMemTables()` with `SelectColumnFamiliesForAtomicFlush()`-after-wait-within-`AtomicFlushMemTables()`, so that no write affecting the recoverability of this atomic flush job (i.e., a change to the max seqno of this atomic flush, or an insertion into an excluded CF of data with a smaller seqno than that max seqno) can happen after calling `SelectColumnFamiliesForAtomicFlush()`.
- For the above, refactored and clarified the comments on `SelectColumnFamiliesForAtomicFlush()` and `AtomicFlushMemTables()` for clearer semantics of the CFDs passed to atomic flush.
Pull Request resolved: https://github.com/facebook/rocksdb/pull/11148
Test Plan:
- New unit test failed before the fix and passes after
- Make check
- Rehearsal stress test
Reviewed By: ajkr
Differential Revision: D42799871
Pulled By: hx235
fbshipit-source-id: 13636b63e9c25c5895857afc36ea580d57f6d644
2023-03-14 23:53:20 +00:00
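
// A rough sketch (simplified) of the interleaving described above:
//
//   GetLiveFiles(..., flush=true)
//     SelectColumnFamiliesForAtomicFlush()   // empty default CF excluded
//     AtomicFlushMemTables(picked CFDs)
//       WaitUntilFlushWouldNotStallWrites()  // db mutex released here
//         writer: Put(default CF)@seqno N    // lands in the excluded CF
//         writer: Put(cf1)@seqno M, M > N    // raises the flush's max seqno
//       flush only the picked CFDs           // seqno N stays unflushed
//
// With the fix, the column families are selected after the wait inside
// AtomicFlushMemTables(), so the no-longer-empty default CF is included.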

TEST_F(
    DBFlushTest,
    FixUnrecoverableWriteDuringAtomicFlushWaitUntilFlushWouldNotStallWrites) {
  Options options = CurrentOptions();
  options.atomic_flush = true;

  // To simulate a real-life crash where we can't flush during db's shutdown
  options.avoid_flush_during_shutdown = true;

  // Set 3 low thresholds (while `disable_auto_compactions=false`) here so the
  // flush that adds one more L0 file during `GetLiveFiles()` has to wait until
  // it would no longer stall writes
  options.level0_stop_writes_trigger = 2;
  options.level0_slowdown_writes_trigger = 2;
  // Disable level-0 compaction triggered by the number of files so that the
  // stalling check is not skipped (otherwise the flush mentioned above would
  // not wait)
  options.level0_file_num_compaction_trigger = -1;

  CreateAndReopenWithCF({"cf1"}, options);

  // Manually pause the compaction thread so that enough L0 files accumulate,
  // since `disable_auto_compactions=false` is needed in order to meet the 3
  // low thresholds above
  std::unique_ptr<test::SleepingBackgroundTask> sleeping_task_;
  sleeping_task_.reset(new test::SleepingBackgroundTask());
  env_->SetBackgroundThreads(1, Env::LOW);
  env_->Schedule(&test::SleepingBackgroundTask::DoSleepTask,
                 sleeping_task_.get(), Env::Priority::LOW);
  sleeping_task_->WaitUntilSleeping();

  // Create an initial L0 file to help meet the 3 low thresholds above
  ASSERT_OK(Put(1, "dontcare", "dontcare"));
  ASSERT_OK(Flush(1));

  // Insert some initial data so we have something to atomic-flush later,
  // triggered by `GetLiveFiles()`
  WriteOptions write_opts;
  write_opts.disableWAL = true;
  ASSERT_OK(Put(1, "k1", "v1", write_opts));

  ROCKSDB_NAMESPACE::SyncPoint::GetInstance()->LoadDependency({{
      "DBImpl::WaitUntilFlushWouldNotStallWrites:StallWait",
      "DBFlushTest::"
      "UnrecoverableWriteInAtomicFlushWaitUntilFlushWouldNotStallWrites::Write",
  }});
  ROCKSDB_NAMESPACE::SyncPoint::GetInstance()->EnableProcessing();

  // Write to the db when the atomic flush releases the lock to wait for the
  // write stall condition to clear in `WaitUntilFlushWouldNotStallWrites()`
  port::Thread write_thread([&] {
    TEST_SYNC_POINT(
        "DBFlushTest::"
        "UnrecoverableWriteInAtomicFlushWaitUntilFlushWouldNotStallWrites::"
        "Write");
    // Before the fix, the empty default CF would've been prematurely excluded
    // from this atomic flush. The following two writes together make the
    // default CF later contain data that should've been included in the
    // atomic flush.
    ASSERT_OK(Put(0, "k2", "v2", write_opts));
    // The following write increases the max seqno of this atomic flush to 3,
    // which is greater than the seqno of the default CF's data. This then
    // violates the invariant that all entries of seqno less than the max
    // seqno of this atomic flush should've been flushed by the time this
    // atomic flush finishes.
    ASSERT_OK(Put(1, "k3", "v3", write_opts));

    // Resume the compaction thread and reduce the number of L0 files so
    // `GetLiveFiles()` can resume from the wait
    sleeping_task_->WakeUp();
    sleeping_task_->WaitUntilDone();
    MoveFilesToLevel(1, 1);
  });

  // Trigger an atomic flush by `GetLiveFiles()`
  std::vector<std::string> files;
  uint64_t manifest_file_size;
  ASSERT_OK(db_->GetLiveFiles(files, &manifest_file_size, /*flush*/ true));

  write_thread.join();

  ReopenWithColumnFamilies({"default", "cf1"}, options);

  ASSERT_EQ(Get(1, "k3"), "v3");
  // Prior to the fix, `Get()` would return `NotFound` as the "k2" entry in the
  // default CF can't be recovered from a crash right after the atomic flush
  // finishes, resulting in a "recovery hole" since "k3" can be recovered. It's
  // due to the invariant violation described above.
  ASSERT_EQ(Get(0, "k2"), "v2");
  ROCKSDB_NAMESPACE::SyncPoint::GetInstance()->DisableProcessing();
}
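
// The following test exercises a race on the default CF's flush reason when
// the flush scheduled by GetLiveFiles() overlaps with a concurrent manual
// flush (see the comments inside the test body).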
TEST_F(DBFlushTest, FixFlushReasonRaceFromConcurrentFlushes) {
  Options options = CurrentOptions();
  options.atomic_flush = true;
  options.disable_auto_compactions = true;
  CreateAndReopenWithCF({"cf1"}, options);

  for (int idx = 0; idx < 1; ++idx) {
    ASSERT_OK(Put(0, Key(idx), std::string(1, 'v')));
    ASSERT_OK(Put(1, Key(idx), std::string(1, 'v')));
  }

  // To coerce a manual flush happening in the middle of GetLiveFiles's flush,
  // we need to pause the background flush thread and enable it later.
  std::shared_ptr<test::SleepingBackgroundTask> sleeping_task =
      std::make_shared<test::SleepingBackgroundTask>();
  env_->SetBackgroundThreads(1, Env::HIGH);
  env_->Schedule(&test::SleepingBackgroundTask::DoSleepTask,
                 sleeping_task.get(), Env::Priority::HIGH);
  sleeping_task->WaitUntilSleeping();

  // Coerce a manual flush happening in the middle of GetLiveFiles's flush
  bool get_live_files_paused_at_sync_point = false;
  ROCKSDB_NAMESPACE::SyncPoint::GetInstance()->SetCallBack(
      "DBImpl::AtomicFlushMemTables:AfterScheduleFlush", [&](void* /* arg */) {
        if (get_live_files_paused_at_sync_point) {
          // To prevent non-GetLiveFiles() flushes from pausing at this sync
          // point
          return;
        }
        get_live_files_paused_at_sync_point = true;

        FlushOptions fo;
        fo.wait = false;
        fo.allow_write_stall = true;
        ASSERT_OK(dbfull()->Flush(fo));

        // Resume the background flush thread so GetLiveFiles() can finish
        sleeping_task->WakeUp();
        sleeping_task->WaitUntilDone();
      });
  ROCKSDB_NAMESPACE::SyncPoint::GetInstance()->EnableProcessing();

  std::vector<std::string> files;
  uint64_t manifest_file_size;
  // Before the fix, a race condition on the default CF's flush reason, caused
  // by the concurrent GetLiveFiles() flush and manual flush, would fail an
  // internal assertion.
  // After the fix, there is no such race and no assertion failure.
  ASSERT_OK(db_->GetLiveFiles(files, &manifest_file_size, /*flush*/ true));
  ASSERT_TRUE(get_live_files_paused_at_sync_point);

  ROCKSDB_NAMESPACE::SyncPoint::GetInstance()->ClearAllCallBacks();
  ROCKSDB_NAMESPACE::SyncPoint::GetInstance()->DisableProcessing();
}

Memtable "MemPurge" prototype (#8454)
Summary:
Implement an experimental feature called "MemPurge", which consists in purging "garbage" bytes out of a memtable and reusing the memtable struct instead of making it immutable and eventually flushing its content to storage.
The prototype is by default deactivated and is not intended for use. It is intended for correctness and validation testing. At the moment, the "MemPurge" feature can be switched on by using the `options.experimental_allow_mempurge` flag. For this early stage, when the allow_mempurge flag is set to `true`, all the flush operations will be rerouted to perform a MemPurge. This is a temporary design decision that will give us the time to explore meaningful heuristics to use MemPurge at the right time for relevant workloads. Moreover, the current MemPurge operation only supports `Puts`, `Deletes`, `DeleteRange` operations, and handles `Iterators` as well as `CompactionFilter`s that are invoked at flush time.
Three unit tests are added to `db_flush_test.cc` to test if MemPurge works correctly (and to check that the previously mentioned operations are fully supported and thoroughly tested).
One noticeable design decision is the timing of the MemPurge operation in the memtable workflow: for this prototype, the mempurge happens when the memtable is switched (and usually made immutable). This is an inefficient process because it implies that the entirety of the MemPurge operation happens while holding the db_mutex. Future commits will make the MemPurge operation a background task (akin to the regular flush operation) and aim at drastically enhancing the performance of this operation. The MemPurge is also not fully "WAL-compatible" yet, but when the WAL is full, or when the regular MemPurge operation fails (or when the purged memtable still needs to be flushed), a regular flush operation takes place. Later commits will also correct these behaviors.
Pull Request resolved: https://github.com/facebook/rocksdb/pull/8454
Reviewed By: anand1976
Differential Revision: D29433971
Pulled By: bjlemaire
fbshipit-source-id: 6af48213554e35048a7e03816955100a80a26dc5
2021-07-02 12:22:03 +00:00

Make mempurge a background process (equivalent to in-memory compaction). (#8505)
Summary:
In https://github.com/facebook/rocksdb/issues/8454, I introduced a new process baptized `MemPurge` (memtable garbage collection). This new PR is built upon this past mempurge prototype.
In this PR, I made the `mempurge` process a background task, which provides superior performance since the mempurge process does not cling to the db_mutex anymore, and addresses severe restrictions from the past iteration (including a scenario where the past mempurge was failing, when a memtable was mempurged but was still referred to by an iterator/snapshot/...).
Now the mempurge process resembles an in-memory compaction process: the stack of immutable memtables is filtered out, and the useful payload is used to populate an output memtable. If the output memtable is filled at more than 60% capacity (arbitrary heuristic) the mempurge process is aborted and a regular flush process takes place, else the output memtable is kept in the immutable memtable stack. Note that adding this output memtable to the `imm()` memtable stack does not trigger another flush process, so that the flush thread can go to sleep at the end of a successful mempurge.
MemPurge is activated by making the `experimental_allow_mempurge` flag `true`. When activated, the `MemPurge` process will always happen when the flush reason is `kWriteBufferFull`.
The 3 unit tests confirm that this process supports `Put`, `Get`, `Delete`, `DeleteRange` operators and is compatible with `Iterators` and `CompactionFilters`.
Pull Request resolved: https://github.com/facebook/rocksdb/pull/8505
Reviewed By: pdillinger
Differential Revision: D29619283
Pulled By: bjlemaire
fbshipit-source-id: 8a99bee76b63a8211bff1a00e0ae32360aaece95
2021-07-10 00:16:00 +00:00
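
// A condensed sketch (simplified) of the MemPurge flow described in the two
// changes above: when mempurge is enabled and a flush is triggered by a full
// write buffer, the immutable memtables are filtered in memory and the useful
// payload is written into a fresh output memtable; if that output fills up
// beyond a capacity heuristic, the attempt is abandoned and a regular flush
// to an SST file takes place instead. In a test, enabling it looks like the
// option settings used below, e.g.:
//
//   Options opts = CurrentOptions();
//   opts.experimental_mempurge_threshold = 1.0;  // route eligible flushes
//                                                // through MemPurge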

TEST_F(DBFlushTest, MemPurgeBasic) {
  Options options = CurrentOptions();

  // The following options are used to enforce several values that
  // may already exist as default values to make this test resilient
  // to default value updates in the future.
  options.statistics = CreateDBStatistics();

  // Record all statistics.
  options.statistics->set_stats_level(StatsLevel::kAll);

  // create the DB if it's not already present
  options.create_if_missing = true;

  // Useful for now as we are trying to compare uncompressed data savings on
  // flush().
  options.compression = kNoCompression;

  // Prevent memtable in place updates. Should already be disabled
  // (from Wiki:
  //  In place updates can be enabled by toggling on the bool
  //  inplace_update_support flag. However, this flag is by default set to
  //  false because this thread-safe in-place update support is not compatible
  //  with concurrent memtable writes. Note that the bool
  //  allow_concurrent_memtable_write is set to true by default).
  options.inplace_update_support = false;
  options.allow_concurrent_memtable_write = true;

  // Enforce the size of a single MemTable to 1MB (1MB = 1048576 bytes).
  options.write_buffer_size = 1 << 20;
  // Initially deactivate the MemPurge prototype.
  options.experimental_mempurge_threshold = 0.0;
  TestFlushListener* listener = new TestFlushListener(options.env, this);
  options.listeners.emplace_back(listener);
  ASSERT_OK(TryReopen(options));

  // RocksDB lite does not support dynamic options
  // Dynamically activate the MemPurge prototype without restarting the DB.
  ColumnFamilyHandle* cfh = db_->DefaultColumnFamily();
  ASSERT_OK(db_->SetOptions(cfh, {{"experimental_mempurge_threshold", "1.0"}}));
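
  // `experimental_mempurge_threshold` is a mutable column family option, so
  // the string-map form of DB::SetOptions() above takes effect immediately,
  // without reopening the DB.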

  std::atomic<uint32_t> mempurge_count{0};
  std::atomic<uint32_t> sst_count{0};
  ROCKSDB_NAMESPACE::SyncPoint::GetInstance()->SetCallBack(
      "DBImpl::FlushJob:MemPurgeSuccessful",
      [&](void* /*arg*/) { mempurge_count++; });
  ROCKSDB_NAMESPACE::SyncPoint::GetInstance()->SetCallBack(
      "DBImpl::FlushJob:SSTFileCreated", [&](void* /*arg*/) { sst_count++; });
  ROCKSDB_NAMESPACE::SyncPoint::GetInstance()->EnableProcessing();
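
  // The two sync points above are emitted from the flush code path: one when
  // a flush is served by an in-memory MemPurge, and one when a flush writes
  // an SST file. The counters let the test verify that the workload below is
  // absorbed by MemPurge without creating any SST file.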

  std::string KEY1 = "IamKey1";
  std::string KEY2 = "IamKey2";
  std::string KEY3 = "IamKey3";
  std::string KEY4 = "IamKey4";
  std::string KEY5 = "IamKey5";
  std::string KEY6 = "IamKey6";
  std::string KEY7 = "IamKey7";
  std::string KEY8 = "IamKey8";
  std::string KEY9 = "IamKey9";
  std::string RNDKEY1, RNDKEY2, RNDKEY3;
  const std::string NOT_FOUND = "NOT_FOUND";

  // Heavy overwrite workload,
  // more than would fit in maximum allowed memtables.
  Random rnd(719);
  const size_t NUM_REPEAT = 100;
  const size_t RAND_KEYS_LENGTH = 57;
  const size_t RAND_VALUES_LENGTH = 10240;
  std::string p_v1, p_v2, p_v3, p_v4, p_v5, p_v6, p_v7, p_v8, p_v9, p_rv1,
      p_rv2, p_rv3;

  // Insert a very first set of keys that will be
  // mempurged at least once.
  p_v1 = rnd.RandomString(RAND_VALUES_LENGTH);
  p_v2 = rnd.RandomString(RAND_VALUES_LENGTH);
  p_v3 = rnd.RandomString(RAND_VALUES_LENGTH);
  p_v4 = rnd.RandomString(RAND_VALUES_LENGTH);
  ASSERT_OK(Put(KEY1, p_v1));
  ASSERT_OK(Put(KEY2, p_v2));
  ASSERT_OK(Put(KEY3, p_v3));
  ASSERT_OK(Put(KEY4, p_v4));
  ASSERT_EQ(Get(KEY1), p_v1);
  ASSERT_EQ(Get(KEY2), p_v2);
  ASSERT_EQ(Get(KEY3), p_v3);
  ASSERT_EQ(Get(KEY4), p_v4);
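
  // Rough sizing note: each iteration below overwrites 5 keys with ~10KB
  // values (~50KB per iteration), so 100 iterations write roughly 5MB into a
  // 1MB write buffer. This forces several memtable switches, giving MemPurge
  // repeated opportunities to collect the overwritten values in memory.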
  // Insertion of K-V pairs, multiple times (overwrites).
  for (size_t i = 0; i < NUM_REPEAT; i++) {
    // Create value strings of length RAND_VALUES_LENGTH bytes.
    p_v5 = rnd.RandomString(RAND_VALUES_LENGTH);
    p_v6 = rnd.RandomString(RAND_VALUES_LENGTH);
    p_v7 = rnd.RandomString(RAND_VALUES_LENGTH);
    p_v8 = rnd.RandomString(RAND_VALUES_LENGTH);
    p_v9 = rnd.RandomString(RAND_VALUES_LENGTH);

    ASSERT_OK(Put(KEY5, p_v5));
    ASSERT_OK(Put(KEY6, p_v6));
    ASSERT_OK(Put(KEY7, p_v7));
    ASSERT_OK(Put(KEY8, p_v8));
    ASSERT_OK(Put(KEY9, p_v9));

    ASSERT_EQ(Get(KEY1), p_v1);
    ASSERT_EQ(Get(KEY2), p_v2);
    ASSERT_EQ(Get(KEY3), p_v3);
    ASSERT_EQ(Get(KEY4), p_v4);
    ASSERT_EQ(Get(KEY5), p_v5);
    ASSERT_EQ(Get(KEY6), p_v6);
    ASSERT_EQ(Get(KEY7), p_v7);
    ASSERT_EQ(Get(KEY8), p_v8);
    ASSERT_EQ(Get(KEY9), p_v9);
  }

  // Check that there was at least one mempurge.
  const uint32_t EXPECTED_MIN_MEMPURGE_COUNT = 1;
  // Check that no SST files were created during flush.
  const uint32_t EXPECTED_SST_COUNT = 0;

Memtable "MemPurge" prototype (#8454)
Summary:
Implement an experimental feature called "MemPurge", which consists in purging "garbage" bytes out of a memtable and reuse the memtable struct instead of making it immutable and eventually flushing its content to storage.
The prototype is by default deactivated and is not intended for use. It is intended for correctness and validation testing. At the moment, the "MemPurge" feature can be switched on by using the `options.experimental_allow_mempurge` flag. For this early stage, when the allow_mempurge flag is set to `true`, all the flush operations will be rerouted to perform a MemPurge. This is a temporary design decision that will give us the time to explore meaningful heuristics to use MemPurge at the right time for relevant workloads . Moreover, the current MemPurge operation only supports `Puts`, `Deletes`, `DeleteRange` operations, and handles `Iterators` as well as `CompactionFilter`s that are invoked at flush time .
Three unit tests are added to `db_flush_test.cc` to test if MemPurge works correctly (and checks that the previously mentioned operations are fully supported thoroughly tested).
One noticeable design decision is the timing of the MemPurge operation in the memtable workflow: for this prototype, the mempurge happens when the memtable is switched (and usually made immutable). This is an inefficient process because it implies that the entirety of the MemPurge operation happens while holding the db_mutex. Future commits will make the MemPurge operation a background task (akin to the regular flush operation) and aim at drastically enhancing the performance of this operation. The MemPurge is also not fully "WAL-compatible" yet, but when the WAL is full, or when the regular MemPurge operation fails (or when the purged memtable still needs to be flushed), a regular flush operation takes place. Later commits will also correct these behaviors.
Pull Request resolved: https://github.com/facebook/rocksdb/pull/8454
Reviewed By: anand1976
Differential Revision: D29433971
Pulled By: bjlemaire
fbshipit-source-id: 6af48213554e35048a7e03816955100a80a26dc5
2021-07-02 12:22:03 +00:00
|
|
|
|
2021-11-03 04:53:23 +00:00
|
|
|
EXPECT_GE(mempurge_count.exchange(0), EXPECTED_MIN_MEMPURGE_COUNT);
|
|
|
|
EXPECT_EQ(sst_count.exchange(0), EXPECTED_SST_COUNT);
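  // Note: std::atomic<>::exchange(0) returns the previous value and resets the
  // counter, so each phase of this test counts mempurges and SST creations
  // independently, starting from zero.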

  // Insertion of K-V pairs, no overwrites.
  for (size_t i = 0; i < NUM_REPEAT; i++) {
    // Create random keys (RAND_KEYS_LENGTH bytes) and value strings of
    // arbitrary length (RAND_VALUES_LENGTH bytes).
    RNDKEY1 = rnd.RandomString(RAND_KEYS_LENGTH);
    RNDKEY2 = rnd.RandomString(RAND_KEYS_LENGTH);
    RNDKEY3 = rnd.RandomString(RAND_KEYS_LENGTH);
    p_rv1 = rnd.RandomString(RAND_VALUES_LENGTH);
    p_rv2 = rnd.RandomString(RAND_VALUES_LENGTH);
    p_rv3 = rnd.RandomString(RAND_VALUES_LENGTH);

    ASSERT_OK(Put(RNDKEY1, p_rv1));
    ASSERT_OK(Put(RNDKEY2, p_rv2));
    ASSERT_OK(Put(RNDKEY3, p_rv3));

    ASSERT_EQ(Get(KEY1), p_v1);
    ASSERT_EQ(Get(KEY2), p_v2);
    ASSERT_EQ(Get(KEY3), p_v3);
    ASSERT_EQ(Get(KEY4), p_v4);
    ASSERT_EQ(Get(KEY5), p_v5);
    ASSERT_EQ(Get(KEY6), p_v6);
    ASSERT_EQ(Get(KEY7), p_v7);
    ASSERT_EQ(Get(KEY8), p_v8);
    ASSERT_EQ(Get(KEY9), p_v9);
    ASSERT_EQ(Get(RNDKEY1), p_rv1);
    ASSERT_EQ(Get(RNDKEY2), p_rv2);
    ASSERT_EQ(Get(RNDKEY3), p_rv3);
  }

  // Assert that at least one flush to storage has been performed
  EXPECT_GT(sst_count.exchange(0), EXPECTED_SST_COUNT);
  // (which will consequently increase the number of mempurges recorded too).
  EXPECT_GE(mempurge_count.exchange(0), EXPECTED_MIN_MEMPURGE_COUNT);

  // Assert that there is no data corruption, even with
  // a flush to storage.
  ASSERT_EQ(Get(KEY1), p_v1);
  ASSERT_EQ(Get(KEY2), p_v2);
  ASSERT_EQ(Get(KEY3), p_v3);
  ASSERT_EQ(Get(KEY4), p_v4);
  ASSERT_EQ(Get(KEY5), p_v5);
  ASSERT_EQ(Get(KEY6), p_v6);
  ASSERT_EQ(Get(KEY7), p_v7);
  ASSERT_EQ(Get(KEY8), p_v8);
  ASSERT_EQ(Get(KEY9), p_v9);
  ASSERT_EQ(Get(RNDKEY1), p_rv1);
  ASSERT_EQ(Get(RNDKEY2), p_rv2);
  ASSERT_EQ(Get(RNDKEY3), p_rv3);

  Close();
}
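
// Illustrative sketch (not part of the test suite): an application opts in to
// mempurge by setting the experimental threshold on its Options before opening
// the DB. The path and the threshold value below are arbitrary examples.
//
//   ROCKSDB_NAMESPACE::Options opts;
//   opts.create_if_missing = true;
//   // A strictly positive threshold enables the mempurge heuristic
//   // (negative values are equivalent to 0.0, i.e. disabled).
//   opts.experimental_mempurge_threshold = 1.0;
//   ROCKSDB_NAMESPACE::DB* db = nullptr;
//   ROCKSDB_NAMESPACE::Status s =
//       ROCKSDB_NAMESPACE::DB::Open(opts, "/tmp/mempurge_example", &db);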

// RocksDB lite does not support dynamic options
TEST_F(DBFlushTest, MemPurgeBasicToggle) {
  Options options = CurrentOptions();

  // The following options are used to enforce several values that
  // may already exist as default values to make this test resilient
  // to default value updates in the future.
  options.statistics = CreateDBStatistics();

  // Record all statistics.
  options.statistics->set_stats_level(StatsLevel::kAll);

  // create the DB if it's not already present
  options.create_if_missing = true;

  // Useful for now as we are trying to compare uncompressed data savings on
  // flush().
  options.compression = kNoCompression;

  // Prevent memtable in place updates. Should already be disabled
  // (from Wiki:
  //  In place updates can be enabled by toggling on the bool
  //  inplace_update_support flag. However, this flag is by default set to
  //  false because this thread-safe in-place update support is not
  //  compatible with concurrent memtable writes. Note that the bool
  //  allow_concurrent_memtable_write is set to true by default).
  options.inplace_update_support = false;
  options.allow_concurrent_memtable_write = true;

  // Enforce size of a single MemTable to 1MB (1MB = 1,048,576 bytes).
  options.write_buffer_size = 1 << 20;
  // Initially deactivate the MemPurge prototype
  // (negative values are equivalent to 0.0).
  options.experimental_mempurge_threshold = -25.3;
  TestFlushListener* listener = new TestFlushListener(options.env, this);
  options.listeners.emplace_back(listener);

  ASSERT_OK(TryReopen(options));
  // Dynamically activate the MemPurge prototype without restarting the DB.
  ColumnFamilyHandle* cfh = db_->DefaultColumnFamily();
  // Values greater than 1.0 are equivalent to 1.0.
  ASSERT_OK(
      db_->SetOptions(cfh, {{"experimental_mempurge_threshold", "3.7898"}}));

  std::atomic<uint32_t> mempurge_count{0};
  std::atomic<uint32_t> sst_count{0};
  ROCKSDB_NAMESPACE::SyncPoint::GetInstance()->SetCallBack(
      "DBImpl::FlushJob:MemPurgeSuccessful",
      [&](void* /*arg*/) { mempurge_count++; });
  ROCKSDB_NAMESPACE::SyncPoint::GetInstance()->SetCallBack(
      "DBImpl::FlushJob:SSTFileCreated", [&](void* /*arg*/) { sst_count++; });
  ROCKSDB_NAMESPACE::SyncPoint::GetInstance()->EnableProcessing();
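  // The two sync points above fire inside the background flush job: one each
  // time a mempurge completes successfully, and one each time an SST file is
  // created. The counters therefore record which path each flush actually
  // took.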

  const size_t KVSIZE = 3;
  std::vector<std::string> KEYS(KVSIZE);
  for (size_t k = 0; k < KVSIZE; k++) {
    KEYS[k] = "IamKey" + std::to_string(k);
  }

  std::vector<std::string> RNDVALS(KVSIZE);
  const std::string NOT_FOUND = "NOT_FOUND";

  // Heavy overwrite workload,
  // more than would fit in maximum allowed memtables.
  Random rnd(719);
  const size_t NUM_REPEAT = 100;
  const size_t RAND_VALUES_LENGTH = 10240;

  // Insertion of K-V pairs, multiple times (overwrites).
  for (size_t i = 0; i < NUM_REPEAT; i++) {
    for (size_t j = 0; j < KEYS.size(); j++) {
      RNDVALS[j] = rnd.RandomString(RAND_VALUES_LENGTH);
      ASSERT_OK(Put(KEYS[j], RNDVALS[j]));
      ASSERT_EQ(Get(KEYS[j]), RNDVALS[j]);
    }
    for (size_t j = 0; j < KEYS.size(); j++) {
      ASSERT_EQ(Get(KEYS[j]), RNDVALS[j]);
    }
  }

  // Check that there was at least one mempurge.
  const uint32_t EXPECTED_MIN_MEMPURGE_COUNT = 1;
  // Check that no SST files were created during flush.
  const uint32_t EXPECTED_SST_COUNT = 0;

  EXPECT_GE(mempurge_count.exchange(0), EXPECTED_MIN_MEMPURGE_COUNT);
  EXPECT_EQ(sst_count.exchange(0), EXPECTED_SST_COUNT);

  // Dynamically deactivate MemPurge.
  ASSERT_OK(
      db_->SetOptions(cfh, {{"experimental_mempurge_threshold", "-1023.0"}}));

  // Insertion of K-V pairs, multiple times (overwrites).
  for (size_t i = 0; i < NUM_REPEAT; i++) {
    for (size_t j = 0; j < KEYS.size(); j++) {
      RNDVALS[j] = rnd.RandomString(RAND_VALUES_LENGTH);
      ASSERT_OK(Put(KEYS[j], RNDVALS[j]));
      ASSERT_EQ(Get(KEYS[j]), RNDVALS[j]);
    }
    for (size_t j = 0; j < KEYS.size(); j++) {
      ASSERT_EQ(Get(KEYS[j]), RNDVALS[j]);
    }
  }

  // Check that there was no mempurge in this phase.
  const uint32_t ZERO = 0;
  // Assert that at least one flush to storage has been performed.
  EXPECT_GT(sst_count.exchange(0), EXPECTED_SST_COUNT);
  // The mempurge count is expected to be set to 0 when the options are updated.
  // We expect no mempurge at all.
  EXPECT_EQ(mempurge_count.exchange(0), ZERO);

  Close();
}
// End of MemPurgeBasicToggle, which is not
// supported with RocksDB LITE because it
// relies on dynamically changing the option
// flag experimental_mempurge_threshold.
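
// Illustrative sketch (not part of the test suite): the same toggle can be
// performed at runtime from application code through DB::SetOptions(), with
// string-encoded values and no DB restart. The threshold values are examples
// only.
//
//   ROCKSDB_NAMESPACE::ColumnFamilyHandle* cf = db->DefaultColumnFamily();
//   // Enable mempurge:
//   s = db->SetOptions(cf, {{"experimental_mempurge_threshold", "1.0"}});
//   // Disable it again:
//   s = db->SetOptions(cf, {{"experimental_mempurge_threshold", "0.0"}});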

// At the moment, the MemPurge feature is deactivated
// when atomic_flush is enabled. This is because the level
// of garbage between Column Families is not guaranteed to
// be consistent, therefore a CF could hypothetically
// trigger a MemPurge while another CF would trigger
// a regular Flush.
TEST_F(DBFlushTest, MemPurgeWithAtomicFlush) {
  Options options = CurrentOptions();

  // The following options are used to enforce several values that
  // may already exist as default values to make this test resilient
  // to default value updates in the future.
  options.statistics = CreateDBStatistics();

  // Record all statistics.
  options.statistics->set_stats_level(StatsLevel::kAll);

  // create the DB if it's not already present
  options.create_if_missing = true;

  // Useful for now as we are trying to compare uncompressed data savings on
  // flush().
  options.compression = kNoCompression;

  // Prevent memtable in place updates. Should already be disabled
  // (from Wiki:
  //  In place updates can be enabled by toggling on the bool
  //  inplace_update_support flag. However, this flag is by default set to
  //  false because this thread-safe in-place update support is not
  //  compatible with concurrent memtable writes. Note that the bool
  //  allow_concurrent_memtable_write is set to true by default).
  options.inplace_update_support = false;
  options.allow_concurrent_memtable_write = true;

  // Enforce size of a single MemTable to 1MB (1MB = 1,048,576 bytes).
  options.write_buffer_size = 1 << 20;
  // Activate the MemPurge prototype.
  options.experimental_mempurge_threshold = 153.245;
  // Activate atomic_flush.
  options.atomic_flush = true;
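  // With atomic_flush enabled, the mempurge path is expected to be bypassed:
  // the checks at the end of this test assert that the flushes produced SST
  // files and that no mempurge was recorded.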

  const std::vector<std::string> new_cf_names = {"pikachu", "eevie"};
  CreateColumnFamilies(new_cf_names, options);

  Close();

  // 3 CFs: the default CF is filled with overwrites (which would normally
  //        trigger mempurge); the first new CF (new_cf_names[0]) is filled
  //        with random values (which would trigger a flush); the second new
  //        CF (new_cf_names[1]) is not filled with anything.
  ReopenWithColumnFamilies(
      {kDefaultColumnFamilyName, new_cf_names[0], new_cf_names[1]}, options);
  size_t num_cfs = handles_.size();
  ASSERT_EQ(3, num_cfs);
  ASSERT_OK(Put(1, "foo", "bar"));
  ASSERT_OK(Put(2, "bar", "baz"));

  std::atomic<uint32_t> mempurge_count{0};
  std::atomic<uint32_t> sst_count{0};
  ROCKSDB_NAMESPACE::SyncPoint::GetInstance()->SetCallBack(
      "DBImpl::FlushJob:MemPurgeSuccessful",
      [&](void* /*arg*/) { mempurge_count++; });
  ROCKSDB_NAMESPACE::SyncPoint::GetInstance()->SetCallBack(
      "DBImpl::FlushJob:SSTFileCreated", [&](void* /*arg*/) { sst_count++; });
  ROCKSDB_NAMESPACE::SyncPoint::GetInstance()->EnableProcessing();

  const size_t KVSIZE = 3;
  std::vector<std::string> KEYS(KVSIZE);
  for (size_t k = 0; k < KVSIZE; k++) {
    KEYS[k] = "IamKey" + std::to_string(k);
  }

  std::string RNDKEY;
  std::vector<std::string> RNDVALS(KVSIZE);
  const std::string NOT_FOUND = "NOT_FOUND";

  // Heavy overwrite workload,
  // more than would fit in maximum allowed memtables.
  Random rnd(106);
  const size_t NUM_REPEAT = 100;
  const size_t RAND_KEY_LENGTH = 128;
  const size_t RAND_VALUES_LENGTH = 10240;

  // Insertion of K-V pairs, multiple times (overwrites).
  for (size_t i = 0; i < NUM_REPEAT; i++) {
    for (size_t j = 0; j < KEYS.size(); j++) {
      RNDKEY = rnd.RandomString(RAND_KEY_LENGTH);
      RNDVALS[j] = rnd.RandomString(RAND_VALUES_LENGTH);
      ASSERT_OK(Put(KEYS[j], RNDVALS[j]));
      ASSERT_OK(Put(1, RNDKEY, RNDVALS[j]));
      ASSERT_EQ(Get(KEYS[j]), RNDVALS[j]);
      ASSERT_EQ(Get(1, RNDKEY), RNDVALS[j]);
    }
  }

  // Check that there was no mempurge because the atomic_flush option is true.
  const uint32_t EXPECTED_MIN_MEMPURGE_COUNT = 0;
  // Check that at least one SST file was created during flush.
  const uint32_t EXPECTED_SST_COUNT = 1;

  EXPECT_EQ(mempurge_count.exchange(0), EXPECTED_MIN_MEMPURGE_COUNT);
  EXPECT_GE(sst_count.exchange(0), EXPECTED_SST_COUNT);

  Close();
}

TEST_F(DBFlushTest, MemPurgeDeleteAndDeleteRange) {
  Options options = CurrentOptions();

  options.statistics = CreateDBStatistics();
  options.statistics->set_stats_level(StatsLevel::kAll);
  options.create_if_missing = true;
  options.compression = kNoCompression;
  options.inplace_update_support = false;
  options.allow_concurrent_memtable_write = true;
  TestFlushListener* listener = new TestFlushListener(options.env, this);
  options.listeners.emplace_back(listener);
  // Enforce size of a single MemTable to 1MB (1MB = 1,048,576 bytes).
  options.write_buffer_size = 1 << 20;
  // Activate the MemPurge prototype.
  options.experimental_mempurge_threshold = 15.0;

  ASSERT_OK(TryReopen(options));

  std::atomic<uint32_t> mempurge_count{0};
  std::atomic<uint32_t> sst_count{0};
  ROCKSDB_NAMESPACE::SyncPoint::GetInstance()->SetCallBack(
      "DBImpl::FlushJob:MemPurgeSuccessful",
      [&](void* /*arg*/) { mempurge_count++; });
  ROCKSDB_NAMESPACE::SyncPoint::GetInstance()->SetCallBack(
      "DBImpl::FlushJob:SSTFileCreated", [&](void* /*arg*/) { sst_count++; });
  ROCKSDB_NAMESPACE::SyncPoint::GetInstance()->EnableProcessing();

  std::string KEY1 = "ThisIsKey1";
  std::string KEY2 = "ThisIsKey2";
  std::string KEY3 = "ThisIsKey3";
  std::string KEY4 = "ThisIsKey4";
  std::string KEY5 = "ThisIsKey5";
  const std::string NOT_FOUND = "NOT_FOUND";

  Random rnd(117);
  const size_t NUM_REPEAT = 100;
  const size_t RAND_VALUES_LENGTH = 10240;

  std::string key, value, p_v1, p_v2, p_v3, p_v3b, p_v4, p_v5;
  int count = 0;
  const int EXPECTED_COUNT_FORLOOP = 3;
  const int EXPECTED_COUNT_END = 4;

  ReadOptions ropt;
  ropt.pin_data = true;
  ropt.total_order_seek = true;
  Iterator* iter = nullptr;
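  // EXPECTED_COUNT_FORLOOP is the number of keys expected to remain visible to
  // the iterator inside the loop below (KEY3, KEY4 and KEY5, once KEY1 and
  // KEY2 have been deleted); EXPECTED_COUNT_END is the corresponding count
  // checked after the loop ends.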

  // Insertion of K-V pairs, multiple times.
  // Also insert Delete and DeleteRange operations.
  for (size_t i = 0; i < NUM_REPEAT; i++) {
    // Create value strings of arbitrary length RAND_VALUES_LENGTH bytes.
    p_v1 = rnd.RandomString(RAND_VALUES_LENGTH);
    p_v2 = rnd.RandomString(RAND_VALUES_LENGTH);
    p_v3 = rnd.RandomString(RAND_VALUES_LENGTH);
    p_v3b = rnd.RandomString(RAND_VALUES_LENGTH);
    p_v4 = rnd.RandomString(RAND_VALUES_LENGTH);
    p_v5 = rnd.RandomString(RAND_VALUES_LENGTH);
    ASSERT_OK(Put(KEY1, p_v1));
    ASSERT_OK(Put(KEY2, p_v2));
    ASSERT_OK(Put(KEY3, p_v3));
    ASSERT_OK(Put(KEY4, p_v4));
    ASSERT_OK(Put(KEY5, p_v5));
    ASSERT_OK(Delete(KEY2));
    ASSERT_OK(db_->DeleteRange(WriteOptions(), db_->DefaultColumnFamily(), KEY2,
                               KEY4));
    ASSERT_OK(Put(KEY3, p_v3b));
    ASSERT_OK(db_->DeleteRange(WriteOptions(), db_->DefaultColumnFamily(), KEY1,
                               KEY3));
    ASSERT_OK(Delete(KEY1));
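    // Net effect of the sequence above: KEY1 and KEY2 are deleted (by the
    // Delete() calls and the [KEY2, KEY4) / [KEY1, KEY3) range deletions),
    // KEY3 holds p_v3b because it was re-put after the range deletion that
    // covered it, and KEY4 and KEY5 were never touched by a deletion.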
|
|
|
|
|
|
|
|
ASSERT_EQ(Get(KEY1), NOT_FOUND);
|
|
|
|
ASSERT_EQ(Get(KEY2), NOT_FOUND);
|
|
|
|
ASSERT_EQ(Get(KEY3), p_v3b);
|
|
|
|
ASSERT_EQ(Get(KEY4), p_v4);
|
|
|
|
ASSERT_EQ(Get(KEY5), p_v5);
|
|
|
|
|
|
|
|
iter = db_->NewIterator(ropt);
|
|
|
|
iter->SeekToFirst();
|
|
|
|
count = 0;
|
|
|
|
for (; iter->Valid(); iter->Next()) {
|
|
|
|
ASSERT_OK(iter->status());
|
|
|
|
key = (iter->key()).ToString(false);
|
|
|
|
value = (iter->value()).ToString(false);
|
2024-03-04 18:08:32 +00:00
|
|
|
if (key.compare(KEY3) == 0) {
|
Memtable "MemPurge" prototype (#8454)
Summary:
Implement an experimental feature called "MemPurge", which consists in purging "garbage" bytes out of a memtable and reuse the memtable struct instead of making it immutable and eventually flushing its content to storage.
The prototype is by default deactivated and is not intended for use. It is intended for correctness and validation testing. At the moment, the "MemPurge" feature can be switched on by using the `options.experimental_allow_mempurge` flag. For this early stage, when the allow_mempurge flag is set to `true`, all the flush operations will be rerouted to perform a MemPurge. This is a temporary design decision that will give us the time to explore meaningful heuristics to use MemPurge at the right time for relevant workloads . Moreover, the current MemPurge operation only supports `Puts`, `Deletes`, `DeleteRange` operations, and handles `Iterators` as well as `CompactionFilter`s that are invoked at flush time .
Three unit tests are added to `db_flush_test.cc` to test if MemPurge works correctly (and checks that the previously mentioned operations are fully supported thoroughly tested).
One noticeable design decision is the timing of the MemPurge operation in the memtable workflow: for this prototype, the mempurge happens when the memtable is switched (and usually made immutable). This is an inefficient process because it implies that the entirety of the MemPurge operation happens while holding the db_mutex. Future commits will make the MemPurge operation a background task (akin to the regular flush operation) and aim at drastically enhancing the performance of this operation. The MemPurge is also not fully "WAL-compatible" yet, but when the WAL is full, or when the regular MemPurge operation fails (or when the purged memtable still needs to be flushed), a regular flush operation takes place. Later commits will also correct these behaviors.
Pull Request resolved: https://github.com/facebook/rocksdb/pull/8454
Reviewed By: anand1976
Differential Revision: D29433971
Pulled By: bjlemaire
fbshipit-source-id: 6af48213554e35048a7e03816955100a80a26dc5
2021-07-02 12:22:03 +00:00
|
|
|
ASSERT_EQ(value, p_v3b);
|
2024-03-04 18:08:32 +00:00
|
|
|
} else if (key.compare(KEY4) == 0) {
|
Memtable "MemPurge" prototype (#8454)
Summary:
Implement an experimental feature called "MemPurge", which consists in purging "garbage" bytes out of a memtable and reuse the memtable struct instead of making it immutable and eventually flushing its content to storage.
The prototype is by default deactivated and is not intended for use. It is intended for correctness and validation testing. At the moment, the "MemPurge" feature can be switched on by using the `options.experimental_allow_mempurge` flag. For this early stage, when the allow_mempurge flag is set to `true`, all the flush operations will be rerouted to perform a MemPurge. This is a temporary design decision that will give us the time to explore meaningful heuristics to use MemPurge at the right time for relevant workloads . Moreover, the current MemPurge operation only supports `Puts`, `Deletes`, `DeleteRange` operations, and handles `Iterators` as well as `CompactionFilter`s that are invoked at flush time .
Three unit tests are added to `db_flush_test.cc` to test if MemPurge works correctly (and checks that the previously mentioned operations are fully supported thoroughly tested).
One noticeable design decision is the timing of the MemPurge operation in the memtable workflow: for this prototype, the mempurge happens when the memtable is switched (and usually made immutable). This is an inefficient process because it implies that the entirety of the MemPurge operation happens while holding the db_mutex. Future commits will make the MemPurge operation a background task (akin to the regular flush operation) and aim at drastically enhancing the performance of this operation. The MemPurge is also not fully "WAL-compatible" yet, but when the WAL is full, or when the regular MemPurge operation fails (or when the purged memtable still needs to be flushed), a regular flush operation takes place. Later commits will also correct these behaviors.
Pull Request resolved: https://github.com/facebook/rocksdb/pull/8454
Reviewed By: anand1976
Differential Revision: D29433971
Pulled By: bjlemaire
fbshipit-source-id: 6af48213554e35048a7e03816955100a80a26dc5
2021-07-02 12:22:03 +00:00
|
|
|
ASSERT_EQ(value, p_v4);
|
2024-03-04 18:08:32 +00:00
|
|
|
} else if (key.compare(KEY5) == 0) {
|
Memtable "MemPurge" prototype (#8454)
Summary:
Implement an experimental feature called "MemPurge", which consists in purging "garbage" bytes out of a memtable and reuse the memtable struct instead of making it immutable and eventually flushing its content to storage.
The prototype is by default deactivated and is not intended for use. It is intended for correctness and validation testing. At the moment, the "MemPurge" feature can be switched on by using the `options.experimental_allow_mempurge` flag. For this early stage, when the allow_mempurge flag is set to `true`, all the flush operations will be rerouted to perform a MemPurge. This is a temporary design decision that will give us the time to explore meaningful heuristics to use MemPurge at the right time for relevant workloads . Moreover, the current MemPurge operation only supports `Puts`, `Deletes`, `DeleteRange` operations, and handles `Iterators` as well as `CompactionFilter`s that are invoked at flush time .
Three unit tests are added to `db_flush_test.cc` to test if MemPurge works correctly (and checks that the previously mentioned operations are fully supported thoroughly tested).
One noticeable design decision is the timing of the MemPurge operation in the memtable workflow: for this prototype, the mempurge happens when the memtable is switched (and usually made immutable). This is an inefficient process because it implies that the entirety of the MemPurge operation happens while holding the db_mutex. Future commits will make the MemPurge operation a background task (akin to the regular flush operation) and aim at drastically enhancing the performance of this operation. The MemPurge is also not fully "WAL-compatible" yet, but when the WAL is full, or when the regular MemPurge operation fails (or when the purged memtable still needs to be flushed), a regular flush operation takes place. Later commits will also correct these behaviors.
Pull Request resolved: https://github.com/facebook/rocksdb/pull/8454
Reviewed By: anand1976
Differential Revision: D29433971
Pulled By: bjlemaire
fbshipit-source-id: 6af48213554e35048a7e03816955100a80a26dc5
2021-07-02 12:22:03 +00:00
|
|
|
ASSERT_EQ(value, p_v5);
|
2024-03-04 18:08:32 +00:00
|
|
|
} else {
|
Memtable "MemPurge" prototype (#8454)
Summary:
Implement an experimental feature called "MemPurge", which consists in purging "garbage" bytes out of a memtable and reuse the memtable struct instead of making it immutable and eventually flushing its content to storage.
The prototype is by default deactivated and is not intended for use. It is intended for correctness and validation testing. At the moment, the "MemPurge" feature can be switched on by using the `options.experimental_allow_mempurge` flag. For this early stage, when the allow_mempurge flag is set to `true`, all the flush operations will be rerouted to perform a MemPurge. This is a temporary design decision that will give us the time to explore meaningful heuristics to use MemPurge at the right time for relevant workloads . Moreover, the current MemPurge operation only supports `Puts`, `Deletes`, `DeleteRange` operations, and handles `Iterators` as well as `CompactionFilter`s that are invoked at flush time .
Three unit tests are added to `db_flush_test.cc` to test if MemPurge works correctly (and checks that the previously mentioned operations are fully supported thoroughly tested).
One noticeable design decision is the timing of the MemPurge operation in the memtable workflow: for this prototype, the mempurge happens when the memtable is switched (and usually made immutable). This is an inefficient process because it implies that the entirety of the MemPurge operation happens while holding the db_mutex. Future commits will make the MemPurge operation a background task (akin to the regular flush operation) and aim at drastically enhancing the performance of this operation. The MemPurge is also not fully "WAL-compatible" yet, but when the WAL is full, or when the regular MemPurge operation fails (or when the purged memtable still needs to be flushed), a regular flush operation takes place. Later commits will also correct these behaviors.
Pull Request resolved: https://github.com/facebook/rocksdb/pull/8454
Reviewed By: anand1976
Differential Revision: D29433971
Pulled By: bjlemaire
fbshipit-source-id: 6af48213554e35048a7e03816955100a80a26dc5
2021-07-02 12:22:03 +00:00
|
|
|
ASSERT_EQ(value, NOT_FOUND);
|
2024-03-04 18:08:32 +00:00
|
|
|
}
|
Memtable "MemPurge" prototype (#8454)
Summary:
Implement an experimental feature called "MemPurge", which consists in purging "garbage" bytes out of a memtable and reuse the memtable struct instead of making it immutable and eventually flushing its content to storage.
The prototype is by default deactivated and is not intended for use. It is intended for correctness and validation testing. At the moment, the "MemPurge" feature can be switched on by using the `options.experimental_allow_mempurge` flag. For this early stage, when the allow_mempurge flag is set to `true`, all the flush operations will be rerouted to perform a MemPurge. This is a temporary design decision that will give us the time to explore meaningful heuristics to use MemPurge at the right time for relevant workloads . Moreover, the current MemPurge operation only supports `Puts`, `Deletes`, `DeleteRange` operations, and handles `Iterators` as well as `CompactionFilter`s that are invoked at flush time .
Three unit tests are added to `db_flush_test.cc` to test if MemPurge works correctly (and checks that the previously mentioned operations are fully supported thoroughly tested).
One noticeable design decision is the timing of the MemPurge operation in the memtable workflow: for this prototype, the mempurge happens when the memtable is switched (and usually made immutable). This is an inefficient process because it implies that the entirety of the MemPurge operation happens while holding the db_mutex. Future commits will make the MemPurge operation a background task (akin to the regular flush operation) and aim at drastically enhancing the performance of this operation. The MemPurge is also not fully "WAL-compatible" yet, but when the WAL is full, or when the regular MemPurge operation fails (or when the purged memtable still needs to be flushed), a regular flush operation takes place. Later commits will also correct these behaviors.
Pull Request resolved: https://github.com/facebook/rocksdb/pull/8454
Reviewed By: anand1976
Differential Revision: D29433971
Pulled By: bjlemaire
fbshipit-source-id: 6af48213554e35048a7e03816955100a80a26dc5
2021-07-02 12:22:03 +00:00
|
|
|
count++;
|
|
|
|
}
|
2023-10-18 16:38:38 +00:00
|
|
|
ASSERT_OK(iter->status());
|
Memtable "MemPurge" prototype (#8454)
Summary:
Implement an experimental feature called "MemPurge", which consists in purging "garbage" bytes out of a memtable and reuse the memtable struct instead of making it immutable and eventually flushing its content to storage.
The prototype is by default deactivated and is not intended for use. It is intended for correctness and validation testing. At the moment, the "MemPurge" feature can be switched on by using the `options.experimental_allow_mempurge` flag. For this early stage, when the allow_mempurge flag is set to `true`, all the flush operations will be rerouted to perform a MemPurge. This is a temporary design decision that will give us the time to explore meaningful heuristics to use MemPurge at the right time for relevant workloads . Moreover, the current MemPurge operation only supports `Puts`, `Deletes`, `DeleteRange` operations, and handles `Iterators` as well as `CompactionFilter`s that are invoked at flush time .
Three unit tests are added to `db_flush_test.cc` to test if MemPurge works correctly (and checks that the previously mentioned operations are fully supported thoroughly tested).
One noticeable design decision is the timing of the MemPurge operation in the memtable workflow: for this prototype, the mempurge happens when the memtable is switched (and usually made immutable). This is an inefficient process because it implies that the entirety of the MemPurge operation happens while holding the db_mutex. Future commits will make the MemPurge operation a background task (akin to the regular flush operation) and aim at drastically enhancing the performance of this operation. The MemPurge is also not fully "WAL-compatible" yet, but when the WAL is full, or when the regular MemPurge operation fails (or when the purged memtable still needs to be flushed), a regular flush operation takes place. Later commits will also correct these behaviors.
Pull Request resolved: https://github.com/facebook/rocksdb/pull/8454
Reviewed By: anand1976
Differential Revision: D29433971
Pulled By: bjlemaire
fbshipit-source-id: 6af48213554e35048a7e03816955100a80a26dc5
2021-07-02 12:22:03 +00:00
|
|
|
|
|
|
|
// Expected count here is 3: KEY3, KEY4, KEY5.
|
|
|
|
ASSERT_EQ(count, EXPECTED_COUNT_FORLOOP);
|
|
|
|
if (iter) {
|
|
|
|
delete iter;
|
|
|
|
}
|
|
|
|
}
|
|
|
|
|
|
|
|
// Check that there was at least one mempurge
|
|
|
|
const uint32_t EXPECTED_MIN_MEMPURGE_COUNT = 1;
|
Make mempurge a background process (equivalent to in-memory compaction). (#8505)
Summary:
In https://github.com/facebook/rocksdb/issues/8454, I introduced a new process baptized `MemPurge` (memtable garbage collection). This new PR is built upon this past mempurge prototype.
In this PR, I made the `mempurge` process a background task, which provides superior performance since the mempurge process no longer holds the db_mutex, and addresses severe restrictions from the past iteration (including a scenario where the past mempurge failed when a memtable was mempurged but was still referred to by an iterator/snapshot/...).
Now the mempurge process resembles an in-memory compaction process: the stack of immutable memtables is filtered out, and the useful payload is used to populate an output memtable. If the output memtable is filled to more than 60% capacity (an arbitrary heuristic), the mempurge process is aborted and a regular flush process takes place; otherwise the output memtable is kept in the immutable memtable stack. Note that adding this output memtable to the `imm()` memtable stack does not trigger another flush process, so the flush thread can go to sleep at the end of a successful mempurge.
MemPurge is activated by making the `experimental_allow_mempurge` flag `true`. When activated, the `MemPurge` process will always happen when the flush reason is `kWriteBufferFull`.
The 3 unit tests confirm that this process supports `Put`, `Get`, `Delete`, `DeleteRange` operators and is compatible with `Iterators` and `CompactionFilters`.
Pull Request resolved: https://github.com/facebook/rocksdb/pull/8505
Reviewed By: pdillinger
Differential Revision: D29619283
Pulled By: bjlemaire
fbshipit-source-id: 8a99bee76b63a8211bff1a00e0ae32360aaece95
2021-07-10 00:16:00 +00:00
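The verification pattern used by these tests can be condensed as follows; this is a sketch extracted from the test body below (it assumes the test-fixture context, the sync-point names are the ones the test already hooks, and the data-writing loop is elided):

// Condensed sketch of the check performed below: count successful mempurges
// and SST files created at flush time via sync-point callbacks.
std::atomic<uint32_t> mempurge_count{0};
std::atomic<uint32_t> sst_count{0};
ROCKSDB_NAMESPACE::SyncPoint::GetInstance()->SetCallBack(
    "DBImpl::FlushJob:MemPurgeSuccessful",
    [&](void* /*arg*/) { mempurge_count++; });
ROCKSDB_NAMESPACE::SyncPoint::GetInstance()->SetCallBack(
    "DBImpl::FlushJob:SSTFileCreated", [&](void* /*arg*/) { sst_count++; });
ROCKSDB_NAMESPACE::SyncPoint::GetInstance()->EnableProcessing();
// ... fill the write buffer so flushes are triggered with reason
// kWriteBufferFull (overwriting the same keys creates purgeable garbage) ...
EXPECT_GE(mempurge_count.exchange(0), 1);  // at least one mempurge happened
EXPECT_EQ(sst_count.exchange(0), 0);       // and no SST file was written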
|
|
|
// Check that no SST files were created during flush.
|
|
|
|
const uint32_t EXPECTED_SST_COUNT = 0;
|
|
|
|
|
2021-11-03 04:53:23 +00:00
|
|
|
EXPECT_GE(mempurge_count.exchange(0), EXPECTED_MIN_MEMPURGE_COUNT);
|
|
|
|
EXPECT_EQ(sst_count.exchange(0), EXPECTED_SST_COUNT);
|
Memtable "MemPurge" prototype (#8454)
Summary:
Implement an experimental feature called "MemPurge", which consists in purging "garbage" bytes out of a memtable and reuse the memtable struct instead of making it immutable and eventually flushing its content to storage.
The prototype is by default deactivated and is not intended for use. It is intended for correctness and validation testing. At the moment, the "MemPurge" feature can be switched on by using the `options.experimental_allow_mempurge` flag. For this early stage, when the allow_mempurge flag is set to `true`, all the flush operations will be rerouted to perform a MemPurge. This is a temporary design decision that will give us the time to explore meaningful heuristics to use MemPurge at the right time for relevant workloads . Moreover, the current MemPurge operation only supports `Puts`, `Deletes`, `DeleteRange` operations, and handles `Iterators` as well as `CompactionFilter`s that are invoked at flush time .
Three unit tests are added to `db_flush_test.cc` to test if MemPurge works correctly (and checks that the previously mentioned operations are fully supported thoroughly tested).
One noticeable design decision is the timing of the MemPurge operation in the memtable workflow: for this prototype, the mempurge happens when the memtable is switched (and usually made immutable). This is an inefficient process because it implies that the entirety of the MemPurge operation happens while holding the db_mutex. Future commits will make the MemPurge operation a background task (akin to the regular flush operation) and aim at drastically enhancing the performance of this operation. The MemPurge is also not fully "WAL-compatible" yet, but when the WAL is full, or when the regular MemPurge operation fails (or when the purged memtable still needs to be flushed), a regular flush operation takes place. Later commits will also correct these behaviors.
Pull Request resolved: https://github.com/facebook/rocksdb/pull/8454
Reviewed By: anand1976
Differential Revision: D29433971
Pulled By: bjlemaire
fbshipit-source-id: 6af48213554e35048a7e03816955100a80a26dc5
2021-07-02 12:22:03 +00:00
|
|
|
|
|
|
|
// Additional test for the iterator + MemPurge interaction.
|
|
|
|
ASSERT_OK(Put(KEY2, p_v2));
|
|
|
|
iter = db_->NewIterator(ropt);
|
|
|
|
iter->SeekToFirst();
|
|
|
|
ASSERT_OK(Put(KEY4, p_v4));
|
|
|
|
count = 0;
|
|
|
|
for (; iter->Valid(); iter->Next()) {
|
|
|
|
ASSERT_OK(iter->status());
|
|
|
|
key = (iter->key()).ToString(false);
|
|
|
|
value = (iter->value()).ToString(false);
|
2024-03-04 18:08:32 +00:00
|
|
|
if (key.compare(KEY2) == 0) {
|
Memtable "MemPurge" prototype (#8454)
Summary:
Implement an experimental feature called "MemPurge", which consists in purging "garbage" bytes out of a memtable and reuse the memtable struct instead of making it immutable and eventually flushing its content to storage.
The prototype is by default deactivated and is not intended for use. It is intended for correctness and validation testing. At the moment, the "MemPurge" feature can be switched on by using the `options.experimental_allow_mempurge` flag. For this early stage, when the allow_mempurge flag is set to `true`, all the flush operations will be rerouted to perform a MemPurge. This is a temporary design decision that will give us the time to explore meaningful heuristics to use MemPurge at the right time for relevant workloads . Moreover, the current MemPurge operation only supports `Puts`, `Deletes`, `DeleteRange` operations, and handles `Iterators` as well as `CompactionFilter`s that are invoked at flush time .
Three unit tests are added to `db_flush_test.cc` to test if MemPurge works correctly (and checks that the previously mentioned operations are fully supported thoroughly tested).
One noticeable design decision is the timing of the MemPurge operation in the memtable workflow: for this prototype, the mempurge happens when the memtable is switched (and usually made immutable). This is an inefficient process because it implies that the entirety of the MemPurge operation happens while holding the db_mutex. Future commits will make the MemPurge operation a background task (akin to the regular flush operation) and aim at drastically enhancing the performance of this operation. The MemPurge is also not fully "WAL-compatible" yet, but when the WAL is full, or when the regular MemPurge operation fails (or when the purged memtable still needs to be flushed), a regular flush operation takes place. Later commits will also correct these behaviors.
Pull Request resolved: https://github.com/facebook/rocksdb/pull/8454
Reviewed By: anand1976
Differential Revision: D29433971
Pulled By: bjlemaire
fbshipit-source-id: 6af48213554e35048a7e03816955100a80a26dc5
2021-07-02 12:22:03 +00:00
|
|
|
ASSERT_EQ(value, p_v2);
|
2024-03-04 18:08:32 +00:00
|
|
|
} else if (key.compare(KEY3) == 0) {
|
Memtable "MemPurge" prototype (#8454)
Summary:
Implement an experimental feature called "MemPurge", which consists in purging "garbage" bytes out of a memtable and reuse the memtable struct instead of making it immutable and eventually flushing its content to storage.
The prototype is by default deactivated and is not intended for use. It is intended for correctness and validation testing. At the moment, the "MemPurge" feature can be switched on by using the `options.experimental_allow_mempurge` flag. For this early stage, when the allow_mempurge flag is set to `true`, all the flush operations will be rerouted to perform a MemPurge. This is a temporary design decision that will give us the time to explore meaningful heuristics to use MemPurge at the right time for relevant workloads . Moreover, the current MemPurge operation only supports `Puts`, `Deletes`, `DeleteRange` operations, and handles `Iterators` as well as `CompactionFilter`s that are invoked at flush time .
Three unit tests are added to `db_flush_test.cc` to test if MemPurge works correctly (and checks that the previously mentioned operations are fully supported thoroughly tested).
One noticeable design decision is the timing of the MemPurge operation in the memtable workflow: for this prototype, the mempurge happens when the memtable is switched (and usually made immutable). This is an inefficient process because it implies that the entirety of the MemPurge operation happens while holding the db_mutex. Future commits will make the MemPurge operation a background task (akin to the regular flush operation) and aim at drastically enhancing the performance of this operation. The MemPurge is also not fully "WAL-compatible" yet, but when the WAL is full, or when the regular MemPurge operation fails (or when the purged memtable still needs to be flushed), a regular flush operation takes place. Later commits will also correct these behaviors.
Pull Request resolved: https://github.com/facebook/rocksdb/pull/8454
Reviewed By: anand1976
Differential Revision: D29433971
Pulled By: bjlemaire
fbshipit-source-id: 6af48213554e35048a7e03816955100a80a26dc5
2021-07-02 12:22:03 +00:00
|
|
|
ASSERT_EQ(value, p_v3b);
|
2024-03-04 18:08:32 +00:00
|
|
|
} else if (key.compare(KEY4) == 0) {
|
Memtable "MemPurge" prototype (#8454)
Summary:
Implement an experimental feature called "MemPurge", which consists in purging "garbage" bytes out of a memtable and reuse the memtable struct instead of making it immutable and eventually flushing its content to storage.
The prototype is by default deactivated and is not intended for use. It is intended for correctness and validation testing. At the moment, the "MemPurge" feature can be switched on by using the `options.experimental_allow_mempurge` flag. For this early stage, when the allow_mempurge flag is set to `true`, all the flush operations will be rerouted to perform a MemPurge. This is a temporary design decision that will give us the time to explore meaningful heuristics to use MemPurge at the right time for relevant workloads . Moreover, the current MemPurge operation only supports `Puts`, `Deletes`, `DeleteRange` operations, and handles `Iterators` as well as `CompactionFilter`s that are invoked at flush time .
Three unit tests are added to `db_flush_test.cc` to test if MemPurge works correctly (and checks that the previously mentioned operations are fully supported thoroughly tested).
One noticeable design decision is the timing of the MemPurge operation in the memtable workflow: for this prototype, the mempurge happens when the memtable is switched (and usually made immutable). This is an inefficient process because it implies that the entirety of the MemPurge operation happens while holding the db_mutex. Future commits will make the MemPurge operation a background task (akin to the regular flush operation) and aim at drastically enhancing the performance of this operation. The MemPurge is also not fully "WAL-compatible" yet, but when the WAL is full, or when the regular MemPurge operation fails (or when the purged memtable still needs to be flushed), a regular flush operation takes place. Later commits will also correct these behaviors.
Pull Request resolved: https://github.com/facebook/rocksdb/pull/8454
Reviewed By: anand1976
Differential Revision: D29433971
Pulled By: bjlemaire
fbshipit-source-id: 6af48213554e35048a7e03816955100a80a26dc5
2021-07-02 12:22:03 +00:00
|
|
|
ASSERT_EQ(value, p_v4);
|
2024-03-04 18:08:32 +00:00
|
|
|
} else if (key.compare(KEY5) == 0) {
|
Memtable "MemPurge" prototype (#8454)
Summary:
Implement an experimental feature called "MemPurge", which consists in purging "garbage" bytes out of a memtable and reuse the memtable struct instead of making it immutable and eventually flushing its content to storage.
The prototype is by default deactivated and is not intended for use. It is intended for correctness and validation testing. At the moment, the "MemPurge" feature can be switched on by using the `options.experimental_allow_mempurge` flag. For this early stage, when the allow_mempurge flag is set to `true`, all the flush operations will be rerouted to perform a MemPurge. This is a temporary design decision that will give us the time to explore meaningful heuristics to use MemPurge at the right time for relevant workloads . Moreover, the current MemPurge operation only supports `Puts`, `Deletes`, `DeleteRange` operations, and handles `Iterators` as well as `CompactionFilter`s that are invoked at flush time .
Three unit tests are added to `db_flush_test.cc` to test if MemPurge works correctly (and checks that the previously mentioned operations are fully supported thoroughly tested).
One noticeable design decision is the timing of the MemPurge operation in the memtable workflow: for this prototype, the mempurge happens when the memtable is switched (and usually made immutable). This is an inefficient process because it implies that the entirety of the MemPurge operation happens while holding the db_mutex. Future commits will make the MemPurge operation a background task (akin to the regular flush operation) and aim at drastically enhancing the performance of this operation. The MemPurge is also not fully "WAL-compatible" yet, but when the WAL is full, or when the regular MemPurge operation fails (or when the purged memtable still needs to be flushed), a regular flush operation takes place. Later commits will also correct these behaviors.
Pull Request resolved: https://github.com/facebook/rocksdb/pull/8454
Reviewed By: anand1976
Differential Revision: D29433971
Pulled By: bjlemaire
fbshipit-source-id: 6af48213554e35048a7e03816955100a80a26dc5
2021-07-02 12:22:03 +00:00
|
|
|
ASSERT_EQ(value, p_v5);
|
2024-03-04 18:08:32 +00:00
|
|
|
} else {
|
Memtable "MemPurge" prototype (#8454)
Summary:
Implement an experimental feature called "MemPurge", which consists in purging "garbage" bytes out of a memtable and reuse the memtable struct instead of making it immutable and eventually flushing its content to storage.
The prototype is by default deactivated and is not intended for use. It is intended for correctness and validation testing. At the moment, the "MemPurge" feature can be switched on by using the `options.experimental_allow_mempurge` flag. For this early stage, when the allow_mempurge flag is set to `true`, all the flush operations will be rerouted to perform a MemPurge. This is a temporary design decision that will give us the time to explore meaningful heuristics to use MemPurge at the right time for relevant workloads . Moreover, the current MemPurge operation only supports `Puts`, `Deletes`, `DeleteRange` operations, and handles `Iterators` as well as `CompactionFilter`s that are invoked at flush time .
Three unit tests are added to `db_flush_test.cc` to test if MemPurge works correctly (and checks that the previously mentioned operations are fully supported thoroughly tested).
One noticeable design decision is the timing of the MemPurge operation in the memtable workflow: for this prototype, the mempurge happens when the memtable is switched (and usually made immutable). This is an inefficient process because it implies that the entirety of the MemPurge operation happens while holding the db_mutex. Future commits will make the MemPurge operation a background task (akin to the regular flush operation) and aim at drastically enhancing the performance of this operation. The MemPurge is also not fully "WAL-compatible" yet, but when the WAL is full, or when the regular MemPurge operation fails (or when the purged memtable still needs to be flushed), a regular flush operation takes place. Later commits will also correct these behaviors.
Pull Request resolved: https://github.com/facebook/rocksdb/pull/8454
Reviewed By: anand1976
Differential Revision: D29433971
Pulled By: bjlemaire
fbshipit-source-id: 6af48213554e35048a7e03816955100a80a26dc5
2021-07-02 12:22:03 +00:00
|
|
|
ASSERT_EQ(value, NOT_FOUND);
|
2024-03-04 18:08:32 +00:00
|
|
|
}
|
Memtable "MemPurge" prototype (#8454)
Summary:
Implement an experimental feature called "MemPurge", which consists in purging "garbage" bytes out of a memtable and reuse the memtable struct instead of making it immutable and eventually flushing its content to storage.
The prototype is by default deactivated and is not intended for use. It is intended for correctness and validation testing. At the moment, the "MemPurge" feature can be switched on by using the `options.experimental_allow_mempurge` flag. For this early stage, when the allow_mempurge flag is set to `true`, all the flush operations will be rerouted to perform a MemPurge. This is a temporary design decision that will give us the time to explore meaningful heuristics to use MemPurge at the right time for relevant workloads . Moreover, the current MemPurge operation only supports `Puts`, `Deletes`, `DeleteRange` operations, and handles `Iterators` as well as `CompactionFilter`s that are invoked at flush time .
Three unit tests are added to `db_flush_test.cc` to test if MemPurge works correctly (and checks that the previously mentioned operations are fully supported thoroughly tested).
One noticeable design decision is the timing of the MemPurge operation in the memtable workflow: for this prototype, the mempurge happens when the memtable is switched (and usually made immutable). This is an inefficient process because it implies that the entirety of the MemPurge operation happens while holding the db_mutex. Future commits will make the MemPurge operation a background task (akin to the regular flush operation) and aim at drastically enhancing the performance of this operation. The MemPurge is also not fully "WAL-compatible" yet, but when the WAL is full, or when the regular MemPurge operation fails (or when the purged memtable still needs to be flushed), a regular flush operation takes place. Later commits will also correct these behaviors.
Pull Request resolved: https://github.com/facebook/rocksdb/pull/8454
Reviewed By: anand1976
Differential Revision: D29433971
Pulled By: bjlemaire
fbshipit-source-id: 6af48213554e35048a7e03816955100a80a26dc5
2021-07-02 12:22:03 +00:00
|
|
|
count++;
|
|
|
|
}
|
|
|
|
|
Memtable "MemPurge" prototype (#8454)
Summary:
Implement an experimental feature called "MemPurge", which consists in purging "garbage" bytes out of a memtable and reuse the memtable struct instead of making it immutable and eventually flushing its content to storage.
The prototype is by default deactivated and is not intended for use. It is intended for correctness and validation testing. At the moment, the "MemPurge" feature can be switched on by using the `options.experimental_allow_mempurge` flag. For this early stage, when the allow_mempurge flag is set to `true`, all the flush operations will be rerouted to perform a MemPurge. This is a temporary design decision that will give us the time to explore meaningful heuristics to use MemPurge at the right time for relevant workloads . Moreover, the current MemPurge operation only supports `Puts`, `Deletes`, `DeleteRange` operations, and handles `Iterators` as well as `CompactionFilter`s that are invoked at flush time .
Three unit tests are added to `db_flush_test.cc` to test if MemPurge works correctly (and checks that the previously mentioned operations are fully supported thoroughly tested).
One noticeable design decision is the timing of the MemPurge operation in the memtable workflow: for this prototype, the mempurge happens when the memtable is switched (and usually made immutable). This is an inefficient process because it implies that the entirety of the MemPurge operation happens while holding the db_mutex. Future commits will make the MemPurge operation a background task (akin to the regular flush operation) and aim at drastically enhancing the performance of this operation. The MemPurge is also not fully "WAL-compatible" yet, but when the WAL is full, or when the regular MemPurge operation fails (or when the purged memtable still needs to be flushed), a regular flush operation takes place. Later commits will also correct these behaviors.
Pull Request resolved: https://github.com/facebook/rocksdb/pull/8454
Reviewed By: anand1976
Differential Revision: D29433971
Pulled By: bjlemaire
fbshipit-source-id: 6af48213554e35048a7e03816955100a80a26dc5
2021-07-02 12:22:03 +00:00
|
|
|
// Expected count here is 4: KEY2, KEY3, KEY4, KEY5.
|
|
|
|
ASSERT_EQ(count, EXPECTED_COUNT_END);
|
2024-03-04 18:08:32 +00:00
|
|
|
if (iter) {
|
|
|
|
delete iter;
|
|
|
|
}
|
Memtable "MemPurge" prototype (#8454)
Summary:
Implement an experimental feature called "MemPurge", which consists in purging "garbage" bytes out of a memtable and reuse the memtable struct instead of making it immutable and eventually flushing its content to storage.
The prototype is by default deactivated and is not intended for use. It is intended for correctness and validation testing. At the moment, the "MemPurge" feature can be switched on by using the `options.experimental_allow_mempurge` flag. For this early stage, when the allow_mempurge flag is set to `true`, all the flush operations will be rerouted to perform a MemPurge. This is a temporary design decision that will give us the time to explore meaningful heuristics to use MemPurge at the right time for relevant workloads . Moreover, the current MemPurge operation only supports `Puts`, `Deletes`, `DeleteRange` operations, and handles `Iterators` as well as `CompactionFilter`s that are invoked at flush time .
Three unit tests are added to `db_flush_test.cc` to test if MemPurge works correctly (and checks that the previously mentioned operations are fully supported thoroughly tested).
One noticeable design decision is the timing of the MemPurge operation in the memtable workflow: for this prototype, the mempurge happens when the memtable is switched (and usually made immutable). This is an inefficient process because it implies that the entirety of the MemPurge operation happens while holding the db_mutex. Future commits will make the MemPurge operation a background task (akin to the regular flush operation) and aim at drastically enhancing the performance of this operation. The MemPurge is also not fully "WAL-compatible" yet, but when the WAL is full, or when the regular MemPurge operation fails (or when the purged memtable still needs to be flushed), a regular flush operation takes place. Later commits will also correct these behaviors.
Pull Request resolved: https://github.com/facebook/rocksdb/pull/8454
Reviewed By: anand1976
Differential Revision: D29433971
Pulled By: bjlemaire
fbshipit-source-id: 6af48213554e35048a7e03816955100a80a26dc5
2021-07-02 12:22:03 +00:00
|
|
|
|
|
|
|
Close();
|
|
|
|
}
|
|
|
|
|
|
|
|
// Create a Compaction Filter that will be invoked
|
|
|
|
// at flush time and will update the value of a KV pair
|
|
|
|
// if the key string is "lower" than the filter_key_ string.
|
|
|
|
class ConditionalUpdateFilter : public CompactionFilter {
|
|
|
|
public:
|
|
|
|
explicit ConditionalUpdateFilter(const std::string* filtered_key)
|
|
|
|
: filtered_key_(filtered_key) {}
|
|
|
|
bool Filter(int /*level*/, const Slice& key, const Slice& /*value*/,
|
|
|
|
std::string* new_value, bool* value_changed) const override {
|
|
|
|
// If key < filtered_key_, update the value of the KV-pair.
|
|
|
|
if (key.compare(*filtered_key_) < 0) {
|
|
|
|
assert(new_value != nullptr);
|
|
|
|
*new_value = NEW_VALUE;
|
|
|
|
*value_changed = true;
|
|
|
|
}
|
|
|
|
return false /*do not remove this KV-pair*/;
|
|
|
|
}
|
|
|
|
|
|
|
|
const char* Name() const override { return "ConditionalUpdateFilter"; }
|
|
|
|
|
|
|
|
private:
|
|
|
|
const std::string* filtered_key_;
|
|
|
|
};
|
|
|
|
|
|
|
|
class ConditionalUpdateFilterFactory : public CompactionFilterFactory {
|
|
|
|
public:
|
|
|
|
explicit ConditionalUpdateFilterFactory(const Slice& filtered_key)
|
|
|
|
: filtered_key_(filtered_key.ToString()) {}
|
|
|
|
|
|
|
|
std::unique_ptr<CompactionFilter> CreateCompactionFilter(
|
|
|
|
const CompactionFilter::Context& /*context*/) override {
|
|
|
|
return std::unique_ptr<CompactionFilter>(
|
|
|
|
new ConditionalUpdateFilter(&filtered_key_));
|
|
|
|
}
|
|
|
|
|
|
|
|
const char* Name() const override { return "ConditionalUpdateFilterFactory"; }
|
|
|
|
|
|
|
|
bool ShouldFilterTableFileCreation(
|
|
|
|
TableFileCreationReason reason) const override {
|
|
|
|
// This compaction filter will be invoked
|
|
|
|
// at flush time (and therefore at MemPurge time).
|
|
|
|
return (reason == TableFileCreationReason::kFlush);
|
|
|
|
}
|
|
|
|
|
|
|
|
private:
|
|
|
|
std::string filtered_key_;
|
|
|
|
};
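A brief usage note, as a fragment mirroring the wiring in the test below (it assumes the `options` object and `KEY4` string declared there): since `ShouldFilterTableFileCreation` returns true only for `kFlush`, the filter is invoked at flush time and therefore at MemPurge time, as the comment in the class notes. The factory is installed like this:

// Install the factory so values for keys "lower" than KEY4 are rewritten
// to NEW_VALUE at flush (and therefore MemPurge) time.
options.compaction_filter_factory =
    std::make_shared<ConditionalUpdateFilterFactory>(KEY4);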
|
|
|
|
|
|
|
|
TEST_F(DBFlushTest, MemPurgeAndCompactionFilter) {
|
|
|
|
Options options = CurrentOptions();
|
|
|
|
|
|
|
|
std::string KEY1 = "ThisIsKey1";
|
|
|
|
std::string KEY2 = "ThisIsKey2";
|
|
|
|
std::string KEY3 = "ThisIsKey3";
|
|
|
|
std::string KEY4 = "ThisIsKey4";
|
|
|
|
std::string KEY5 = "ThisIsKey5";
|
|
|
|
std::string KEY6 = "ThisIsKey6";
|
|
|
|
std::string KEY7 = "ThisIsKey7";
|
|
|
|
std::string KEY8 = "ThisIsKey8";
|
|
|
|
std::string KEY9 = "ThisIsKey9";
|
Memtable "MemPurge" prototype (#8454)
Summary:
Implement an experimental feature called "MemPurge", which consists in purging "garbage" bytes out of a memtable and reuse the memtable struct instead of making it immutable and eventually flushing its content to storage.
The prototype is by default deactivated and is not intended for use. It is intended for correctness and validation testing. At the moment, the "MemPurge" feature can be switched on by using the `options.experimental_allow_mempurge` flag. For this early stage, when the allow_mempurge flag is set to `true`, all the flush operations will be rerouted to perform a MemPurge. This is a temporary design decision that will give us the time to explore meaningful heuristics to use MemPurge at the right time for relevant workloads . Moreover, the current MemPurge operation only supports `Puts`, `Deletes`, `DeleteRange` operations, and handles `Iterators` as well as `CompactionFilter`s that are invoked at flush time .
Three unit tests are added to `db_flush_test.cc` to test if MemPurge works correctly (and checks that the previously mentioned operations are fully supported thoroughly tested).
One noticeable design decision is the timing of the MemPurge operation in the memtable workflow: for this prototype, the mempurge happens when the memtable is switched (and usually made immutable). This is an inefficient process because it implies that the entirety of the MemPurge operation happens while holding the db_mutex. Future commits will make the MemPurge operation a background task (akin to the regular flush operation) and aim at drastically enhancing the performance of this operation. The MemPurge is also not fully "WAL-compatible" yet, but when the WAL is full, or when the regular MemPurge operation fails (or when the purged memtable still needs to be flushed), a regular flush operation takes place. Later commits will also correct these behaviors.
Pull Request resolved: https://github.com/facebook/rocksdb/pull/8454
Reviewed By: anand1976
Differential Revision: D29433971
Pulled By: bjlemaire
fbshipit-source-id: 6af48213554e35048a7e03816955100a80a26dc5
2021-07-02 12:22:03 +00:00
|
|
|
const std::string NOT_FOUND = "NOT_FOUND";
|
|
|
|
|
|
|
|
options.statistics = CreateDBStatistics();
|
|
|
|
options.statistics->set_stats_level(StatsLevel::kAll);
|
|
|
|
options.create_if_missing = true;
|
|
|
|
options.compression = kNoCompression;
|
|
|
|
options.inplace_update_support = false;
|
|
|
|
options.allow_concurrent_memtable_write = true;
|
2021-08-19 00:39:00 +00:00
|
|
|
TestFlushListener* listener = new TestFlushListener(options.env, this);
|
|
|
|
options.listeners.emplace_back(listener);
|
Memtable "MemPurge" prototype (#8454)
Summary:
Implement an experimental feature called "MemPurge", which consists in purging "garbage" bytes out of a memtable and reuse the memtable struct instead of making it immutable and eventually flushing its content to storage.
The prototype is by default deactivated and is not intended for use. It is intended for correctness and validation testing. At the moment, the "MemPurge" feature can be switched on by using the `options.experimental_allow_mempurge` flag. For this early stage, when the allow_mempurge flag is set to `true`, all the flush operations will be rerouted to perform a MemPurge. This is a temporary design decision that will give us the time to explore meaningful heuristics to use MemPurge at the right time for relevant workloads . Moreover, the current MemPurge operation only supports `Puts`, `Deletes`, `DeleteRange` operations, and handles `Iterators` as well as `CompactionFilter`s that are invoked at flush time .
Three unit tests are added to `db_flush_test.cc` to test if MemPurge works correctly (and checks that the previously mentioned operations are fully supported thoroughly tested).
One noticeable design decision is the timing of the MemPurge operation in the memtable workflow: for this prototype, the mempurge happens when the memtable is switched (and usually made immutable). This is an inefficient process because it implies that the entirety of the MemPurge operation happens while holding the db_mutex. Future commits will make the MemPurge operation a background task (akin to the regular flush operation) and aim at drastically enhancing the performance of this operation. The MemPurge is also not fully "WAL-compatible" yet, but when the WAL is full, or when the regular MemPurge operation fails (or when the purged memtable still needs to be flushed), a regular flush operation takes place. Later commits will also correct these behaviors.
Pull Request resolved: https://github.com/facebook/rocksdb/pull/8454
Reviewed By: anand1976
Differential Revision: D29433971
Pulled By: bjlemaire
fbshipit-source-id: 6af48213554e35048a7e03816955100a80a26dc5
2021-07-02 12:22:03 +00:00
|
|
|
// Create a ConditionalUpdate compaction filter
|
|
|
|
// that will update all the values of the KV pairs
|
|
|
|
// where the keys are "lower" than KEY4.
|
|
|
|
options.compaction_filter_factory =
|
|
|
|
std::make_shared<ConditionalUpdateFilterFactory>(KEY4);
|
|
|
|
|
|
|
|
// Enforce size of a single MemTable to 1MB (1MB = 1048576 bytes).
|
|
|
|
options.write_buffer_size = 1 << 20;
|
Memtable "MemPurge" prototype (#8454)
Summary:
Implement an experimental feature called "MemPurge", which consists in purging "garbage" bytes out of a memtable and reuse the memtable struct instead of making it immutable and eventually flushing its content to storage.
The prototype is by default deactivated and is not intended for use. It is intended for correctness and validation testing. At the moment, the "MemPurge" feature can be switched on by using the `options.experimental_allow_mempurge` flag. For this early stage, when the allow_mempurge flag is set to `true`, all the flush operations will be rerouted to perform a MemPurge. This is a temporary design decision that will give us the time to explore meaningful heuristics to use MemPurge at the right time for relevant workloads . Moreover, the current MemPurge operation only supports `Puts`, `Deletes`, `DeleteRange` operations, and handles `Iterators` as well as `CompactionFilter`s that are invoked at flush time .
Three unit tests are added to `db_flush_test.cc` to test if MemPurge works correctly (and checks that the previously mentioned operations are fully supported thoroughly tested).
One noticeable design decision is the timing of the MemPurge operation in the memtable workflow: for this prototype, the mempurge happens when the memtable is switched (and usually made immutable). This is an inefficient process because it implies that the entirety of the MemPurge operation happens while holding the db_mutex. Future commits will make the MemPurge operation a background task (akin to the regular flush operation) and aim at drastically enhancing the performance of this operation. The MemPurge is also not fully "WAL-compatible" yet, but when the WAL is full, or when the regular MemPurge operation fails (or when the purged memtable still needs to be flushed), a regular flush operation takes place. Later commits will also correct these behaviors.
Pull Request resolved: https://github.com/facebook/rocksdb/pull/8454
Reviewed By: anand1976
Differential Revision: D29433971
Pulled By: bjlemaire
fbshipit-source-id: 6af48213554e35048a7e03816955100a80a26dc5
2021-07-02 12:22:03 +00:00
|
|
|
// Activate the MemPurge prototype.
|
2022-06-23 16:42:18 +00:00
|
|
|
options.experimental_mempurge_threshold = 26.55;
|
2021-08-19 00:39:00 +00:00
|
|
|
|
Memtable "MemPurge" prototype (#8454)
Summary:
Implement an experimental feature called "MemPurge", which consists in purging "garbage" bytes out of a memtable and reuse the memtable struct instead of making it immutable and eventually flushing its content to storage.
The prototype is by default deactivated and is not intended for use. It is intended for correctness and validation testing. At the moment, the "MemPurge" feature can be switched on by using the `options.experimental_allow_mempurge` flag. For this early stage, when the allow_mempurge flag is set to `true`, all the flush operations will be rerouted to perform a MemPurge. This is a temporary design decision that will give us the time to explore meaningful heuristics to use MemPurge at the right time for relevant workloads . Moreover, the current MemPurge operation only supports `Puts`, `Deletes`, `DeleteRange` operations, and handles `Iterators` as well as `CompactionFilter`s that are invoked at flush time .
Three unit tests are added to `db_flush_test.cc` to test if MemPurge works correctly (and checks that the previously mentioned operations are fully supported thoroughly tested).
One noticeable design decision is the timing of the MemPurge operation in the memtable workflow: for this prototype, the mempurge happens when the memtable is switched (and usually made immutable). This is an inefficient process because it implies that the entirety of the MemPurge operation happens while holding the db_mutex. Future commits will make the MemPurge operation a background task (akin to the regular flush operation) and aim at drastically enhancing the performance of this operation. The MemPurge is also not fully "WAL-compatible" yet, but when the WAL is full, or when the regular MemPurge operation fails (or when the purged memtable still needs to be flushed), a regular flush operation takes place. Later commits will also correct these behaviors.
Pull Request resolved: https://github.com/facebook/rocksdb/pull/8454
Reviewed By: anand1976
Differential Revision: D29433971
Pulled By: bjlemaire
fbshipit-source-id: 6af48213554e35048a7e03816955100a80a26dc5
2021-07-02 12:22:03 +00:00
|
|
|
ASSERT_OK(TryReopen(options));
|
|
|
|
|
2021-11-03 04:53:23 +00:00
|
|
|
std::atomic<uint32_t> mempurge_count{0};
|
|
|
|
std::atomic<uint32_t> sst_count{0};
|
|
|
|
ROCKSDB_NAMESPACE::SyncPoint::GetInstance()->SetCallBack(
|
|
|
|
"DBImpl::FlushJob:MemPurgeSuccessful",
|
|
|
|
[&](void* /*arg*/) { mempurge_count++; });
|
|
|
|
ROCKSDB_NAMESPACE::SyncPoint::GetInstance()->SetCallBack(
|
|
|
|
"DBImpl::FlushJob:SSTFileCreated", [&](void* /*arg*/) { sst_count++; });
|
|
|
|
ROCKSDB_NAMESPACE::SyncPoint::GetInstance()->EnableProcessing();
|
|
|
|
|
Memtable "MemPurge" prototype (#8454)
Summary:
Implement an experimental feature called "MemPurge", which consists of purging "garbage" bytes out of a memtable and reusing the memtable struct instead of making it immutable and eventually flushing its content to storage.
The prototype is deactivated by default and is not intended for production use; it is intended for correctness and validation testing. At the moment, the "MemPurge" feature can be switched on by using the `options.experimental_allow_mempurge` flag. For this early stage, when the allow_mempurge flag is set to `true`, all flush operations will be rerouted to perform a MemPurge. This is a temporary design decision that will give us time to explore meaningful heuristics for using MemPurge at the right time for relevant workloads. Moreover, the current MemPurge operation only supports `Put`, `Delete`, and `DeleteRange` operations, and handles `Iterator`s as well as `CompactionFilter`s that are invoked at flush time.
Three unit tests are added to `db_flush_test.cc` to test whether MemPurge works correctly (and to check that the previously mentioned operations are fully supported and thoroughly tested).
One noticeable design decision is the timing of the MemPurge operation in the memtable workflow: for this prototype, the mempurge happens when the memtable is switched (and usually made immutable). This is an inefficient process because it implies that the entirety of the MemPurge operation happens while holding the db_mutex. Future commits will make the MemPurge operation a background task (akin to the regular flush operation) and aim at drastically enhancing the performance of this operation. The MemPurge is also not fully "WAL-compatible" yet, but when the WAL is full, or when the regular MemPurge operation fails (or when the purged memtable still needs to be flushed), a regular flush operation takes place. Later commits will also correct these behaviors.
Pull Request resolved: https://github.com/facebook/rocksdb/pull/8454
Reviewed By: anand1976
Differential Revision: D29433971
Pulled By: bjlemaire
fbshipit-source-id: 6af48213554e35048a7e03816955100a80a26dc5
2021-07-02 12:22:03 +00:00
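A one-line usage sketch, assuming the `options.experimental_allow_mempurge` flag named in the summary above (later revisions of this file use the `experimental_mempurge_threshold` option that appears further down instead):

  Options options = CurrentOptions();
  // Reroute flush operations to the MemPurge prototype (flag taken from the
  // summary above; assumed to exist at this revision of the code).
  options.experimental_allow_mempurge = true;
  ASSERT_OK(TryReopen(options));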
|
|
|
Random rnd(53);
|
Make mempurge a background process (equivalent to in-memory compaction). (#8505)
Summary:
In https://github.com/facebook/rocksdb/issues/8454, I introduced a new process baptized `MemPurge` (memtable garbage collection). This new PR is built upon this past mempurge prototype.
In this PR, I made the `mempurge` process a background task, which provides superior performance since the mempurge process no longer clings to the db_mutex, and addresses severe restrictions from the past iteration (including a scenario where the past mempurge was failing when a memtable was mempurged but was still referred to by an iterator/snapshot/...).
Now the mempurge process resembles an in-memory compaction process: the stack of immutable memtables is filtered out, and the useful payload is used to populate an output memtable. If the output memtable is filled to more than 60% capacity (arbitrary heuristic), the mempurge process is aborted and a regular flush process takes place; otherwise, the output memtable is kept in the immutable memtable stack. Note that adding this output memtable to the `imm()` memtable stack does not trigger another flush process, so the flush thread can go to sleep at the end of a successful mempurge.
MemPurge is activated by making the `experimental_allow_mempurge` flag `true`. When activated, the `MemPurge` process will always happen when the flush reason is `kWriteBufferFull`.
The 3 unit tests confirm that this process supports `Put`, `Get`, `Delete`, `DeleteRange` operators and is compatible with `Iterators` and `CompactionFilters`.
Pull Request resolved: https://github.com/facebook/rocksdb/pull/8505
Reviewed By: pdillinger
Differential Revision: D29619283
Pulled By: bjlemaire
fbshipit-source-id: 8a99bee76b63a8211bff1a00e0ae32360aaece95
2021-07-10 00:16:00 +00:00
|
|
|
const size_t NUM_REPEAT = 1000;
|
|
|
|
const size_t RAND_VALUES_LENGTH = 10240;
|
|
|
|
std::string p_v1, p_v2, p_v3, p_v4, p_v5, p_v6, p_v7, p_v8, p_v9;
|
|
|
|
|
|
|
|
p_v1 = rnd.RandomString(RAND_VALUES_LENGTH);
|
|
|
|
p_v2 = rnd.RandomString(RAND_VALUES_LENGTH);
|
|
|
|
p_v3 = rnd.RandomString(RAND_VALUES_LENGTH);
|
|
|
|
p_v4 = rnd.RandomString(RAND_VALUES_LENGTH);
|
|
|
|
p_v5 = rnd.RandomString(RAND_VALUES_LENGTH);
|
|
|
|
ASSERT_OK(Put(KEY1, p_v1));
|
|
|
|
ASSERT_OK(Put(KEY2, p_v2));
|
|
|
|
ASSERT_OK(Put(KEY3, p_v3));
|
|
|
|
ASSERT_OK(Put(KEY4, p_v4));
|
|
|
|
ASSERT_OK(Put(KEY5, p_v5));
|
|
|
|
ASSERT_OK(Delete(KEY1));
|
Memtable "MemPurge" prototype (#8454)
Summary:
Implement an experimental feature called "MemPurge", which consists of purging "garbage" bytes out of a memtable and reusing the memtable struct instead of making it immutable and eventually flushing its content to storage.
The prototype is deactivated by default and is not intended for production use; it is intended for correctness and validation testing. At the moment, the "MemPurge" feature can be switched on by using the `options.experimental_allow_mempurge` flag. For this early stage, when the allow_mempurge flag is set to `true`, all flush operations will be rerouted to perform a MemPurge. This is a temporary design decision that will give us time to explore meaningful heuristics for using MemPurge at the right time for relevant workloads. Moreover, the current MemPurge operation only supports `Put`, `Delete`, and `DeleteRange` operations, and handles `Iterator`s as well as `CompactionFilter`s that are invoked at flush time.
Three unit tests are added to `db_flush_test.cc` to test whether MemPurge works correctly (and to check that the previously mentioned operations are fully supported and thoroughly tested).
One noticeable design decision is the timing of the MemPurge operation in the memtable workflow: for this prototype, the mempurge happens when the memtable is switched (and usually made immutable). This is an inefficient process because it implies that the entirety of the MemPurge operation happens while holding the db_mutex. Future commits will make the MemPurge operation a background task (akin to the regular flush operation) and aim at drastically enhancing the performance of this operation. The MemPurge is also not fully "WAL-compatible" yet, but when the WAL is full, or when the regular MemPurge operation fails (or when the purged memtable still needs to be flushed), a regular flush operation takes place. Later commits will also correct these behaviors.
Pull Request resolved: https://github.com/facebook/rocksdb/pull/8454
Reviewed By: anand1976
Differential Revision: D29433971
Pulled By: bjlemaire
fbshipit-source-id: 6af48213554e35048a7e03816955100a80a26dc5
2021-07-02 12:22:03 +00:00
|
|
|
|
|
|
|
// Insertion of K-V pairs, multiple times.
|
|
|
|
for (size_t i = 0; i < NUM_REPEAT; i++) {
|
Make mempurge a background process (equivalent to in-memory compaction). (#8505)
Summary:
In https://github.com/facebook/rocksdb/issues/8454, I introduced a new process baptized `MemPurge` (memtable garbage collection). This new PR is built upon this past mempurge prototype.
In this PR, I made the `mempurge` process a background task, which provides superior performance since the mempurge process no longer clings to the db_mutex, and addresses severe restrictions from the past iteration (including a scenario where the past mempurge was failing when a memtable was mempurged but was still referred to by an iterator/snapshot/...).
Now the mempurge process resembles an in-memory compaction process: the stack of immutable memtables is filtered out, and the useful payload is used to populate an output memtable. If the output memtable is filled to more than 60% capacity (arbitrary heuristic), the mempurge process is aborted and a regular flush process takes place; otherwise, the output memtable is kept in the immutable memtable stack. Note that adding this output memtable to the `imm()` memtable stack does not trigger another flush process, so the flush thread can go to sleep at the end of a successful mempurge.
MemPurge is activated by making the `experimental_allow_mempurge` flag `true`. When activated, the `MemPurge` process will always happen when the flush reason is `kWriteBufferFull`.
The 3 unit tests confirm that this process supports `Put`, `Get`, `Delete`, `DeleteRange` operators and is compatible with `Iterators` and `CompactionFilters`.
Pull Request resolved: https://github.com/facebook/rocksdb/pull/8505
Reviewed By: pdillinger
Differential Revision: D29619283
Pulled By: bjlemaire
fbshipit-source-id: 8a99bee76b63a8211bff1a00e0ae32360aaece95
2021-07-10 00:16:00 +00:00
|
|
|
// Create value strings of arbitrary
|
|
|
|
// length RAND_VALUES_LENGTH bytes.
|
|
|
|
p_v6 = rnd.RandomString(RAND_VALUES_LENGTH);
|
|
|
|
p_v7 = rnd.RandomString(RAND_VALUES_LENGTH);
|
|
|
|
p_v8 = rnd.RandomString(RAND_VALUES_LENGTH);
|
|
|
|
p_v9 = rnd.RandomString(RAND_VALUES_LENGTH);
|
|
|
|
ASSERT_OK(Put(KEY6, p_v6));
|
|
|
|
ASSERT_OK(Put(KEY7, p_v7));
|
|
|
|
ASSERT_OK(Put(KEY8, p_v8));
|
|
|
|
ASSERT_OK(Put(KEY9, p_v9));
|
|
|
|
|
|
|
|
ASSERT_OK(Delete(KEY7));
|
|
|
|
}
|
Memtable "MemPurge" prototype (#8454)
Summary:
Implement an experimental feature called "MemPurge", which consists of purging "garbage" bytes out of a memtable and reusing the memtable struct instead of making it immutable and eventually flushing its content to storage.
The prototype is deactivated by default and is not intended for production use; it is intended for correctness and validation testing. At the moment, the "MemPurge" feature can be switched on by using the `options.experimental_allow_mempurge` flag. For this early stage, when the allow_mempurge flag is set to `true`, all flush operations will be rerouted to perform a MemPurge. This is a temporary design decision that will give us time to explore meaningful heuristics for using MemPurge at the right time for relevant workloads. Moreover, the current MemPurge operation only supports `Put`, `Delete`, and `DeleteRange` operations, and handles `Iterator`s as well as `CompactionFilter`s that are invoked at flush time.
Three unit tests are added to `db_flush_test.cc` to test whether MemPurge works correctly (and to check that the previously mentioned operations are fully supported and thoroughly tested).
One noticeable design decision is the timing of the MemPurge operation in the memtable workflow: for this prototype, the mempurge happens when the memtable is switched (and usually made immutable). This is an inefficient process because it implies that the entirety of the MemPurge operation happens while holding the db_mutex. Future commits will make the MemPurge operation a background task (akin to the regular flush operation) and aim at drastically enhancing the performance of this operation. The MemPurge is also not fully "WAL-compatible" yet, but when the WAL is full, or when the regular MemPurge operation fails (or when the purged memtable still needs to be flushed), a regular flush operation takes place. Later commits will also correct these behaviors.
Pull Request resolved: https://github.com/facebook/rocksdb/pull/8454
Reviewed By: anand1976
Differential Revision: D29433971
Pulled By: bjlemaire
fbshipit-source-id: 6af48213554e35048a7e03816955100a80a26dc5
2021-07-02 12:22:03 +00:00
|
|
|
|
Make mempurge a background process (equivalent to in-memory compaction). (#8505)
Summary:
In https://github.com/facebook/rocksdb/issues/8454, I introduced a new process baptized `MemPurge` (memtable garbage collection). This new PR is built upon this past mempurge prototype.
In this PR, I made the `mempurge` process a background task, which provides superior performance since the mempurge process no longer clings to the db_mutex, and addresses severe restrictions from the past iteration (including a scenario where the past mempurge was failing when a memtable was mempurged but was still referred to by an iterator/snapshot/...).
Now the mempurge process resembles an in-memory compaction process: the stack of immutable memtables is filtered out, and the useful payload is used to populate an output memtable. If the output memtable is filled to more than 60% capacity (arbitrary heuristic), the mempurge process is aborted and a regular flush process takes place; otherwise, the output memtable is kept in the immutable memtable stack. Note that adding this output memtable to the `imm()` memtable stack does not trigger another flush process, so the flush thread can go to sleep at the end of a successful mempurge.
MemPurge is activated by making the `experimental_allow_mempurge` flag `true`. When activated, the `MemPurge` process will always happen when the flush reason is `kWriteBufferFull`.
The 3 unit tests confirm that this process supports `Put`, `Get`, `Delete`, `DeleteRange` operators and is compatible with `Iterators` and `CompactionFilters`.
Pull Request resolved: https://github.com/facebook/rocksdb/pull/8505
Reviewed By: pdillinger
Differential Revision: D29619283
Pulled By: bjlemaire
fbshipit-source-id: 8a99bee76b63a8211bff1a00e0ae32360aaece95
2021-07-10 00:16:00 +00:00
|
|
|
// Check that there was at least one mempurge
|
|
|
|
const uint32_t EXPECTED_MIN_MEMPURGE_COUNT = 1;
|
|
|
|
// Check that no SST files were created during flush.
|
|
|
|
const uint32_t EXPECTED_SST_COUNT = 0;
|
Memtable "MemPurge" prototype (#8454)
Summary:
Implement an experimental feature called "MemPurge", which consists of purging "garbage" bytes out of a memtable and reusing the memtable struct instead of making it immutable and eventually flushing its content to storage.
The prototype is deactivated by default and is not intended for production use; it is intended for correctness and validation testing. At the moment, the "MemPurge" feature can be switched on by using the `options.experimental_allow_mempurge` flag. For this early stage, when the allow_mempurge flag is set to `true`, all flush operations will be rerouted to perform a MemPurge. This is a temporary design decision that will give us time to explore meaningful heuristics for using MemPurge at the right time for relevant workloads. Moreover, the current MemPurge operation only supports `Put`, `Delete`, and `DeleteRange` operations, and handles `Iterator`s as well as `CompactionFilter`s that are invoked at flush time.
Three unit tests are added to `db_flush_test.cc` to test whether MemPurge works correctly (and to check that the previously mentioned operations are fully supported and thoroughly tested).
One noticeable design decision is the timing of the MemPurge operation in the memtable workflow: for this prototype, the mempurge happens when the memtable is switched (and usually made immutable). This is an inefficient process because it implies that the entirety of the MemPurge operation happens while holding the db_mutex. Future commits will make the MemPurge operation a background task (akin to the regular flush operation) and aim at drastically enhancing the performance of this operation. The MemPurge is also not fully "WAL-compatible" yet, but when the WAL is full, or when the regular MemPurge operation fails (or when the purged memtable still needs to be flushed), a regular flush operation takes place. Later commits will also correct these behaviors.
Pull Request resolved: https://github.com/facebook/rocksdb/pull/8454
Reviewed By: anand1976
Differential Revision: D29433971
Pulled By: bjlemaire
fbshipit-source-id: 6af48213554e35048a7e03816955100a80a26dc5
2021-07-02 12:22:03 +00:00
|
|
|
|
2021-11-03 04:53:23 +00:00
|
|
|
EXPECT_GE(mempurge_count.exchange(0), EXPECTED_MIN_MEMPURGE_COUNT);
|
|
|
|
EXPECT_EQ(sst_count.exchange(0), EXPECTED_SST_COUNT);
|
Memtable "MemPurge" prototype (#8454)
Summary:
Implement an experimental feature called "MemPurge", which consists of purging "garbage" bytes out of a memtable and reusing the memtable struct instead of making it immutable and eventually flushing its content to storage.
The prototype is deactivated by default and is not intended for production use; it is intended for correctness and validation testing. At the moment, the "MemPurge" feature can be switched on by using the `options.experimental_allow_mempurge` flag. For this early stage, when the allow_mempurge flag is set to `true`, all flush operations will be rerouted to perform a MemPurge. This is a temporary design decision that will give us time to explore meaningful heuristics for using MemPurge at the right time for relevant workloads. Moreover, the current MemPurge operation only supports `Put`, `Delete`, and `DeleteRange` operations, and handles `Iterator`s as well as `CompactionFilter`s that are invoked at flush time.
Three unit tests are added to `db_flush_test.cc` to test whether MemPurge works correctly (and to check that the previously mentioned operations are fully supported and thoroughly tested).
One noticeable design decision is the timing of the MemPurge operation in the memtable workflow: for this prototype, the mempurge happens when the memtable is switched (and usually made immutable). This is an inefficient process because it implies that the entirety of the MemPurge operation happens while holding the db_mutex. Future commits will make the MemPurge operation a background task (akin to the regular flush operation) and aim at drastically enhancing the performance of this operation. The MemPurge is also not fully "WAL-compatible" yet, but when the WAL is full, or when the regular MemPurge operation fails (or when the purged memtable still needs to be flushed), a regular flush operation takes place. Later commits will also correct these behaviors.
Pull Request resolved: https://github.com/facebook/rocksdb/pull/8454
Reviewed By: anand1976
Differential Revision: D29433971
Pulled By: bjlemaire
fbshipit-source-id: 6af48213554e35048a7e03816955100a80a26dc5
2021-07-02 12:22:03 +00:00
|
|
|
|
Make mempurge a background process (equivalent to in-memory compaction). (#8505)
Summary:
In https://github.com/facebook/rocksdb/issues/8454, I introduced a new process baptized `MemPurge` (memtable garbage collection). This new PR is built upon this past mempurge prototype.
In this PR, I made the `mempurge` process a background task, which provides superior performance since the mempurge process no longer clings to the db_mutex, and addresses severe restrictions from the past iteration (including a scenario where the past mempurge was failing when a memtable was mempurged but was still referred to by an iterator/snapshot/...).
Now the mempurge process resembles an in-memory compaction process: the stack of immutable memtables is filtered out, and the useful payload is used to populate an output memtable. If the output memtable is filled to more than 60% capacity (arbitrary heuristic), the mempurge process is aborted and a regular flush process takes place; otherwise, the output memtable is kept in the immutable memtable stack. Note that adding this output memtable to the `imm()` memtable stack does not trigger another flush process, so the flush thread can go to sleep at the end of a successful mempurge.
MemPurge is activated by making the `experimental_allow_mempurge` flag `true`. When activated, the `MemPurge` process will always happen when the flush reason is `kWriteBufferFull`.
The 3 unit tests confirm that this process supports `Put`, `Get`, `Delete`, `DeleteRange` operators and is compatible with `Iterators` and `CompactionFilters`.
Pull Request resolved: https://github.com/facebook/rocksdb/pull/8505
Reviewed By: pdillinger
Differential Revision: D29619283
Pulled By: bjlemaire
fbshipit-source-id: 8a99bee76b63a8211bff1a00e0ae32360aaece95
2021-07-10 00:16:00 +00:00
|
|
|
// Verify that the ConditionalUpdateCompactionFilter
|
|
|
|
// updated the values of KEY2 and KEY3, and not KEY4 and KEY5.
|
|
|
|
ASSERT_EQ(Get(KEY1), NOT_FOUND);
|
|
|
|
ASSERT_EQ(Get(KEY2), NEW_VALUE);
|
|
|
|
ASSERT_EQ(Get(KEY3), NEW_VALUE);
|
|
|
|
ASSERT_EQ(Get(KEY4), p_v4);
|
|
|
|
ASSERT_EQ(Get(KEY5), p_v5);
|
Memtable "MemPurge" prototype (#8454)
Summary:
Implement an experimental feature called "MemPurge", which consists of purging "garbage" bytes out of a memtable and reusing the memtable struct instead of making it immutable and eventually flushing its content to storage.
The prototype is deactivated by default and is not intended for production use; it is intended for correctness and validation testing. At the moment, the "MemPurge" feature can be switched on by using the `options.experimental_allow_mempurge` flag. For this early stage, when the allow_mempurge flag is set to `true`, all flush operations will be rerouted to perform a MemPurge. This is a temporary design decision that will give us time to explore meaningful heuristics for using MemPurge at the right time for relevant workloads. Moreover, the current MemPurge operation only supports `Put`, `Delete`, and `DeleteRange` operations, and handles `Iterator`s as well as `CompactionFilter`s that are invoked at flush time.
Three unit tests are added to `db_flush_test.cc` to test whether MemPurge works correctly (and to check that the previously mentioned operations are fully supported and thoroughly tested).
One noticeable design decision is the timing of the MemPurge operation in the memtable workflow: for this prototype, the mempurge happens when the memtable is switched (and usually made immutable). This is an inefficient process because it implies that the entirety of the MemPurge operation happens while holding the db_mutex. Future commits will make the MemPurge operation a background task (akin to the regular flush operation) and aim at drastically enhancing the performance of this operation. The MemPurge is also not fully "WAL-compatible" yet, but when the WAL is full, or when the regular MemPurge operation fails (or when the purged memtable still needs to be flushed), a regular flush operation takes place. Later commits will also correct these behaviors.
Pull Request resolved: https://github.com/facebook/rocksdb/pull/8454
Reviewed By: anand1976
Differential Revision: D29433971
Pulled By: bjlemaire
fbshipit-source-id: 6af48213554e35048a7e03816955100a80a26dc5
2021-07-02 12:22:03 +00:00
|
|
|
}
|
|
|
|
|
2021-11-19 17:55:10 +00:00
|
|
|
TEST_F(DBFlushTest, DISABLED_MemPurgeWALSupport) {
|
2021-07-16 00:48:17 +00:00
|
|
|
Options options = CurrentOptions();
|
|
|
|
|
|
|
|
options.statistics = CreateDBStatistics();
|
|
|
|
options.statistics->set_stats_level(StatsLevel::kAll);
|
|
|
|
options.create_if_missing = true;
|
|
|
|
options.compression = kNoCompression;
|
|
|
|
options.inplace_update_support = false;
|
|
|
|
options.allow_concurrent_memtable_write = true;
|
|
|
|
|
Add simple heuristics for experimental mempurge. (#8583)
Summary:
Add `experimental_mempurge_policy` option flag and introduce two new `MemPurge` (Memtable Garbage Collection) policies: 'ALWAYS' and 'ALTERNATE'. Default value: ALTERNATE.
`ALWAYS`: every flush will first go through a `MemPurge` process. If the output is too big to fit into a single memtable, then the mempurge is aborted and a regular flush process carries on. `ALWAYS` is designed for users who need to reduce the number of L0 SST files created to a strict minimum, and who can afford a small dent in performance (possibly hits to CPU usage, read efficiency, and maximum burst write throughput).
`ALTERNATE`: a flush is transformed into a `MemPurge` except when one of the memtables being flushed is the product of a previous `MemPurge`. `ALTERNATE` is a good tradeoff between the reduction in the number of L0 SST files created and performance. `ALTERNATE` performs particularly well for completely random garbage ratios, or garbage ratios anywhere in (0%,50%], and even higher ratios when there is wild variability in garbage ratios.
This PR also includes support for `experimental_mempurge_policy` in `db_bench`.
Testing was done locally by replacing all the `MemPurge` policies of the unit tests with `ALTERNATE`, as well as local testing with `db_crashtest.py` `whitebox` and `blackbox`. Overall, if an `ALWAYS` mempurge policy passes the tests, there is no reason why an `ALTERNATE` policy would fail, and therefore the mempurge policy was set to `ALWAYS` for all mempurge unit tests.
Pull Request resolved: https://github.com/facebook/rocksdb/pull/8583
Reviewed By: pdillinger
Differential Revision: D29888050
Pulled By: bjlemaire
fbshipit-source-id: e2cf26646d66679f6f5fb29842624615610759c1
2021-07-26 18:55:27 +00:00
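A short sketch of how the two policies described above could be evaluated. This is illustrative only; the enum and helper below are hypothetical and are not the actual RocksDB implementation.

enum class MemPurgePolicy { kAlways, kAlternate };

// Hypothetical helper mirroring the policy descriptions in the summary.
inline bool UseMemPurgeForFlush(MemPurgePolicy policy,
                                bool input_contains_mempurge_output) {
  switch (policy) {
    case MemPurgePolicy::kAlways:
      // Every flush first attempts a mempurge.
      return true;
    case MemPurgePolicy::kAlternate:
      // Skip the mempurge when a memtable being flushed is itself the
      // product of a previous mempurge.
      return !input_contains_mempurge_output;
  }
  return false;
}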
|
|
|
// Enforce size of a single MemTable to 128KB.
|
2021-07-16 00:48:17 +00:00
|
|
|
options.write_buffer_size = 128 << 10;
|
2022-06-23 16:42:18 +00:00
|
|
|
// Activate the MemPurge prototype
|
|
|
|
// (values >1.0 are equivalent to 1.0).
|
|
|
|
options.experimental_mempurge_threshold = 2.5;
|
2021-08-19 00:39:00 +00:00
|
|
|
|
2021-07-16 00:48:17 +00:00
|
|
|
ASSERT_OK(TryReopen(options));
|
|
|
|
|
|
|
|
const size_t KVSIZE = 10;
|
|
|
|
|
|
|
|
do {
|
|
|
|
CreateAndReopenWithCF({"pikachu"}, options);
|
|
|
|
ASSERT_OK(Put(1, "foo", "v1"));
|
|
|
|
ASSERT_OK(Put(1, "baz", "v5"));
|
|
|
|
|
|
|
|
ReopenWithColumnFamilies({"default", "pikachu"}, options);
|
|
|
|
ASSERT_EQ("v1", Get(1, "foo"));
|
|
|
|
|
|
|
|
ASSERT_EQ("v1", Get(1, "foo"));
|
|
|
|
ASSERT_EQ("v5", Get(1, "baz"));
|
|
|
|
ASSERT_OK(Put(0, "bar", "v2"));
|
|
|
|
ASSERT_OK(Put(1, "bar", "v2"));
|
|
|
|
ASSERT_OK(Put(1, "foo", "v3"));
|
2021-11-03 04:53:23 +00:00
|
|
|
std::atomic<uint32_t> mempurge_count{0};
|
|
|
|
std::atomic<uint32_t> sst_count{0};
|
2021-07-16 00:48:17 +00:00
|
|
|
ROCKSDB_NAMESPACE::SyncPoint::GetInstance()->SetCallBack(
|
|
|
|
"DBImpl::FlushJob:MemPurgeSuccessful",
|
|
|
|
[&](void* /*arg*/) { mempurge_count++; });
|
|
|
|
ROCKSDB_NAMESPACE::SyncPoint::GetInstance()->SetCallBack(
|
|
|
|
"DBImpl::FlushJob:SSTFileCreated", [&](void* /*arg*/) { sst_count++; });
|
|
|
|
ROCKSDB_NAMESPACE::SyncPoint::GetInstance()->EnableProcessing();
|
|
|
|
|
|
|
|
std::vector<std::string> keys;
|
|
|
|
for (size_t k = 0; k < KVSIZE; k++) {
|
|
|
|
keys.push_back("IamKey" + std::to_string(k));
|
|
|
|
}
|
|
|
|
|
|
|
|
std::string RNDKEY, RNDVALUE;
|
|
|
|
const std::string NOT_FOUND = "NOT_FOUND";
|
|
|
|
|
|
|
|
// Heavy overwrite workload,
|
|
|
|
// more than would fit in maximum allowed memtables.
|
|
|
|
Random rnd(719);
|
|
|
|
const size_t NUM_REPEAT = 100;
|
Add simple heuristics for experimental mempurge. (#8583)
Summary:
Add `experimental_mempurge_policy` option flag and introduce two new `MemPurge` (Memtable Garbage Collection) policies: 'ALWAYS' and 'ALTERNATE'. Default value: ALTERNATE.
`ALWAYS`: every flush will first go through a `MemPurge` process. If the output is too big to fit into a single memtable, then the mempurge is aborted and a regular flush process carries on. `ALWAYS` is designed for users who need to reduce the number of L0 SST files created to a strict minimum, and who can afford a small dent in performance (possibly hits to CPU usage, read efficiency, and maximum burst write throughput).
`ALTERNATE`: a flush is transformed into a `MemPurge` except when one of the memtables being flushed is the product of a previous `MemPurge`. `ALTERNATE` is a good tradeoff between the reduction in the number of L0 SST files created and performance. `ALTERNATE` performs particularly well for completely random garbage ratios, or garbage ratios anywhere in (0%,50%], and even higher ratios when there is wild variability in garbage ratios.
This PR also includes support for `experimental_mempurge_policy` in `db_bench`.
Testing was done locally by replacing all the `MemPurge` policies of the unit tests with `ALTERNATE`, as well as local testing with `db_crashtest.py` `whitebox` and `blackbox`. Overall, if an `ALWAYS` mempurge policy passes the tests, there is no reason why an `ALTERNATE` policy would fail, and therefore the mempurge policy was set to `ALWAYS` for all mempurge unit tests.
Pull Request resolved: https://github.com/facebook/rocksdb/pull/8583
Reviewed By: pdillinger
Differential Revision: D29888050
Pulled By: bjlemaire
fbshipit-source-id: e2cf26646d66679f6f5fb29842624615610759c1
2021-07-26 18:55:27 +00:00
|
|
|
const size_t RAND_KEY_LENGTH = 4096;
|
2021-07-16 00:48:17 +00:00
|
|
|
const size_t RAND_VALUES_LENGTH = 1024;
|
|
|
|
std::vector<std::string> values_default(KVSIZE), values_pikachu(KVSIZE);
|
|
|
|
|
|
|
|
// Insert a very first set of keys that will be
|
|
|
|
// mempurged at least once.
|
|
|
|
for (size_t k = 0; k < KVSIZE / 2; k++) {
|
|
|
|
values_default[k] = rnd.RandomString(RAND_VALUES_LENGTH);
|
|
|
|
values_pikachu[k] = rnd.RandomString(RAND_VALUES_LENGTH);
|
|
|
|
}
|
|
|
|
|
|
|
|
// Insert keys[0:KVSIZE/2] to
|
|
|
|
// both 'default' and 'pikachu' CFs.
|
|
|
|
for (size_t k = 0; k < KVSIZE / 2; k++) {
|
|
|
|
ASSERT_OK(Put(0, keys[k], values_default[k]));
|
|
|
|
ASSERT_OK(Put(1, keys[k], values_pikachu[k]));
|
|
|
|
}
|
|
|
|
|
|
|
|
// Check that the insertion was seamless.
|
|
|
|
for (size_t k = 0; k < KVSIZE / 2; k++) {
|
|
|
|
ASSERT_EQ(Get(0, keys[k]), values_default[k]);
|
|
|
|
ASSERT_EQ(Get(1, keys[k]), values_pikachu[k]);
|
|
|
|
}
|
|
|
|
|
|
|
|
// Insertion of K-V pairs, multiple times (overwrites)
|
|
|
|
// into 'default' CF. Will trigger mempurge.
|
|
|
|
for (size_t j = 0; j < NUM_REPEAT; j++) {
|
|
|
|
// Create value strings of arbitrary length RAND_VALUES_LENGTH bytes.
|
|
|
|
for (size_t k = KVSIZE / 2; k < KVSIZE; k++) {
|
|
|
|
values_default[k] = rnd.RandomString(RAND_VALUES_LENGTH);
|
|
|
|
}
|
|
|
|
|
|
|
|
// Insert K-V into default CF.
|
|
|
|
for (size_t k = KVSIZE / 2; k < KVSIZE; k++) {
|
|
|
|
ASSERT_OK(Put(0, keys[k], values_default[k]));
|
|
|
|
}
|
|
|
|
|
|
|
|
// Check key validity, for all keys, both in
|
|
|
|
// default and pikachu CFs.
|
|
|
|
for (size_t k = 0; k < KVSIZE; k++) {
|
|
|
|
ASSERT_EQ(Get(0, keys[k]), values_default[k]);
|
|
|
|
}
|
|
|
|
// Note that at this point, only keys[0:KVSIZE/2]
|
|
|
|
// have been inserted into Pikachu.
|
|
|
|
for (size_t k = 0; k < KVSIZE / 2; k++) {
|
|
|
|
ASSERT_EQ(Get(1, keys[k]), values_pikachu[k]);
|
|
|
|
}
|
|
|
|
}
|
|
|
|
|
|
|
|
// Insertion of K-V pairs, multiple times (overwrites)
|
|
|
|
// into 'pikachu' CF. Will trigger mempurge.
|
|
|
|
// Check that we keep the older logs for 'default' imm().
|
|
|
|
for (size_t j = 0; j < NUM_REPEAT; j++) {
|
|
|
|
// Create value strings of arbitrary length RAND_VALUES_LENGTH bytes.
|
|
|
|
for (size_t k = KVSIZE / 2; k < KVSIZE; k++) {
|
|
|
|
values_pikachu[k] = rnd.RandomString(RAND_VALUES_LENGTH);
|
|
|
|
}
|
|
|
|
|
|
|
|
// Insert K-V into pikachu CF.
|
|
|
|
for (size_t k = KVSIZE / 2; k < KVSIZE; k++) {
|
|
|
|
ASSERT_OK(Put(1, keys[k], values_pikachu[k]));
|
|
|
|
}
|
|
|
|
|
|
|
|
// Check key validity, for all keys,
|
|
|
|
// both in default and pikachu.
|
|
|
|
for (size_t k = 0; k < KVSIZE; k++) {
|
|
|
|
ASSERT_EQ(Get(0, keys[k]), values_default[k]);
|
|
|
|
ASSERT_EQ(Get(1, keys[k]), values_pikachu[k]);
|
|
|
|
}
|
|
|
|
}
|
|
|
|
|
|
|
|
// Check that there was at least one mempurge
|
|
|
|
const uint32_t EXPECTED_MIN_MEMPURGE_COUNT = 1;
|
|
|
|
// Check that no SST files were created during flush.
|
|
|
|
const uint32_t EXPECTED_SST_COUNT = 0;
|
|
|
|
|
2021-11-03 04:53:23 +00:00
|
|
|
EXPECT_GE(mempurge_count.exchange(0), EXPECTED_MIN_MEMPURGE_COUNT);
|
2021-08-11 01:07:48 +00:00
|
|
|
if (options.experimental_mempurge_threshold ==
|
|
|
|
std::numeric_limits<double>::max()) {
|
2021-11-03 04:53:23 +00:00
|
|
|
EXPECT_EQ(sst_count.exchange(0), EXPECTED_SST_COUNT);
|
Add simple heuristics for experimental mempurge. (#8583)
Summary:
Add `experimental_mempurge_policy` option flag and introduce two new `MemPurge` (Memtable Garbage Collection) policies: 'ALWAYS' and 'ALTERNATE'. Default value: ALTERNATE.
`ALWAYS`: every flush will first go through a `MemPurge` process. If the output is too big to fit into a single memtable, then the mempurge is aborted and a regular flush process carries on. `ALWAYS` is designed for users who need to reduce the number of L0 SST files created to a strict minimum, and who can afford a small dent in performance (possibly hits to CPU usage, read efficiency, and maximum burst write throughput).
`ALTERNATE`: a flush is transformed into a `MemPurge` except when one of the memtables being flushed is the product of a previous `MemPurge`. `ALTERNATE` is a good tradeoff between the reduction in the number of L0 SST files created and performance. `ALTERNATE` performs particularly well for completely random garbage ratios, or garbage ratios anywhere in (0%,50%], and even higher ratios when there is wild variability in garbage ratios.
This PR also includes support for `experimental_mempurge_policy` in `db_bench`.
Testing was done locally by replacing all the `MemPurge` policies of the unit tests with `ALTERNATE`, as well as local testing with `db_crashtest.py` `whitebox` and `blackbox`. Overall, if an `ALWAYS` mempurge policy passes the tests, there is no reason why an `ALTERNATE` policy would fail, and therefore the mempurge policy was set to `ALWAYS` for all mempurge unit tests.
Pull Request resolved: https://github.com/facebook/rocksdb/pull/8583
Reviewed By: pdillinger
Differential Revision: D29888050
Pulled By: bjlemaire
fbshipit-source-id: e2cf26646d66679f6f5fb29842624615610759c1
2021-07-26 18:55:27 +00:00
|
|
|
}
|
2021-07-16 00:48:17 +00:00
|
|
|
|
|
|
|
ReopenWithColumnFamilies({"default", "pikachu"}, options);
|
|
|
|
// Check that there was no data corruption anywhere,
|
|
|
|
// neither in the 'default' nor the 'pikachu' CF.
|
|
|
|
ASSERT_EQ("v3", Get(1, "foo"));
|
|
|
|
ASSERT_OK(Put(1, "foo", "v4"));
|
|
|
|
ASSERT_EQ("v4", Get(1, "foo"));
|
|
|
|
ASSERT_EQ("v2", Get(1, "bar"));
|
|
|
|
ASSERT_EQ("v5", Get(1, "baz"));
|
|
|
|
// Check keys in 'Default' and 'Pikachu'.
|
|
|
|
// keys[0:KVSIZE/2] were for sure contained
|
|
|
|
// in the imm() at Reopen/recovery time.
|
|
|
|
for (size_t k = 0; k < KVSIZE; k++) {
|
|
|
|
ASSERT_EQ(Get(0, keys[k]), values_default[k]);
|
|
|
|
ASSERT_EQ(Get(1, keys[k]), values_pikachu[k]);
|
|
|
|
}
|
|
|
|
// Insertion of random K-V pairs to trigger
|
|
|
|
// a flush in the Pikachu CF.
|
|
|
|
for (size_t j = 0; j < NUM_REPEAT; j++) {
|
|
|
|
RNDKEY = rnd.RandomString(RAND_KEY_LENGTH);
|
|
|
|
RNDVALUE = rnd.RandomString(RAND_VALUES_LENGTH);
|
|
|
|
ASSERT_OK(Put(1, RNDKEY, RNDVALUE));
|
|
|
|
}
|
|
|
|
// Assert that there was at least one flush to storage.
|
2021-11-03 04:53:23 +00:00
|
|
|
EXPECT_GT(sst_count.exchange(0), EXPECTED_SST_COUNT);
|
2021-07-16 00:48:17 +00:00
|
|
|
ReopenWithColumnFamilies({"default", "pikachu"}, options);
|
|
|
|
ASSERT_EQ("v4", Get(1, "foo"));
|
|
|
|
ASSERT_EQ("v2", Get(1, "bar"));
|
|
|
|
ASSERT_EQ("v5", Get(1, "baz"));
|
|
|
|
// Since values in default are held in mutable mem()
|
|
|
|
// and imm(), check that the flush in pikachu didn't
|
|
|
|
// affect these values.
|
|
|
|
for (size_t k = 0; k < KVSIZE; k++) {
|
|
|
|
ASSERT_EQ(Get(0, keys[k]), values_default[k]);
|
|
|
|
ASSERT_EQ(Get(1, keys[k]), values_pikachu[k]);
|
|
|
|
}
|
|
|
|
ASSERT_EQ(Get(1, RNDKEY), RNDVALUE);
|
|
|
|
} while (ChangeWalOptions());
|
|
|
|
}
|
|
|
|
|
Fix mempurge crash reported in #8958 (#9671)
Summary:
Change the `MemPurge` code to address a failure during a crash test reported in https://github.com/facebook/rocksdb/issues/8958.
### Details and results of the crash investigation:
These failures happened in a specific scenario where the list of immutable tables was composed of 2 or more memtables, and the last memtable was the output of a previous `Mempurge` operation. Because the `PickMemtablesToFlush` function included a sorting of the memtables (previous PR related to the Mempurge project), and because the `VersionEdit` of the flush class is piggybacked onto a single one of these memtables, the `VersionEdit` was not properly selected and applied to the `VersionSet` of the DB. Since the `VersionSet` was not edited properly, the database was losing track of the SST file created during the flush process, which was subsequently deleted (and as you can expect, caused the tests to crash).
The following command consistently failed, which was quite convenient to investigate the issue:
`$ while rm -rf /dev/shm/single_stress && ./db_stress --clear_column_family_one_in=0 --column_families=1 --db=/dev/shm/single_stress --experimental_mempurge_threshold=5.493146827397074 --flush_one_in=10000 --reopen=0 --write_buffer_size=262144 --value_size_mult=33 --max_write_buffer_number=3 -ops_per_thread=10000; do : ; done`
### Solution proposed
The memtables are no longer sorted based on their `memtableID` in the `PickMemtablesToFlush` function. Additionally, the `next_log_number` of the memtable created as an output of the `Mempurge` function now takes the correct value (the log number of the first memtable being mempurged). Finally, the VersionEdit object of the flush class now takes the maximum `next_log_number` of the stack of memtables being flushed, which doesn't change anything when Mempurge is `off` but becomes necessary when Mempurge is `on`.
### Testing of the solution
The following command no longer fails:
``$ while rm -rf /dev/shm/single_stress && ./db_stress --clear_column_family_one_in=0 --column_families=1 --db=/dev/shm/single_stress --experimental_mempurge_threshold=5.493146827397074 --flush_one_in=10000 --reopen=0 --write_buffer_size=262144 --value_size_mult=33 --max_write_buffer_number=3 -ops_per_thread=10000; do : ; done``
Additionally, I ran `db_crashtest` (`whitebox` and `blackbox`) for 2.5 hours with MemPurge on and did not observe any crash.
Pull Request resolved: https://github.com/facebook/rocksdb/pull/9671
Reviewed By: pdillinger
Differential Revision: D34697424
Pulled By: bjlemaire
fbshipit-source-id: d1ab675b361904351ac81a35c184030e52222874
2022-03-10 23:16:55 +00:00
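A toy sketch of the log-number part of the fix described above: the VersionEdit is given the maximum next-log-number over the whole stack of memtables being flushed. The helper below is hypothetical and only illustrates the computation, not the actual RocksDB code path.

#include <algorithm>
#include <cstdint>
#include <vector>

// Hypothetical helper: pick the next_log_number recorded in the VersionEdit.
inline std::uint64_t MaxNextLogNumber(
    const std::vector<std::uint64_t>& next_log_numbers) {
  std::uint64_t max_next_log = 0;
  for (std::uint64_t n : next_log_numbers) {
    max_next_log = std::max(max_next_log, n);
  }
  // Unchanged behavior when MemPurge is off; required when it is on.
  return max_next_log;
}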
|
|
|
TEST_F(DBFlushTest, MemPurgeCorrectLogNumberAndSSTFileCreation) {
|
|
|
|
// Before our bug fix, we noticed that when 2 memtables were
|
|
|
|
// being flushed (with one memtable being the output of a
|
|
|
|
// previous MemPurge and one memtable being a newly-sealed memtable),
|
|
|
|
// the SST file created was not properly added to the DB version
|
|
|
|
// (via the VersionEdit obj), leading to data loss (the SST file
|
|
|
|
// was later being purged as an obsolete file).
|
|
|
|
// Therefore, we reproduce this scenario to test our fix.
|
|
|
|
Options options = CurrentOptions();
|
|
|
|
|
|
|
|
options.create_if_missing = true;
|
|
|
|
options.compression = kNoCompression;
|
|
|
|
options.inplace_update_support = false;
|
|
|
|
options.allow_concurrent_memtable_write = true;
|
|
|
|
|
|
|
|
// Enforce size of a single MemTable to 1MB (1MB = 1048576 bytes).
|
|
|
|
options.write_buffer_size = 1 << 20;
|
|
|
|
// Activate the MemPurge prototype.
|
|
|
|
options.experimental_mempurge_threshold = 1.0;
|
|
|
|
|
|
|
|
// Force to have more than one memtable to trigger a flush.
|
|
|
|
// For some reason this option does not seem to be enforced,
|
|
|
|
// so the assertion further below is designed to make sure that we
|
|
|
|
// are actually exercising the intended scenario.
|
|
|
|
options.min_write_buffer_number_to_merge = 3;
|
|
|
|
options.max_write_buffer_number = 5;
|
|
|
|
options.max_write_buffer_size_to_maintain = 2 * (options.write_buffer_size);
|
|
|
|
options.disable_auto_compactions = true;
|
|
|
|
ASSERT_OK(TryReopen(options));
|
|
|
|
|
|
|
|
std::atomic<uint32_t> mempurge_count{0};
|
|
|
|
std::atomic<uint32_t> sst_count{0};
|
|
|
|
ROCKSDB_NAMESPACE::SyncPoint::GetInstance()->SetCallBack(
|
|
|
|
"DBImpl::FlushJob:MemPurgeSuccessful",
|
|
|
|
[&](void* /*arg*/) { mempurge_count++; });
|
|
|
|
ROCKSDB_NAMESPACE::SyncPoint::GetInstance()->SetCallBack(
|
|
|
|
"DBImpl::FlushJob:SSTFileCreated", [&](void* /*arg*/) { sst_count++; });
|
|
|
|
ROCKSDB_NAMESPACE::SyncPoint::GetInstance()->EnableProcessing();
|
|
|
|
|
|
|
|
// Dummy variable used for the following callback function.
|
|
|
|
uint64_t ZERO = 0;
|
|
|
|
// We will first execute mempurge operations exclusively.
|
|
|
|
// Therefore, when the first flush is triggered, we want to make
|
|
|
|
// sure there are at least 2 memtables being flushed: one output
|
|
|
|
// from a previous mempurge, and one newly sealed memtable.
|
|
|
|
// This is when we observed in the past that some SST files created
|
|
|
|
// were not properly added to the DB version (via the VersionEdit obj).
|
|
|
|
std::atomic<uint64_t> num_memtable_at_first_flush(0);
|
|
|
|
ROCKSDB_NAMESPACE::SyncPoint::GetInstance()->SetCallBack(
|
|
|
|
"FlushJob::WriteLevel0Table:num_memtables", [&](void* arg) {
|
Prefer static_cast in place of most reinterpret_cast (#12308)
Summary:
The following are risks associated with pointer-to-pointer reinterpret_cast:
* Can produce the "wrong result" (crash or memory corruption). IIRC, in theory this can happen for any up-cast or down-cast for a non-standard-layout type, though in practice would only happen for multiple inheritance cases (where the base class pointer might be "inside" the derived object). We don't use multiple inheritance a lot, but we do.
* Can mask useful compiler errors upon code change, including converting between unrelated pointer types that you are expecting to be related, and converting between pointer and scalar types unintentionally.
I can only think of some obscure cases where static_cast could be troublesome when it compiles as a replacement:
* Going through `void*` could plausibly cause unnecessary or broken pointer arithmetic. Suppose we have
`struct Derived: public Base1, public Base2`. If we have `Derived*` -> `void*` -> `Base2*` -> `Derived*` through reinterpret casts, this could plausibly work (though technically UB) assuming the `Base2*` is not dereferenced. Changing to static cast could introduce breaking pointer arithmetic.
* Unnecessary (but safe) pointer arithmetic could arise in a case like `Derived*` -> `Base2*` -> `Derived*` where before the Base2 pointer might not have been dereferenced. This could potentially affect performance.
With some light scripting, I tried replacing pointer-to-pointer reinterpret_casts with static_cast and kept the cases that still compile. Most occurrences of reinterpret_cast have successfully been changed (except for java/ and third-party/). 294 changed, 257 remain.
A couple of related interventions included here:
* Previously Cache::Handle was not actually derived from in the implementations and just used as a `void*` stand-in with reinterpret_cast. Now there is a relationship to allow static_cast. In theory, this could introduce pointer arithmetic (as described above) but is unlikely without multiple inheritance AND non-empty Cache::Handle.
* Remove some unnecessary casts to void* as this is allowed to be implicit (for better or worse).
Most of the remaining reinterpret_casts are for converting to/from raw bytes of objects. We could consider better idioms for these patterns in follow-up work.
I wish there were a way to implement a template variant of static_cast that would only compile if no pointer arithmetic is generated, but best I can tell, this is not possible. AFAIK the best you could do is a dynamic check that the void* conversion after the static cast is unchanged.
Pull Request resolved: https://github.com/facebook/rocksdb/pull/12308
Test Plan: existing tests, CI
Reviewed By: ltamasi
Differential Revision: D53204947
Pulled By: pdillinger
fbshipit-source-id: 9de23e618263b0d5b9820f4e15966876888a16e2
2024-02-07 18:44:11 +00:00
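A toy example of the pointer-adjustment difference discussed above (not from the PR): with multiple inheritance, static_cast adjusts the pointer to reach the second base subobject, while reinterpret_cast simply reuses the same address.

struct Base1 { int a; };
struct Base2 { int b; };
struct Derived : public Base1, public Base2 { int c; };

inline void CastExample(Derived* d) {
  // static_cast performs the required adjustment to the Base2 subobject.
  Base2* via_static = static_cast<Base2*>(d);
  // reinterpret_cast performs no adjustment; via_reinterpret points at the
  // start of the Derived object, which is not where Base2 lives.
  Base2* via_reinterpret = reinterpret_cast<Base2*>(d);
  (void)via_static;
  (void)via_reinterpret;
}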
|
|
|
uint64_t* mems_size = static_cast<uint64_t*>(arg);
|
Fix mempurge crash reported in #8958 (#9671)
Summary:
Change the `MemPurge` code to address a failure during a crash test reported in https://github.com/facebook/rocksdb/issues/8958.
### Details and results of the crash investigation:
These failures happened in a specific scenario where the list of immutable tables was composed of 2 or more memtables, and the last memtable was the output of a previous `Mempurge` operation. Because the `PickMemtablesToFlush` function included a sorting of the memtables (previous PR related to the Mempurge project), and because the `VersionEdit` of the flush class is piggybacked onto a single one of these memtables, the `VersionEdit` was not properly selected and applied to the `VersionSet` of the DB. Since the `VersionSet` was not edited properly, the database was losing track of the SST file created during the flush process, which was subsequently deleted (and as you can expect, caused the tests to crash).
The following command consistently failed, which was quite convenient to investigate the issue:
`$ while rm -rf /dev/shm/single_stress && ./db_stress --clear_column_family_one_in=0 --column_families=1 --db=/dev/shm/single_stress --experimental_mempurge_threshold=5.493146827397074 --flush_one_in=10000 --reopen=0 --write_buffer_size=262144 --value_size_mult=33 --max_write_buffer_number=3 -ops_per_thread=10000; do : ; done`
### Solution proposed
The memtables are no longer sorted based on their `memtableID` in the `PickMemtablesToFlush` function. Additionally, the `next_log_number` of the memtable created as an output of the `Mempurge` function now takes the correct value (the log number of the first memtable being mempurged). Finally, the VersionEdit object of the flush class now takes the maximum `next_log_number` of the stack of memtables being flushed, which doesn't change anything when Mempurge is `off` but becomes necessary when Mempurge is `on`.
### Testing of the solution
The following command no longer fails:
``$ while rm -rf /dev/shm/single_stress && ./db_stress --clear_column_family_one_in=0 --column_families=1 --db=/dev/shm/single_stress --experimental_mempurge_threshold=5.493146827397074 --flush_one_in=10000 --reopen=0 --write_buffer_size=262144 --value_size_mult=33 --max_write_buffer_number=3 -ops_per_thread=10000; do : ; done``
Additionally, I ran `db_crashtest` (`whitebox` and `blackbox`) for 2.5 hours with MemPurge on and did not observe any crash.
Pull Request resolved: https://github.com/facebook/rocksdb/pull/9671
Reviewed By: pdillinger
Differential Revision: D34697424
Pulled By: bjlemaire
fbshipit-source-id: d1ab675b361904351ac81a35c184030e52222874
2022-03-10 23:16:55 +00:00
|
|
|
// atomic_compare_exchange_strong sometimes updates the value
|
|
|
|
// of ZERO (the "expected" object), so we make sure ZERO is indeed...
|
|
|
|
// zero.
|
|
|
|
ZERO = 0;
|
|
|
|
std::atomic_compare_exchange_strong(&num_memtable_at_first_flush, &ZERO,
|
|
|
|
*mems_size);
|
|
|
|
});
|
|
|
|
|
|
|
|
const std::vector<std::string> KEYS = {
|
|
|
|
"ThisIsKey1", "ThisIsKey2", "ThisIsKey3", "ThisIsKey4", "ThisIsKey5",
|
|
|
|
"ThisIsKey6", "ThisIsKey7", "ThisIsKey8", "ThisIsKey9"};
|
|
|
|
const std::string NOT_FOUND = "NOT_FOUND";
|
|
|
|
|
|
|
|
Random rnd(117);
|
|
|
|
const uint64_t NUM_REPEAT_OVERWRITES = 100;
|
|
|
|
const uint64_t NUM_RAND_INSERTS = 500;
|
|
|
|
const uint64_t RAND_VALUES_LENGTH = 10240;
|
|
|
|
|
|
|
|
std::string key, value;
|
|
|
|
std::vector<std::string> values(9, "");
|
|
|
|
|
|
|
|
// Keys used to check that no SST file disappeared.
|
|
|
|
for (uint64_t k = 0; k < 5; k++) {
|
|
|
|
values[k] = rnd.RandomString(RAND_VALUES_LENGTH);
|
|
|
|
ASSERT_OK(Put(KEYS[k], values[k]));
|
|
|
|
}
|
|
|
|
|
|
|
|
// Insertion of K-V pairs, multiple times.
|
|
|
|
// Trigger at least one mempurge and no SST file creation.
|
|
|
|
for (size_t i = 0; i < NUM_REPEAT_OVERWRITES; i++) {
|
|
|
|
// Create value strings of arbitrary length RAND_VALUES_LENGTH bytes.
|
|
|
|
for (uint64_t k = 5; k < values.size(); k++) {
|
|
|
|
values[k] = rnd.RandomString(RAND_VALUES_LENGTH);
|
|
|
|
ASSERT_OK(Put(KEYS[k], values[k]));
|
|
|
|
}
|
|
|
|
// Check database consistency.
|
|
|
|
for (uint64_t k = 0; k < values.size(); k++) {
|
|
|
|
ASSERT_EQ(Get(KEYS[k]), values[k]);
|
|
|
|
}
|
|
|
|
}
|
|
|
|
|
|
|
|
// Check that there was at least one mempurge
|
|
|
|
uint32_t expected_min_mempurge_count = 1;
|
|
|
|
// Check that no SST files were created during flush.
|
|
|
|
uint32_t expected_sst_count = 0;
|
|
|
|
EXPECT_GE(mempurge_count.load(), expected_min_mempurge_count);
|
|
|
|
EXPECT_EQ(sst_count.load(), expected_sst_count);
|
|
|
|
|
|
|
|
// Trigger an SST file creation and no mempurge.
|
|
|
|
for (size_t i = 0; i < NUM_RAND_INSERTS; i++) {
|
|
|
|
key = rnd.RandomString(RAND_VALUES_LENGTH);
|
|
|
|
// Create value strings of arbitrary length RAND_VALUES_LENGTH bytes.
|
|
|
|
value = rnd.RandomString(RAND_VALUES_LENGTH);
|
|
|
|
ASSERT_OK(Put(key, value));
|
|
|
|
// Check database consistency.
|
|
|
|
for (uint64_t k = 0; k < values.size(); k++) {
|
|
|
|
ASSERT_EQ(Get(KEYS[k]), values[k]);
|
|
|
|
}
|
|
|
|
ASSERT_EQ(Get(key), value);
|
|
|
|
}
|
|
|
|
|
|
|
|
// Check that there was at least one SST file created during flush.
|
|
|
|
expected_sst_count = 1;
|
|
|
|
EXPECT_GE(sst_count.load(), expected_sst_count);
|
|
|
|
|
|
|
|
// Oddly enough, num_memtable_at_first_flush is not enforced to be
|
|
|
|
// equal to min_write_buffer_number_to_merge. So we assert below that
|
|
|
|
// the first SST file creation came from at least two memtables: one output
|
|
|
|
// from a previous mempurge and one newly sealed memtable. This
|
|
|
|
// is the scenario where we observed that some SST files created
|
|
|
|
// were not properly added to the DB version before our bug fix.
|
|
|
|
ASSERT_GE(num_memtable_at_first_flush.load(), 2);
|
|
|
|
|
|
|
|
// Check that no data was lost after SST file creation.
|
|
|
|
for (uint64_t k = 0; k < values.size(); k++) {
|
|
|
|
ASSERT_EQ(Get(KEYS[k]), values[k]);
|
|
|
|
}
|
|
|
|
// Extra check of database consistency.
|
|
|
|
ASSERT_EQ(Get(key), value);
|
|
|
|
|
|
|
|
Close();
|
|
|
|
}
|
|
|
|
|
2017-04-13 20:07:33 +00:00
|
|
|
TEST_P(DBFlushDirectIOTest, DirectIO) {
|
|
|
|
Options options;
|
|
|
|
options.create_if_missing = true;
|
|
|
|
options.disable_auto_compactions = true;
|
|
|
|
options.max_background_flushes = 2;
|
|
|
|
options.use_direct_io_for_flush_and_compaction = GetParam();
|
2021-09-21 15:53:03 +00:00
|
|
|
options.env = MockEnv::Create(Env::Default());
|
2017-04-13 20:07:33 +00:00
|
|
|
SyncPoint::GetInstance()->SetCallBack(
|
|
|
|
"BuildTable:create_file", [&](void* arg) {
|
|
|
|
bool* use_direct_writes = static_cast<bool*>(arg);
|
|
|
|
ASSERT_EQ(*use_direct_writes,
|
|
|
|
options.use_direct_io_for_flush_and_compaction);
|
|
|
|
});
|
|
|
|
|
|
|
|
SyncPoint::GetInstance()->EnableProcessing();
|
|
|
|
Reopen(options);
|
|
|
|
ASSERT_OK(Put("foo", "v"));
|
|
|
|
FlushOptions flush_options;
|
|
|
|
flush_options.wait = true;
|
|
|
|
ASSERT_OK(dbfull()->Flush(flush_options));
|
|
|
|
Destroy(options);
|
|
|
|
delete options.env;
|
|
|
|
}
|
|
|
|
|
2018-02-05 21:48:25 +00:00
|
|
|
TEST_F(DBFlushTest, FlushError) {
|
|
|
|
Options options;
|
|
|
|
std::unique_ptr<FaultInjectionTestEnv> fault_injection_env(
|
|
|
|
new FaultInjectionTestEnv(env_));
|
|
|
|
options.write_buffer_size = 100;
|
|
|
|
options.max_write_buffer_number = 4;
|
|
|
|
options.min_write_buffer_number_to_merge = 3;
|
|
|
|
options.disable_auto_compactions = true;
|
|
|
|
options.env = fault_injection_env.get();
|
|
|
|
Reopen(options);
|
|
|
|
|
|
|
|
ASSERT_OK(Put("key1", "value1"));
|
|
|
|
ASSERT_OK(Put("key2", "value2"));
|
|
|
|
fault_injection_env->SetFilesystemActive(false);
|
|
|
|
Status s = dbfull()->TEST_SwitchMemtable();
|
|
|
|
fault_injection_env->SetFilesystemActive(true);
|
|
|
|
Destroy(options);
|
|
|
|
ASSERT_NE(s, Status::OK());
|
|
|
|
}
|
|
|
|
|
2018-11-01 22:23:20 +00:00
|
|
|
TEST_F(DBFlushTest, ManualFlushFailsInReadOnlyMode) {
|
|
|
|
// Regression test for bug where manual flush hangs forever when the DB
|
|
|
|
// is in read-only mode. Verify it now at least returns, despite failing.
|
|
|
|
Options options;
|
|
|
|
std::unique_ptr<FaultInjectionTestEnv> fault_injection_env(
|
|
|
|
new FaultInjectionTestEnv(env_));
|
|
|
|
options.env = fault_injection_env.get();
|
|
|
|
options.max_write_buffer_number = 2;
|
|
|
|
Reopen(options);
|
|
|
|
|
|
|
|
// Trigger a first flush but don't let it run
|
|
|
|
ASSERT_OK(db_->PauseBackgroundWork());
|
|
|
|
ASSERT_OK(Put("key1", "value1"));
|
|
|
|
FlushOptions flush_opts;
|
|
|
|
flush_opts.wait = false;
|
|
|
|
ASSERT_OK(db_->Flush(flush_opts));
|
|
|
|
|
|
|
|
// Write a key to the second memtable so we have something to flush later
|
|
|
|
// after the DB is in read-only mode.
|
|
|
|
ASSERT_OK(Put("key2", "value2"));
|
|
|
|
|
|
|
|
// Let the first flush continue, hit an error, and put the DB in read-only
|
|
|
|
// mode.
|
|
|
|
fault_injection_env->SetFilesystemActive(false);
|
|
|
|
ASSERT_OK(db_->ContinueBackgroundWork());
|
2020-10-12 22:15:58 +00:00
|
|
|
// We injected the error into the env, so the returned status is not OK.
|
|
|
|
ASSERT_NOK(dbfull()->TEST_WaitForFlushMemTable());
|
2018-11-01 22:23:20 +00:00
|
|
|
uint64_t num_bg_errors;
|
2022-11-02 21:34:24 +00:00
|
|
|
ASSERT_TRUE(
|
|
|
|
db_->GetIntProperty(DB::Properties::kBackgroundErrors, &num_bg_errors));
|
2018-11-01 22:23:20 +00:00
|
|
|
ASSERT_GT(num_bg_errors, 0);
|
|
|
|
|
|
|
|
// In the bug scenario, triggering another flush would cause the second flush
|
|
|
|
// to hang forever. After the fix we expect it to return an error.
|
|
|
|
ASSERT_NOK(db_->Flush(FlushOptions()));
|
|
|
|
|
|
|
|
Close();
|
|
|
|
}
|
|
|
|
|
2019-07-01 21:04:10 +00:00
|
|
|
TEST_F(DBFlushTest, CFDropRaceWithWaitForFlushMemTables) {
|
|
|
|
Options options = CurrentOptions();
|
|
|
|
options.create_if_missing = true;
|
|
|
|
CreateAndReopenWithCF({"pikachu"}, options);
|
|
|
|
SyncPoint::GetInstance()->DisableProcessing();
|
|
|
|
SyncPoint::GetInstance()->LoadDependency(
|
|
|
|
{{"DBImpl::FlushMemTable:AfterScheduleFlush",
|
|
|
|
"DBFlushTest::CFDropRaceWithWaitForFlushMemTables:BeforeDrop"},
|
|
|
|
{"DBFlushTest::CFDropRaceWithWaitForFlushMemTables:AfterFree",
|
|
|
|
"DBImpl::BackgroundCallFlush:start"},
|
|
|
|
{"DBImpl::BackgroundCallFlush:start",
|
|
|
|
"DBImpl::FlushMemTable:BeforeWaitForBgFlush"}});
|
|
|
|
SyncPoint::GetInstance()->EnableProcessing();
|
|
|
|
ASSERT_EQ(2, handles_.size());
|
|
|
|
ASSERT_OK(Put(1, "key", "value"));
|
|
|
|
auto* cfd = static_cast<ColumnFamilyHandleImpl*>(handles_[1])->cfd();
|
|
|
|
port::Thread drop_cf_thr([&]() {
|
|
|
|
TEST_SYNC_POINT(
|
|
|
|
"DBFlushTest::CFDropRaceWithWaitForFlushMemTables:BeforeDrop");
|
|
|
|
ASSERT_OK(dbfull()->DropColumnFamily(handles_[1]));
|
|
|
|
ASSERT_OK(dbfull()->DestroyColumnFamilyHandle(handles_[1]));
|
|
|
|
handles_.resize(1);
|
|
|
|
TEST_SYNC_POINT(
|
|
|
|
"DBFlushTest::CFDropRaceWithWaitForFlushMemTables:AfterFree");
|
|
|
|
});
|
|
|
|
FlushOptions flush_opts;
|
|
|
|
flush_opts.allow_write_stall = true;
|
|
|
|
ASSERT_NOK(dbfull()->TEST_FlushMemTable(cfd, flush_opts));
|
|
|
|
drop_cf_thr.join();
|
|
|
|
Close();
|
|
|
|
SyncPoint::GetInstance()->DisableProcessing();
|
|
|
|
}
|
|
|
|
|
2019-10-16 17:39:00 +00:00
|
|
|
TEST_F(DBFlushTest, FireOnFlushCompletedAfterCommittedResult) {
|
|
|
|
class TestListener : public EventListener {
|
|
|
|
public:
|
|
|
|
void OnFlushCompleted(DB* db, const FlushJobInfo& info) override {
|
|
|
|
// There's only one key in each flush.
|
|
|
|
ASSERT_EQ(info.smallest_seqno, info.largest_seqno);
|
|
|
|
ASSERT_NE(0, info.smallest_seqno);
|
|
|
|
if (info.smallest_seqno == seq1) {
|
|
|
|
// First flush completed
|
|
|
|
ASSERT_FALSE(completed1);
|
|
|
|
completed1 = true;
|
|
|
|
CheckFlushResultCommitted(db, seq1);
|
|
|
|
} else {
|
|
|
|
// Second flush completed
|
|
|
|
ASSERT_FALSE(completed2);
|
|
|
|
completed2 = true;
|
|
|
|
ASSERT_EQ(info.smallest_seqno, seq2);
|
|
|
|
CheckFlushResultCommitted(db, seq2);
|
|
|
|
}
|
|
|
|
}
|
|
|
|
|
|
|
|
void CheckFlushResultCommitted(DB* db, SequenceNumber seq) {
|
|
|
|
DBImpl* db_impl = static_cast_with_check<DBImpl>(db);
|
|
|
|
InstrumentedMutex* mutex = db_impl->mutex();
|
|
|
|
mutex->Lock();
|
2020-07-03 02:24:25 +00:00
|
|
|
auto* cfd = static_cast_with_check<ColumnFamilyHandleImpl>(
|
|
|
|
db->DefaultColumnFamily())
|
|
|
|
->cfd();
|
2019-10-16 17:39:00 +00:00
|
|
|
ASSERT_LT(seq, cfd->imm()->current()->GetEarliestSequenceNumber());
|
|
|
|
mutex->Unlock();
|
|
|
|
}
|
|
|
|
|
|
|
|
std::atomic<SequenceNumber> seq1{0};
|
|
|
|
std::atomic<SequenceNumber> seq2{0};
|
|
|
|
std::atomic<bool> completed1{false};
|
|
|
|
std::atomic<bool> completed2{false};
|
|
|
|
};
|
|
|
|
std::shared_ptr<TestListener> listener = std::make_shared<TestListener>();
|
|
|
|
|
|
|
|
SyncPoint::GetInstance()->LoadDependency(
|
2021-11-01 05:06:04 +00:00
|
|
|
{{"DBImpl::FlushMemTableToOutputFile:AfterPickMemtables",
|
2019-10-16 17:39:00 +00:00
|
|
|
"DBFlushTest::FireOnFlushCompletedAfterCommittedResult:WaitFirst"},
|
|
|
|
{"DBImpl::FlushMemTableToOutputFile:Finish",
|
|
|
|
"DBFlushTest::FireOnFlushCompletedAfterCommittedResult:WaitSecond"}});
|
|
|
|
SyncPoint::GetInstance()->SetCallBack(
|
|
|
|
"FlushJob::WriteLevel0Table", [&listener](void* arg) {
|
|
|
|
// Wait for the second flush to finish, outside of the mutex.
|
Prefer static_cast in place of most reinterpret_cast (#12308)
Summary:
The following are risks associated with pointer-to-pointer reinterpret_cast:
* Can produce the "wrong result" (crash or memory corruption). IIRC, in theory this can happen for any up-cast or down-cast for a non-standard-layout type, though in practice would only happen for multiple inheritance cases (where the base class pointer might be "inside" the derived object). We don't use multiple inheritance a lot, but we do.
* Can mask useful compiler errors upon code change, including converting between unrelated pointer types that you are expecting to be related, and converting between pointer and scalar types unintentionally.
I can only think of some obscure cases where static_cast could be troublesome when it compiles as a replacement:
* Going through `void*` could plausibly cause unnecessary or broken pointer arithmetic. Suppose we have
`struct Derived: public Base1, public Base2`. If we have `Derived*` -> `void*` -> `Base2*` -> `Derived*` through reinterpret casts, this could plausibly work (though technical UB) assuming the `Base2*` is not dereferenced. Changing to static cast could introduce breaking pointer arithmetic.
* Unnecessary (but safe) pointer arithmetic could arise in a case like `Derived*` -> `Base2*` -> `Derived*` where before the Base2 pointer might not have been dereferenced. This could potentially affect performance.
With some light scripting, I tried replacing pointer-to-pointer reinterpret_casts with static_cast and kept the cases that still compile. Most occurrences of reinterpret_cast have successfully been changed (except for java/ and third-party/). 294 changed, 257 remain.
A couple of related interventions included here:
* Previously Cache::Handle was not actually derived from in the implementations and just used as a `void*` stand-in with reinterpret_cast. Now there is a relationship to allow static_cast. In theory, this could introduce pointer arithmetic (as described above) but is unlikely without multiple inheritance AND non-empty Cache::Handle.
* Remove some unnecessary casts to void* as this is allowed to be implicit (for better or worse).
Most of the remaining reinterpret_casts are for converting to/from raw bytes of objects. We could consider better idioms for these patterns in follow-up work.
I wish there were a way to implement a template variant of static_cast that would only compile if no pointer arithmetic is generated, but best I can tell, this is not possible. AFAIK the best you could do is a dynamic check that the void* conversion after the static cast is unchanged.
Pull Request resolved: https://github.com/facebook/rocksdb/pull/12308
Test Plan: existing tests, CI
Reviewed By: ltamasi
Differential Revision: D53204947
Pulled By: pdillinger
fbshipit-source-id: 9de23e618263b0d5b9820f4e15966876888a16e2
2024-02-07 18:44:11 +00:00
|
|
|
auto* mems = static_cast<autovector<MemTable*>*>(arg);
|
2019-10-16 17:39:00 +00:00
|
|
|
if (mems->front()->GetEarliestSequenceNumber() == listener->seq1 - 1) {
|
|
|
|
TEST_SYNC_POINT(
|
|
|
|
"DBFlushTest::FireOnFlushCompletedAfterCommittedResult:"
|
|
|
|
"WaitSecond");
|
|
|
|
}
|
|
|
|
});
|
|
|
|
|
|
|
|
Options options = CurrentOptions();
|
|
|
|
options.create_if_missing = true;
|
|
|
|
options.listeners.push_back(listener);
|
|
|
|
// Setting max_flush_jobs = max_background_jobs / 4 = 2.
|
|
|
|
options.max_background_jobs = 8;
|
|
|
|
// Allow 2 immutable memtables.
|
|
|
|
options.max_write_buffer_number = 3;
|
|
|
|
Reopen(options);
|
|
|
|
SyncPoint::GetInstance()->EnableProcessing();
|
|
|
|
ASSERT_OK(Put("foo", "v"));
|
|
|
|
listener->seq1 = db_->GetLatestSequenceNumber();
|
|
|
|
// t1 will wait for the second flush to complete before committing the flush result.
|
|
|
|
auto t1 = port::Thread([&]() {
|
|
|
|
// flush_opts.wait = true
|
|
|
|
ASSERT_OK(db_->Flush(FlushOptions()));
|
|
|
|
});
|
2019-11-08 21:45:31 +00:00
|
|
|
// Wait for the first flush to start.
|
2019-10-16 17:39:00 +00:00
|
|
|
TEST_SYNC_POINT(
|
|
|
|
"DBFlushTest::FireOnFlushCompletedAfterCommittedResult:WaitFirst");
|
|
|
|
// The second flush will exit early without committing its result. The work
|
|
|
|
// is delegated to the first flush.
|
|
|
|
ASSERT_OK(Put("bar", "v"));
|
|
|
|
listener->seq2 = db_->GetLatestSequenceNumber();
|
|
|
|
FlushOptions flush_opts;
|
|
|
|
flush_opts.wait = false;
|
|
|
|
ASSERT_OK(db_->Flush(flush_opts));
|
|
|
|
t1.join();
|
2022-02-22 20:13:39 +00:00
|
|
|
// Ensure background work is fully finished including listener callbacks
|
|
|
|
// before accessing listener state.
|
|
|
|
ASSERT_OK(dbfull()->TEST_WaitForBackgroundWork());
|
2019-10-16 17:39:00 +00:00
|
|
|
ASSERT_TRUE(listener->completed1);
|
|
|
|
ASSERT_TRUE(listener->completed2);
|
|
|
|
SyncPoint::GetInstance()->DisableProcessing();
|
|
|
|
SyncPoint::GetInstance()->ClearAllCallBacks();
|
|
|
|
}
|
|
|
|
|
2020-09-15 04:10:09 +00:00
|
|
|
TEST_F(DBFlushTest, FlushWithBlob) {
|
|
|
|
constexpr uint64_t min_blob_size = 10;
|
|
|
|
|
|
|
|
Options options;
|
|
|
|
options.enable_blob_files = true;
|
|
|
|
options.min_blob_size = min_blob_size;
|
|
|
|
options.disable_auto_compactions = true;
|
2020-10-26 20:50:03 +00:00
|
|
|
options.env = env_;
|
2020-09-15 04:10:09 +00:00
|
|
|
|
|
|
|
Reopen(options);
|
|
|
|
|
|
|
|
constexpr char short_value[] = "short";
|
|
|
|
static_assert(sizeof(short_value) - 1 < min_blob_size,
|
|
|
|
"short_value too long");
|
|
|
|
|
|
|
|
constexpr char long_value[] = "long_value";
|
|
|
|
static_assert(sizeof(long_value) - 1 >= min_blob_size,
|
|
|
|
"long_value too short");
|
|
|
|
|
|
|
|
ASSERT_OK(Put("key1", short_value));
|
|
|
|
ASSERT_OK(Put("key2", long_value));
|
|
|
|
|
|
|
|
ASSERT_OK(Flush());
|
|
|
|
|
|
|
|
ASSERT_EQ(Get("key1"), short_value);
|
2020-10-15 20:02:44 +00:00
|
|
|
ASSERT_EQ(Get("key2"), long_value);
|
2020-09-15 04:10:09 +00:00
|
|
|
|
2021-12-10 19:03:39 +00:00
|
|
|
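// Inspect the LSM state: the flush is expected to have produced exactly one
// L0 table file and one blob file.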
VersionSet* const versions = dbfull()->GetVersionSet();
|
2020-09-15 04:10:09 +00:00
|
|
|
assert(versions);
|
|
|
|
|
|
|
|
ColumnFamilyData* const cfd = versions->GetColumnFamilySet()->GetDefault();
|
|
|
|
assert(cfd);
|
|
|
|
|
|
|
|
Version* const current = cfd->current();
|
|
|
|
assert(current);
|
|
|
|
|
|
|
|
const VersionStorageInfo* const storage_info = current->storage_info();
|
|
|
|
assert(storage_info);
|
|
|
|
|
|
|
|
const auto& l0_files = storage_info->LevelFiles(0);
|
|
|
|
ASSERT_EQ(l0_files.size(), 1);
|
|
|
|
|
|
|
|
const FileMetaData* const table_file = l0_files[0];
|
|
|
|
assert(table_file);
|
|
|
|
|
|
|
|
const auto& blob_files = storage_info->GetBlobFiles();
|
|
|
|
ASSERT_EQ(blob_files.size(), 1);
|
|
|
|
|
Use a sorted vector instead of a map to store blob file metadata (#9526)
Summary:
The patch replaces `std::map` with a sorted `std::vector` for
`VersionStorageInfo::blob_files_` and preallocates the space
for the `vector` before saving the `BlobFileMetaData` into the
new `VersionStorageInfo` in `VersionBuilder::Rep::SaveBlobFilesTo`.
These changes reduce the time the DB mutex is held while
saving new `Version`s, and using a sorted `vector` also makes
lookups faster thanks to better memory locality.
In addition, the patch introduces helper methods
`VersionStorageInfo::GetBlobFileMetaData` and
`VersionStorageInfo::GetBlobFileMetaDataLB` that can be used by
clients to perform lookups in the `vector`, and does some general
cleanup in the parts of code where blob file metadata are used.
Pull Request resolved: https://github.com/facebook/rocksdb/pull/9526
Test Plan:
Ran `make check` and the crash test script for a while.
Performance was tested using a load-optimized benchmark (`fillseq` with vector memtable, no WAL) and small file sizes so that a significant number of files are produced:
```
numactl --interleave=all ./db_bench --benchmarks=fillseq --allow_concurrent_memtable_write=false --level0_file_num_compaction_trigger=4 --level0_slowdown_writes_trigger=20 --level0_stop_writes_trigger=30 --max_background_jobs=8 --max_write_buffer_number=8 --db=/data/ltamasi-dbbench --wal_dir=/data/ltamasi-dbbench --num=800000000 --num_levels=8 --key_size=20 --value_size=400 --block_size=8192 --cache_size=51539607552 --cache_numshardbits=6 --compression_max_dict_bytes=0 --compression_ratio=0.5 --compression_type=lz4 --bytes_per_sync=8388608 --cache_index_and_filter_blocks=1 --cache_high_pri_pool_ratio=0.5 --benchmark_write_rate_limit=0 --write_buffer_size=16777216 --target_file_size_base=16777216 --max_bytes_for_level_base=67108864 --verify_checksum=1 --delete_obsolete_files_period_micros=62914560 --max_bytes_for_level_multiplier=8 --statistics=0 --stats_per_interval=1 --stats_interval_seconds=20 --histogram=1 --memtablerep=skip_list --bloom_bits=10 --open_files=-1 --subcompactions=1 --compaction_style=0 --min_level_to_compress=3 --level_compaction_dynamic_level_bytes=true --pin_l0_filter_and_index_blocks_in_cache=1 --soft_pending_compaction_bytes_limit=167503724544 --hard_pending_compaction_bytes_limit=335007449088 --min_level_to_compress=0 --use_existing_db=0 --sync=0 --threads=1 --memtablerep=vector --allow_concurrent_memtable_write=false --disable_wal=1 --enable_blob_files=1 --blob_file_size=16777216 --min_blob_size=0 --blob_compression_type=lz4 --enable_blob_garbage_collection=1 --seed=<some value>
```
Final statistics before the patch:
```
Cumulative writes: 0 writes, 700M keys, 0 commit groups, 0.0 writes per commit group, ingest: 284.62 GB, 121.27 MB/s
Interval writes: 0 writes, 334K keys, 0 commit groups, 0.0 writes per commit group, ingest: 139.28 MB, 72.46 MB/s
```
With the patch:
```
Cumulative writes: 0 writes, 760M keys, 0 commit groups, 0.0 writes per commit group, ingest: 308.66 GB, 131.52 MB/s
Interval writes: 0 writes, 445K keys, 0 commit groups, 0.0 writes per commit group, ingest: 185.35 MB, 93.15 MB/s
```
Total time to complete the benchmark is 2611 seconds with the patch, down from 2986 secs.
Reviewed By: riversand963
Differential Revision: D34082728
Pulled By: ltamasi
fbshipit-source-id: fc598abf676dce436734d06bb9d2d99a26a004fc
2022-02-09 20:35:39 +00:00
|
|
|
const auto& blob_file = blob_files.front();
|
2020-09-15 04:10:09 +00:00
|
|
|
assert(blob_file);
|
|
|
|
|
|
|
|
ASSERT_EQ(table_file->smallest.user_key(), "key1");
|
|
|
|
ASSERT_EQ(table_file->largest.user_key(), "key2");
|
|
|
|
ASSERT_EQ(table_file->fd.smallest_seqno, 1);
|
|
|
|
ASSERT_EQ(table_file->fd.largest_seqno, 2);
|
|
|
|
ASSERT_EQ(table_file->oldest_blob_file_number,
|
|
|
|
blob_file->GetBlobFileNumber());
|
|
|
|
|
|
|
|
ASSERT_EQ(blob_file->GetTotalBlobCount(), 1);
|
|
|
|
|
|
|
|
const InternalStats* const internal_stats = cfd->internal_stats();
|
|
|
|
assert(internal_stats);
|
|
|
|
|
|
|
|
const auto& compaction_stats = internal_stats->TEST_GetCompactionStats();
|
|
|
|
ASSERT_FALSE(compaction_stats.empty());
|
2021-03-02 17:46:10 +00:00
|
|
|
ASSERT_EQ(compaction_stats[0].bytes_written, table_file->fd.GetFileSize());
|
|
|
|
ASSERT_EQ(compaction_stats[0].bytes_written_blob,
|
|
|
|
blob_file->GetTotalBlobBytes());
|
|
|
|
ASSERT_EQ(compaction_stats[0].num_output_files, 1);
|
|
|
|
ASSERT_EQ(compaction_stats[0].num_output_files_blob, 1);
|
2020-09-15 04:10:09 +00:00
|
|
|
|
|
|
|
const uint64_t* const cf_stats_value = internal_stats->TEST_GetCFStatsValue();
|
2021-03-02 17:46:10 +00:00
|
|
|
ASSERT_EQ(cf_stats_value[InternalStats::BYTES_FLUSHED],
|
|
|
|
compaction_stats[0].bytes_written +
|
|
|
|
compaction_stats[0].bytes_written_blob);
|
2020-09-15 04:10:09 +00:00
|
|
|
}
|
|
|
|
|
2021-02-11 06:18:33 +00:00
|
|
|
TEST_F(DBFlushTest, FlushWithChecksumHandoff1) {
|
|
|
|
if (mem_env_ || encrypted_env_) {
|
|
|
|
ROCKSDB_GTEST_SKIP("Test requires non-mem or non-encrypted environment");
|
|
|
|
return;
|
|
|
|
}
|
|
|
|
std::shared_ptr<FaultInjectionTestFS> fault_fs(
|
|
|
|
new FaultInjectionTestFS(FileSystem::Default()));
|
|
|
|
std::unique_ptr<Env> fault_fs_env(NewCompositeEnv(fault_fs));
|
|
|
|
Options options = CurrentOptions();
|
|
|
|
options.write_buffer_size = 100;
|
|
|
|
options.max_write_buffer_number = 4;
|
|
|
|
options.min_write_buffer_number_to_merge = 3;
|
|
|
|
options.disable_auto_compactions = true;
|
|
|
|
options.env = fault_fs_env.get();
|
|
|
|
options.checksum_handoff_file_types.Add(FileType::kTableFile);
|
|
|
|
Reopen(options);
|
|
|
|
|
|
|
|
fault_fs->SetChecksumHandoffFuncType(ChecksumType::kCRC32c);
|
|
|
|
ASSERT_OK(Put("key1", "value1"));
|
|
|
|
ASSERT_OK(Put("key2", "value2"));
|
|
|
|
ASSERT_OK(dbfull()->TEST_SwitchMemtable());
|
|
|
|
|
|
|
|
// The hash does not match, so the write fails.
|
|
|
|
// fault_fs->SetChecksumHandoffFuncType(ChecksumType::kxxHash);
|
|
|
|
// Since the file system returns IOStatus::Corruption, it is an
|
|
|
|
// unrecoverable error.
|
|
|
|
SyncPoint::GetInstance()->SetCallBack("FlushJob::Start", [&](void*) {
|
|
|
|
fault_fs->SetChecksumHandoffFuncType(ChecksumType::kxxHash);
|
|
|
|
});
|
|
|
|
ASSERT_OK(Put("key3", "value3"));
|
|
|
|
ASSERT_OK(Put("key4", "value4"));
|
|
|
|
SyncPoint::GetInstance()->EnableProcessing();
|
|
|
|
Status s = Flush();
|
|
|
|
ASSERT_EQ(s.severity(),
|
|
|
|
ROCKSDB_NAMESPACE::Status::Severity::kUnrecoverableError);
|
|
|
|
SyncPoint::GetInstance()->DisableProcessing();
|
|
|
|
Destroy(options);
|
|
|
|
Reopen(options);
|
|
|
|
|
|
|
|
// The file system does not support checksum handoff. The check
|
|
|
|
// will be ignored.
|
|
|
|
fault_fs->SetChecksumHandoffFuncType(ChecksumType::kNoChecksum);
|
|
|
|
ASSERT_OK(Put("key5", "value5"));
|
|
|
|
ASSERT_OK(Put("key6", "value6"));
|
|
|
|
ASSERT_OK(dbfull()->TEST_SwitchMemtable());
|
|
|
|
|
|
|
|
// Each write will be simulated as corrupted.
|
|
|
|
// Since the file system returns IOStatus::Corruption, it is an
|
|
|
|
// unrecoverable error.
|
|
|
|
fault_fs->SetChecksumHandoffFuncType(ChecksumType::kCRC32c);
|
|
|
|
SyncPoint::GetInstance()->SetCallBack("FlushJob::Start", [&](void*) {
|
|
|
|
fault_fs->IngestDataCorruptionBeforeWrite();
|
|
|
|
});
|
|
|
|
ASSERT_OK(Put("key7", "value7"));
|
|
|
|
ASSERT_OK(Put("key8", "value8"));
|
|
|
|
SyncPoint::GetInstance()->EnableProcessing();
|
|
|
|
s = Flush();
|
|
|
|
ASSERT_EQ(s.severity(),
|
|
|
|
ROCKSDB_NAMESPACE::Status::Severity::kUnrecoverableError);
|
|
|
|
SyncPoint::GetInstance()->DisableProcessing();
|
|
|
|
|
|
|
|
Destroy(options);
|
|
|
|
}
|
|
|
|
|
|
|
|
TEST_F(DBFlushTest, FlushWithChecksumHandoff2) {
|
|
|
|
if (mem_env_ || encrypted_env_) {
|
|
|
|
ROCKSDB_GTEST_SKIP("Test requires non-mem or non-encrypted environment");
|
|
|
|
return;
|
|
|
|
}
|
|
|
|
std::shared_ptr<FaultInjectionTestFS> fault_fs(
|
|
|
|
new FaultInjectionTestFS(FileSystem::Default()));
|
|
|
|
std::unique_ptr<Env> fault_fs_env(NewCompositeEnv(fault_fs));
|
|
|
|
Options options = CurrentOptions();
|
|
|
|
options.write_buffer_size = 100;
|
|
|
|
options.max_write_buffer_number = 4;
|
|
|
|
options.min_write_buffer_number_to_merge = 3;
|
|
|
|
options.disable_auto_compactions = true;
|
|
|
|
options.env = fault_fs_env.get();
|
|
|
|
Reopen(options);
|
|
|
|
|
|
|
|
fault_fs->SetChecksumHandoffFuncType(ChecksumType::kCRC32c);
|
|
|
|
ASSERT_OK(Put("key1", "value1"));
|
|
|
|
ASSERT_OK(Put("key2", "value2"));
|
|
|
|
ASSERT_OK(Flush());
|
|
|
|
|
|
|
|
// The checksum handoff option is not set, so the handoff will not be triggered.
|
|
|
|
SyncPoint::GetInstance()->SetCallBack("FlushJob::Start", [&](void*) {
|
|
|
|
fault_fs->SetChecksumHandoffFuncType(ChecksumType::kxxHash);
|
|
|
|
});
|
|
|
|
ASSERT_OK(Put("key3", "value3"));
|
|
|
|
ASSERT_OK(Put("key4", "value4"));
|
|
|
|
SyncPoint::GetInstance()->EnableProcessing();
|
|
|
|
ASSERT_OK(Flush());
|
|
|
|
SyncPoint::GetInstance()->DisableProcessing();
|
|
|
|
Destroy(options);
|
|
|
|
Reopen(options);
|
|
|
|
|
|
|
|
// The file system does not support checksum handoff. The check
|
|
|
|
// will be ignored.
|
|
|
|
fault_fs->SetChecksumHandoffFuncType(ChecksumType::kNoChecksum);
|
|
|
|
ASSERT_OK(Put("key5", "value5"));
|
|
|
|
ASSERT_OK(Put("key6", "value6"));
|
|
|
|
ASSERT_OK(Flush());
|
|
|
|
|
|
|
|
// The checksum handoff option is not set, so the handoff will not be triggered.
|
|
|
|
fault_fs->SetChecksumHandoffFuncType(ChecksumType::kCRC32c);
|
|
|
|
SyncPoint::GetInstance()->SetCallBack("FlushJob::Start", [&](void*) {
|
|
|
|
fault_fs->IngestDataCorruptionBeforeWrite();
|
|
|
|
});
|
|
|
|
ASSERT_OK(Put("key7", "value7"));
|
|
|
|
ASSERT_OK(Put("key8", "value8"));
|
|
|
|
SyncPoint::GetInstance()->EnableProcessing();
|
|
|
|
ASSERT_OK(Flush());
|
|
|
|
SyncPoint::GetInstance()->DisableProcessing();
|
|
|
|
|
|
|
|
Destroy(options);
|
|
|
|
}
|
|
|
|
|
|
|
|
TEST_F(DBFlushTest, FlushWithChecksumHandoffManifest1) {
|
|
|
|
if (mem_env_ || encrypted_env_) {
|
|
|
|
ROCKSDB_GTEST_SKIP("Test requires non-mem or non-encrypted environment");
|
|
|
|
return;
|
|
|
|
}
|
|
|
|
std::shared_ptr<FaultInjectionTestFS> fault_fs(
|
|
|
|
new FaultInjectionTestFS(FileSystem::Default()));
|
|
|
|
std::unique_ptr<Env> fault_fs_env(NewCompositeEnv(fault_fs));
|
|
|
|
Options options = CurrentOptions();
|
|
|
|
options.write_buffer_size = 100;
|
|
|
|
options.max_write_buffer_number = 4;
|
|
|
|
options.min_write_buffer_number_to_merge = 3;
|
|
|
|
options.disable_auto_compactions = true;
|
|
|
|
options.env = fault_fs_env.get();
|
|
|
|
options.checksum_handoff_file_types.Add(FileType::kDescriptorFile);
|
|
|
|
fault_fs->SetChecksumHandoffFuncType(ChecksumType::kCRC32c);
|
|
|
|
Reopen(options);
|
|
|
|
|
|
|
|
ASSERT_OK(Put("key1", "value1"));
|
|
|
|
ASSERT_OK(Put("key2", "value2"));
|
|
|
|
ASSERT_OK(Flush());
|
|
|
|
|
|
|
|
// The hash does not match, so the write fails.
|
|
|
|
// fault_fs->SetChecksumHandoffFuncType(ChecksumType::kxxHash);
|
|
|
|
// Since the file system returns IOStatus::Corruption, it is mapped to
|
|
|
|
// a kFatalError.
|
|
|
|
ASSERT_OK(Put("key3", "value3"));
|
|
|
|
SyncPoint::GetInstance()->SetCallBack(
|
|
|
|
"VersionSet::LogAndApply:WriteManifest", [&](void*) {
|
|
|
|
fault_fs->SetChecksumHandoffFuncType(ChecksumType::kxxHash);
|
|
|
|
});
|
|
|
|
ASSERT_OK(Put("key3", "value3"));
|
|
|
|
ASSERT_OK(Put("key4", "value4"));
|
|
|
|
SyncPoint::GetInstance()->EnableProcessing();
|
|
|
|
Status s = Flush();
|
|
|
|
ASSERT_EQ(s.severity(), ROCKSDB_NAMESPACE::Status::Severity::kFatalError);
|
|
|
|
SyncPoint::GetInstance()->DisableProcessing();
|
|
|
|
Destroy(options);
|
|
|
|
}
|
|
|
|
|
|
|
|
TEST_F(DBFlushTest, FlushWithChecksumHandoffManifest2) {
|
|
|
|
if (mem_env_ || encrypted_env_) {
|
|
|
|
ROCKSDB_GTEST_SKIP("Test requires non-mem or non-encrypted environment");
|
|
|
|
return;
|
|
|
|
}
|
|
|
|
std::shared_ptr<FaultInjectionTestFS> fault_fs(
|
|
|
|
new FaultInjectionTestFS(FileSystem::Default()));
|
|
|
|
std::unique_ptr<Env> fault_fs_env(NewCompositeEnv(fault_fs));
|
|
|
|
Options options = CurrentOptions();
|
|
|
|
options.write_buffer_size = 100;
|
|
|
|
options.max_write_buffer_number = 4;
|
|
|
|
options.min_write_buffer_number_to_merge = 3;
|
|
|
|
options.disable_auto_compactions = true;
|
|
|
|
options.env = fault_fs_env.get();
|
|
|
|
options.checksum_handoff_file_types.Add(FileType::kDescriptorFile);
|
|
|
|
fault_fs->SetChecksumHandoffFuncType(ChecksumType::kNoChecksum);
|
|
|
|
Reopen(options);
|
|
|
|
// The file system does not support checksum handoff. The check
|
|
|
|
// will be ignored.
|
|
|
|
ASSERT_OK(Put("key5", "value5"));
|
|
|
|
ASSERT_OK(Put("key6", "value6"));
|
|
|
|
ASSERT_OK(Flush());
|
|
|
|
|
|
|
|
// Each write will be simulated as corrupted.
|
|
|
|
// Since the file system returns IOStatus::Corruption, it is mapped to
|
|
|
|
// a kFatalError.
|
|
|
|
fault_fs->SetChecksumHandoffFuncType(ChecksumType::kCRC32c);
|
|
|
|
SyncPoint::GetInstance()->SetCallBack(
|
|
|
|
"VersionSet::LogAndApply:WriteManifest",
|
|
|
|
[&](void*) { fault_fs->IngestDataCorruptionBeforeWrite(); });
|
|
|
|
ASSERT_OK(Put("key7", "value7"));
|
|
|
|
ASSERT_OK(Put("key8", "value8"));
|
|
|
|
SyncPoint::GetInstance()->EnableProcessing();
|
|
|
|
Status s = Flush();
|
|
|
|
ASSERT_EQ(s.severity(), ROCKSDB_NAMESPACE::Status::Severity::kFatalError);
|
|
|
|
SyncPoint::GetInstance()->DisableProcessing();
|
|
|
|
|
|
|
|
Destroy(options);
|
|
|
|
}
|
|
|
|
|
2021-11-19 17:55:10 +00:00
|
|
|
TEST_F(DBFlushTest, PickRightMemtables) {
|
|
|
|
Options options = CurrentOptions();
|
|
|
|
DestroyAndReopen(options);
|
|
|
|
options.create_if_missing = true;
|
|
|
|
|
|
|
|
const std::string test_cf_name = "test_cf";
|
|
|
|
options.max_write_buffer_number = 128;
|
|
|
|
CreateColumnFamilies({test_cf_name}, options);
|
|
|
|
|
|
|
|
Close();
|
|
|
|
|
|
|
|
ReopenWithColumnFamilies({kDefaultColumnFamilyName, test_cf_name}, options);
|
|
|
|
|
|
|
|
ASSERT_OK(db_->Put(WriteOptions(), "key", "value"));
|
|
|
|
|
|
|
|
ASSERT_OK(db_->Put(WriteOptions(), handles_[1], "key", "value"));
|
|
|
|
|
|
|
|
SyncPoint::GetInstance()->DisableProcessing();
|
|
|
|
SyncPoint::GetInstance()->ClearAllCallBacks();
|
|
|
|
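// While the flush is syncing closed WALs (with the mutex released), write
// another key to the test CF and switch its memtable; the new memtable should
// not be picked up by the ongoing flush.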
SyncPoint::GetInstance()->SetCallBack(
|
2024-05-30 21:53:13 +00:00
|
|
|
"DBImpl::SyncClosedWals:BeforeReLock", [&](void* /*arg*/) {
|
2021-11-19 17:55:10 +00:00
|
|
|
ASSERT_OK(db_->Put(WriteOptions(), handles_[1], "what", "v"));
|
|
|
|
auto* cfhi =
|
|
|
|
static_cast_with_check<ColumnFamilyHandleImpl>(handles_[1]);
|
|
|
|
assert(cfhi);
|
|
|
|
ASSERT_OK(dbfull()->TEST_SwitchMemtable(cfhi->cfd()));
|
|
|
|
});
|
|
|
|
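// Verify that the flush job picked only the first memtable (ID 1) of the
// test CF, not the one created during the WAL sync above.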
SyncPoint::GetInstance()->SetCallBack(
|
|
|
|
"DBImpl::FlushMemTableToOutputFile:AfterPickMemtables", [&](void* arg) {
|
2024-02-07 18:44:11 +00:00
|
|
|
auto* job = static_cast<FlushJob*>(arg);
|
2021-11-19 17:55:10 +00:00
|
|
|
assert(job);
|
|
|
|
const auto& mems = job->GetMemTables();
|
|
|
|
assert(mems.size() == 1);
|
|
|
|
assert(mems[0]);
|
|
|
|
ASSERT_EQ(1, mems[0]->GetID());
|
|
|
|
});
|
|
|
|
SyncPoint::GetInstance()->EnableProcessing();
|
|
|
|
|
|
|
|
ASSERT_OK(db_->Flush(FlushOptions(), handles_[1]));
|
|
|
|
|
|
|
|
SyncPoint::GetInstance()->DisableProcessing();
|
|
|
|
SyncPoint::GetInstance()->ClearAllCallBacks();
|
|
|
|
}
|
|
|
|
|
2020-09-15 04:10:09 +00:00
|
|
|
class DBFlushTestBlobError : public DBFlushTest,
|
|
|
|
public testing::WithParamInterface<std::string> {
|
|
|
|
public:
|
Do not explicitly flush blob files when using the integrated BlobDB (#7892)
Summary:
In the original stacked BlobDB implementation, which writes blobs to blob files
immediately and treats blob files as logs, it makes sense to flush the file after
writing each blob to protect against process crashes; however, in the integrated
implementation, which builds blob files in the background jobs, this unnecessarily
reduces performance. This patch fixes this by simply adding a `do_flush` flag to
`BlobLogWriter`, which is set to `true` by the stacked implementation and to `false`
by the new code. Note: the change itself is trivial but the tests needed some work;
since in the new implementation, blobs are now buffered, adding a blob to
`BlobFileBuilder` is no longer guaranteed to result in an actual I/O. Therefore, we can
no longer rely on `FaultInjectionTestEnv` when testing failure cases; instead, we
manipulate the return values of I/O methods directly using `SyncPoint`s.
Pull Request resolved: https://github.com/facebook/rocksdb/pull/7892
Test Plan: `make check`
Reviewed By: jay-zhuang
Differential Revision: D26022814
Pulled By: ltamasi
fbshipit-source-id: b3dce419f312137fa70d84cdd9b908fd5d60d8cd
2021-01-25 21:30:17 +00:00
|
|
|
DBFlushTestBlobError() : sync_point_(GetParam()) {}
|
2020-09-15 04:10:09 +00:00
|
|
|
|
2020-10-26 20:50:03 +00:00
|
|
|
std::string sync_point_;
|
2020-09-15 04:10:09 +00:00
|
|
|
};
|
|
|
|
|
|
|
|
INSTANTIATE_TEST_CASE_P(DBFlushTestBlobError, DBFlushTestBlobError,
|
|
|
|
::testing::ValuesIn(std::vector<std::string>{
|
|
|
|
"BlobFileBuilder::WriteBlobToFile:AddRecord",
|
|
|
|
"BlobFileBuilder::WriteBlobToFile:AppendFooter"}));
|
|
|
|
|
|
|
|
TEST_P(DBFlushTestBlobError, FlushError) {
|
|
|
|
Options options;
|
|
|
|
options.enable_blob_files = true;
|
|
|
|
options.disable_auto_compactions = true;
|
2021-01-25 21:30:17 +00:00
|
|
|
options.env = env_;
|
2020-09-15 04:10:09 +00:00
|
|
|
|
|
|
|
Reopen(options);
|
|
|
|
|
|
|
|
ASSERT_OK(Put("key", "blob"));
|
|
|
|
|
2021-01-25 21:30:17 +00:00
|
|
|
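// Inject an IOError at the parameterized sync point so blob file writing
// fails either while adding a record or while appending the footer.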
SyncPoint::GetInstance()->SetCallBack(sync_point_, [this](void* arg) {
|
|
|
|
Status* const s = static_cast<Status*>(arg);
|
|
|
|
assert(s);
|
|
|
|
|
|
|
|
(*s) = Status::IOError(sync_point_);
|
2020-09-15 04:10:09 +00:00
|
|
|
});
|
|
|
|
SyncPoint::GetInstance()->EnableProcessing();
|
|
|
|
|
|
|
|
ASSERT_NOK(Flush());
|
|
|
|
|
|
|
|
SyncPoint::GetInstance()->DisableProcessing();
|
|
|
|
SyncPoint::GetInstance()->ClearAllCallBacks();
|
|
|
|
|
2021-12-10 19:03:39 +00:00
|
|
|
VersionSet* const versions = dbfull()->GetVersionSet();
|
2020-09-15 04:10:09 +00:00
|
|
|
assert(versions);
|
|
|
|
|
|
|
|
ColumnFamilyData* const cfd = versions->GetColumnFamilySet()->GetDefault();
|
|
|
|
assert(cfd);
|
|
|
|
|
|
|
|
Version* const current = cfd->current();
|
|
|
|
assert(current);
|
|
|
|
|
|
|
|
const VersionStorageInfo* const storage_info = current->storage_info();
|
|
|
|
assert(storage_info);
|
|
|
|
|
|
|
|
const auto& l0_files = storage_info->LevelFiles(0);
|
|
|
|
ASSERT_TRUE(l0_files.empty());
|
|
|
|
|
|
|
|
const auto& blob_files = storage_info->GetBlobFiles();
|
|
|
|
ASSERT_TRUE(blob_files.empty());
|
|
|
|
|
|
|
|
// Make sure the files generated by the failed job have been deleted
|
|
|
|
std::vector<std::string> files;
|
|
|
|
ASSERT_OK(env_->GetChildren(dbname_, &files));
|
|
|
|
for (const auto& file : files) {
|
|
|
|
uint64_t number = 0;
|
|
|
|
FileType type = kTableFile;
|
|
|
|
|
|
|
|
if (!ParseFileName(file, &number, &type)) {
|
|
|
|
continue;
|
|
|
|
}
|
|
|
|
|
|
|
|
ASSERT_NE(type, kTableFile);
|
|
|
|
ASSERT_NE(type, kBlobFile);
|
|
|
|
}
|
|
|
|
|
|
|
|
const InternalStats* const internal_stats = cfd->internal_stats();
|
|
|
|
assert(internal_stats);
|
|
|
|
|
|
|
|
const auto& compaction_stats = internal_stats->TEST_GetCompactionStats();
|
|
|
|
ASSERT_FALSE(compaction_stats.empty());
|
2020-10-26 20:50:03 +00:00
|
|
|
|
|
|
|
if (sync_point_ == "BlobFileBuilder::WriteBlobToFile:AddRecord") {
|
|
|
|
ASSERT_EQ(compaction_stats[0].bytes_written, 0);
|
2021-03-02 17:46:10 +00:00
|
|
|
ASSERT_EQ(compaction_stats[0].bytes_written_blob, 0);
|
2020-10-26 20:50:03 +00:00
|
|
|
ASSERT_EQ(compaction_stats[0].num_output_files, 0);
|
2021-03-02 17:46:10 +00:00
|
|
|
ASSERT_EQ(compaction_stats[0].num_output_files_blob, 0);
|
2020-10-26 20:50:03 +00:00
|
|
|
} else {
|
|
|
|
// SST file writing succeeded; blob file writing failed (during Finish)
|
|
|
|
ASSERT_GT(compaction_stats[0].bytes_written, 0);
|
2021-03-02 17:46:10 +00:00
|
|
|
ASSERT_EQ(compaction_stats[0].bytes_written_blob, 0);
|
2020-10-26 20:50:03 +00:00
|
|
|
ASSERT_EQ(compaction_stats[0].num_output_files, 1);
|
2021-03-02 17:46:10 +00:00
|
|
|
ASSERT_EQ(compaction_stats[0].num_output_files_blob, 0);
|
2020-10-26 20:50:03 +00:00
|
|
|
}
|
2020-09-15 04:10:09 +00:00
|
|
|
|
|
|
|
const uint64_t* const cf_stats_value = internal_stats->TEST_GetCFStatsValue();
|
2020-10-26 20:50:03 +00:00
|
|
|
ASSERT_EQ(cf_stats_value[InternalStats::BYTES_FLUSHED],
|
2021-03-02 17:46:10 +00:00
|
|
|
compaction_stats[0].bytes_written +
|
|
|
|
compaction_stats[0].bytes_written_blob);
|
2020-09-15 04:10:09 +00:00
|
|
|
}
|
|
|
|
|
2022-03-03 05:03:14 +00:00
|
|
|
TEST_F(DBFlushTest, TombstoneVisibleInSnapshot) {
|
|
|
|
class SimpleTestFlushListener : public EventListener {
|
|
|
|
public:
|
|
|
|
explicit SimpleTestFlushListener(DBFlushTest* _test) : test_(_test) {}
|
2024-03-04 18:08:32 +00:00
|
|
|
~SimpleTestFlushListener() override = default;
|
2022-03-03 05:03:14 +00:00
|
|
|
|
|
|
|
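// During OnFlushBegin, delete "foo", take a snapshot, re-put "foo", and
// switch the memtable so the new writes land in a fresh memtable. The
// tombstone should remain visible through the snapshot.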
void OnFlushBegin(DB* db, const FlushJobInfo& info) override {
|
|
|
|
ASSERT_EQ(static_cast<uint32_t>(0), info.cf_id);
|
|
|
|
|
|
|
|
ASSERT_OK(db->Delete(WriteOptions(), "foo"));
|
|
|
|
snapshot_ = db->GetSnapshot();
|
|
|
|
ASSERT_OK(db->Put(WriteOptions(), "foo", "value"));
|
|
|
|
|
|
|
|
auto* dbimpl = static_cast_with_check<DBImpl>(db);
|
|
|
|
assert(dbimpl);
|
|
|
|
|
|
|
|
ColumnFamilyHandle* cfh = db->DefaultColumnFamily();
|
|
|
|
auto* cfhi = static_cast_with_check<ColumnFamilyHandleImpl>(cfh);
|
|
|
|
assert(cfhi);
|
|
|
|
ASSERT_OK(dbimpl->TEST_SwitchMemtable(cfhi->cfd()));
|
|
|
|
}
|
|
|
|
|
|
|
|
DBFlushTest* test_ = nullptr;
|
|
|
|
const Snapshot* snapshot_ = nullptr;
|
|
|
|
};
|
|
|
|
|
|
|
|
Options options = CurrentOptions();
|
|
|
|
options.create_if_missing = true;
|
|
|
|
auto* listener = new SimpleTestFlushListener(this);
|
|
|
|
options.listeners.emplace_back(listener);
|
|
|
|
DestroyAndReopen(options);
|
|
|
|
|
|
|
|
ASSERT_OK(db_->Put(WriteOptions(), "foo", "value0"));
|
|
|
|
|
|
|
|
ManagedSnapshot snapshot_guard(db_);
|
|
|
|
|
|
|
|
ColumnFamilyHandle* default_cf = db_->DefaultColumnFamily();
|
|
|
|
ASSERT_OK(db_->Flush(FlushOptions(), default_cf));
|
|
|
|
|
|
|
|
const Snapshot* snapshot = listener->snapshot_;
|
|
|
|
assert(snapshot);
|
|
|
|
|
|
|
|
ReadOptions read_opts;
|
|
|
|
read_opts.snapshot = snapshot;
|
|
|
|
|
|
|
|
// A read using the snapshot should not see "foo".
|
|
|
|
{
|
|
|
|
std::string value;
|
|
|
|
Status s = db_->Get(read_opts, "foo", &value);
|
|
|
|
ASSERT_TRUE(s.IsNotFound());
|
|
|
|
}
|
|
|
|
|
|
|
|
db_->ReleaseSnapshot(snapshot);
|
|
|
|
}
|
|
|
|
|
2020-12-04 03:21:08 +00:00
|
|
|
TEST_P(DBAtomicFlushTest, ManualFlushUnder2PC) {
|
|
|
|
Options options = CurrentOptions();
|
|
|
|
options.create_if_missing = true;
|
|
|
|
options.allow_2pc = true;
|
|
|
|
options.atomic_flush = GetParam();
|
|
|
|
// 64MB so that memtable flush won't be triggered by the small writes.
|
|
|
|
options.write_buffer_size = (static_cast<size_t>(64) << 20);
|
2023-01-24 17:54:04 +00:00
|
|
|
auto flush_listener = std::make_shared<FlushCounterListener>();
|
|
|
|
flush_listener->expected_flush_reason = FlushReason::kManualFlush;
|
|
|
|
options.listeners.push_back(flush_listener);
|
2020-12-04 03:21:08 +00:00
|
|
|
// Destroy the DB to recreate as a TransactionDB.
|
|
|
|
Close();
|
|
|
|
Destroy(options, true);
|
|
|
|
|
|
|
|
// Create a TransactionDB.
|
|
|
|
TransactionDB* txn_db = nullptr;
|
|
|
|
TransactionDBOptions txn_db_opts;
|
|
|
|
txn_db_opts.write_policy = TxnDBWritePolicy::WRITE_COMMITTED;
|
|
|
|
ASSERT_OK(TransactionDB::Open(options, txn_db_opts, dbname_, &txn_db));
|
|
|
|
ASSERT_NE(txn_db, nullptr);
|
|
|
|
db_ = txn_db;
|
|
|
|
|
|
|
|
// Create two more column families in addition to the default CF.
|
|
|
|
std::vector<std::string> cfs = {"puppy", "kitty"};
|
|
|
|
CreateColumnFamilies(cfs, options);
|
|
|
|
ASSERT_EQ(handles_.size(), 2);
|
|
|
|
ASSERT_EQ(handles_[0]->GetName(), cfs[0]);
|
|
|
|
ASSERT_EQ(handles_[1]->GetName(), cfs[1]);
|
|
|
|
const size_t kNumCfToFlush = options.atomic_flush ? 2 : 1;
|
|
|
|
|
|
|
|
WriteOptions wopts;
|
|
|
|
TransactionOptions txn_opts;
|
|
|
|
// txn1 only prepares, but does not commit.
|
|
|
|
// The WAL containing the prepared but uncommitted data must be kept.
|
|
|
|
Transaction* txn1 = txn_db->BeginTransaction(wopts, txn_opts, nullptr);
|
|
|
|
// txn2 both prepares and commits.
|
|
|
|
Transaction* txn2 = txn_db->BeginTransaction(wopts, txn_opts, nullptr);
|
|
|
|
ASSERT_NE(txn1, nullptr);
|
|
|
|
ASSERT_NE(txn2, nullptr);
|
|
|
|
for (size_t i = 0; i < kNumCfToFlush; i++) {
|
|
|
|
ASSERT_OK(txn1->Put(handles_[i], "k1", "v1"));
|
|
|
|
ASSERT_OK(txn2->Put(handles_[i], "k2", "v2"));
|
|
|
|
}
|
|
|
|
// A txn must be named before prepare.
|
|
|
|
ASSERT_OK(txn1->SetName("txn1"));
|
|
|
|
ASSERT_OK(txn2->SetName("txn2"));
|
|
|
|
// Prepare writes to WAL, but not to memtable. (WriteCommitted)
|
|
|
|
ASSERT_OK(txn1->Prepare());
|
|
|
|
ASSERT_OK(txn2->Prepare());
|
|
|
|
// Commit writes to memtable.
|
|
|
|
ASSERT_OK(txn2->Commit());
|
|
|
|
delete txn1;
|
|
|
|
delete txn2;
|
|
|
|
|
|
|
|
// There is still unflushed data in the memtables.
|
|
|
|
// But since data is small enough to reside in the active memtable,
|
|
|
|
// there are no immutable memtables.
|
|
|
|
for (size_t i = 0; i < kNumCfToFlush; i++) {
|
|
|
|
auto cfh = static_cast<ColumnFamilyHandleImpl*>(handles_[i]);
|
|
|
|
ASSERT_EQ(0, cfh->cfd()->imm()->NumNotFlushed());
|
|
|
|
ASSERT_FALSE(cfh->cfd()->mem()->IsEmpty());
|
|
|
|
}
|
|
|
|
|
|
|
|
// Atomically flush the memtables;
|
|
|
|
// the min log number with prepared data should be written to the MANIFEST.
|
|
|
|
std::vector<ColumnFamilyHandle*> cfs_to_flush(kNumCfToFlush);
|
|
|
|
for (size_t i = 0; i < kNumCfToFlush; i++) {
|
|
|
|
cfs_to_flush[i] = handles_[i];
|
|
|
|
}
|
|
|
|
ASSERT_OK(txn_db->Flush(FlushOptions(), cfs_to_flush));
|
|
|
|
|
|
|
|
// There is no remaining data in the memtables after the flush.
|
|
|
|
for (size_t i = 0; i < kNumCfToFlush; i++) {
|
|
|
|
auto cfh = static_cast<ColumnFamilyHandleImpl*>(handles_[i]);
|
|
|
|
ASSERT_EQ(0, cfh->cfd()->imm()->NumNotFlushed());
|
|
|
|
ASSERT_TRUE(cfh->cfd()->mem()->IsEmpty());
|
|
|
|
}
|
|
|
|
|
|
|
|
// The recovered min log number with prepared data should be non-zero.
|
|
|
|
// In 2pc mode, MinLogNumberToKeep returns the
|
2022-04-01 03:00:52 +00:00
|
|
|
// VersionSet::min_log_number_to_keep recovered from the MANIFEST; if it's 0,
|
2020-12-04 03:21:08 +00:00
|
|
|
// it means atomic flush didn't write the min_log_number_to_keep to MANIFEST.
|
|
|
|
cfs.push_back(kDefaultColumnFamilyName);
|
|
|
|
ASSERT_OK(TryReopenWithColumnFamilies(cfs, options));
|
2024-02-07 18:44:11 +00:00
|
|
|
DBImpl* db_impl = static_cast<DBImpl*>(db_);
|
2020-12-04 03:21:08 +00:00
|
|
|
ASSERT_TRUE(db_impl->allow_2pc());
|
|
|
|
ASSERT_NE(db_impl->MinLogNumberToKeep(), 0);
|
|
|
|
}
|
|
|
|
|
2018-10-26 22:06:44 +00:00
|
|
|
TEST_P(DBAtomicFlushTest, ManualAtomicFlush) {
|
|
|
|
Options options = CurrentOptions();
|
|
|
|
options.create_if_missing = true;
|
|
|
|
options.atomic_flush = GetParam();
|
|
|
|
options.write_buffer_size = (static_cast<size_t>(64) << 20);
|
2023-01-24 17:54:04 +00:00
|
|
|
auto flush_listener = std::make_shared<FlushCounterListener>();
|
|
|
|
flush_listener->expected_flush_reason = FlushReason::kManualFlush;
|
|
|
|
options.listeners.push_back(flush_listener);
|
2018-10-26 22:06:44 +00:00
|
|
|
|
|
|
|
CreateAndReopenWithCF({"pikachu", "eevee"}, options);
|
|
|
|
size_t num_cfs = handles_.size();
|
|
|
|
ASSERT_EQ(3, num_cfs);
|
|
|
|
WriteOptions wopts;
|
|
|
|
wopts.disableWAL = true;
|
|
|
|
for (size_t i = 0; i != num_cfs; ++i) {
|
|
|
|
ASSERT_OK(Put(static_cast<int>(i) /*cf*/, "key", "value", wopts));
|
|
|
|
}
|
2020-12-04 03:21:08 +00:00
|
|
|
|
|
|
|
for (size_t i = 0; i != num_cfs; ++i) {
|
|
|
|
auto cfh = static_cast<ColumnFamilyHandleImpl*>(handles_[i]);
|
|
|
|
ASSERT_EQ(0, cfh->cfd()->imm()->NumNotFlushed());
|
|
|
|
ASSERT_FALSE(cfh->cfd()->mem()->IsEmpty());
|
|
|
|
}
|
|
|
|
|
2018-10-26 22:06:44 +00:00
|
|
|
std::vector<int> cf_ids;
|
|
|
|
for (size_t i = 0; i != num_cfs; ++i) {
|
|
|
|
cf_ids.emplace_back(static_cast<int>(i));
|
|
|
|
}
|
|
|
|
ASSERT_OK(Flush(cf_ids));
|
2020-12-04 03:21:08 +00:00
|
|
|
|
2018-10-26 22:06:44 +00:00
|
|
|
for (size_t i = 0; i != num_cfs; ++i) {
|
|
|
|
auto cfh = static_cast<ColumnFamilyHandleImpl*>(handles_[i]);
|
|
|
|
ASSERT_EQ(0, cfh->cfd()->imm()->NumNotFlushed());
|
|
|
|
ASSERT_TRUE(cfh->cfd()->mem()->IsEmpty());
|
|
|
|
}
|
|
|
|
}
|
|
|
|
|
2020-11-17 23:54:49 +00:00
|
|
|
TEST_P(DBAtomicFlushTest, PrecomputeMinLogNumberToKeepNon2PC) {
|
|
|
|
Options options = CurrentOptions();
|
|
|
|
options.create_if_missing = true;
|
|
|
|
options.atomic_flush = GetParam();
|
|
|
|
options.write_buffer_size = (static_cast<size_t>(64) << 20);
|
|
|
|
CreateAndReopenWithCF({"pikachu"}, options);
|
|
|
|
|
|
|
|
const size_t num_cfs = handles_.size();
|
|
|
|
ASSERT_EQ(num_cfs, 2);
|
|
|
|
WriteOptions wopts;
|
|
|
|
for (size_t i = 0; i != num_cfs; ++i) {
|
|
|
|
ASSERT_OK(Put(static_cast<int>(i) /*cf*/, "key", "value", wopts));
|
|
|
|
}
|
|
|
|
|
|
|
|
{
|
|
|
|
// Flush the default CF only.
|
|
|
|
std::vector<int> cf_ids{0};
|
|
|
|
ASSERT_OK(Flush(cf_ids));
|
|
|
|
|
|
|
|
autovector<ColumnFamilyData*> flushed_cfds;
|
|
|
|
autovector<autovector<VersionEdit*>> flush_edits;
|
|
|
|
auto flushed_cfh = static_cast<ColumnFamilyHandleImpl*>(handles_[0]);
|
|
|
|
flushed_cfds.push_back(flushed_cfh->cfd());
|
|
|
|
flush_edits.push_back({});
|
|
|
|
auto unflushed_cfh = static_cast<ColumnFamilyHandleImpl*>(handles_[1]);
|
|
|
|
|
2021-12-10 19:03:39 +00:00
|
|
|
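// With only the default CF flushed, the minimum WAL number to keep should be
// bounded by the log number of the still-unflushed CF.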
ASSERT_EQ(PrecomputeMinLogNumberToKeepNon2PC(dbfull()->GetVersionSet(),
|
2020-11-17 23:54:49 +00:00
|
|
|
flushed_cfds, flush_edits),
|
|
|
|
unflushed_cfh->cfd()->GetLogNumber());
|
|
|
|
}
|
|
|
|
|
|
|
|
{
|
|
|
|
// Flush all CFs.
|
|
|
|
std::vector<int> cf_ids;
|
|
|
|
for (size_t i = 0; i != num_cfs; ++i) {
|
|
|
|
cf_ids.emplace_back(static_cast<int>(i));
|
|
|
|
}
|
|
|
|
ASSERT_OK(Flush(cf_ids));
|
|
|
|
uint64_t log_num_after_flush = dbfull()->TEST_GetCurrentLogNumber();
|
|
|
|
|
2022-05-05 20:08:21 +00:00
|
|
|
uint64_t min_log_number_to_keep = std::numeric_limits<uint64_t>::max();
|
2020-11-17 23:54:49 +00:00
|
|
|
autovector<ColumnFamilyData*> flushed_cfds;
|
|
|
|
autovector<autovector<VersionEdit*>> flush_edits;
|
|
|
|
for (size_t i = 0; i != num_cfs; ++i) {
|
|
|
|
auto cfh = static_cast<ColumnFamilyHandleImpl*>(handles_[i]);
|
|
|
|
flushed_cfds.push_back(cfh->cfd());
|
|
|
|
flush_edits.push_back({});
|
|
|
|
min_log_number_to_keep =
|
|
|
|
std::min(min_log_number_to_keep, cfh->cfd()->GetLogNumber());
|
|
|
|
}
|
|
|
|
ASSERT_EQ(min_log_number_to_keep, log_num_after_flush);
|
2021-12-10 19:03:39 +00:00
|
|
|
ASSERT_EQ(PrecomputeMinLogNumberToKeepNon2PC(dbfull()->GetVersionSet(),
|
2020-11-17 23:54:49 +00:00
|
|
|
flushed_cfds, flush_edits),
|
|
|
|
min_log_number_to_keep);
|
|
|
|
}
|
|
|
|
}
|
|
|
|
|
2018-10-26 22:06:44 +00:00
|
|
|
TEST_P(DBAtomicFlushTest, AtomicFlushTriggeredByMemTableFull) {
|
|
|
|
Options options = CurrentOptions();
|
|
|
|
options.create_if_missing = true;
|
|
|
|
options.atomic_flush = GetParam();
|
|
|
|
// 4KB so that we can easily trigger auto flush.
|
|
|
|
options.write_buffer_size = 4096;
|
|
|
|
|
|
|
|
SyncPoint::GetInstance()->LoadDependency(
|
|
|
|
{{"DBImpl::BackgroundCallFlush:FlushFinish:0",
|
|
|
|
"DBAtomicFlushTest::AtomicFlushTriggeredByMemTableFull:BeforeCheck"}});
|
|
|
|
SyncPoint::GetInstance()->EnableProcessing();
|
|
|
|
|
|
|
|
CreateAndReopenWithCF({"pikachu", "eevee"}, options);
|
|
|
|
size_t num_cfs = handles_.size();
|
|
|
|
ASSERT_EQ(3, num_cfs);
|
|
|
|
WriteOptions wopts;
|
|
|
|
wopts.disableWAL = true;
|
|
|
|
for (size_t i = 0; i != num_cfs; ++i) {
|
|
|
|
ASSERT_OK(Put(static_cast<int>(i) /*cf*/, "key", "value", wopts));
|
|
|
|
}
|
|
|
|
// Keep writing to one of the column families to trigger auto flush.
|
|
|
|
for (int i = 0; i != 4000; ++i) {
|
|
|
|
ASSERT_OK(Put(static_cast<int>(num_cfs) - 1 /*cf*/,
|
|
|
|
"key" + std::to_string(i), "value" + std::to_string(i),
|
|
|
|
wopts));
|
|
|
|
}
|
|
|
|
|
|
|
|
TEST_SYNC_POINT(
|
|
|
|
"DBAtomicFlushTest::AtomicFlushTriggeredByMemTableFull:BeforeCheck");
|
|
|
|
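// With atomic flush, the auto flush triggered by the last CF should also
// flush the other CFs' memtables; without it, the other CFs should still hold
// their data in active memtables.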
if (options.atomic_flush) {
|
2020-06-02 22:02:44 +00:00
|
|
|
for (size_t i = 0; i + 1 != num_cfs; ++i) {
|
2018-10-26 22:06:44 +00:00
|
|
|
auto cfh = static_cast<ColumnFamilyHandleImpl*>(handles_[i]);
|
|
|
|
ASSERT_EQ(0, cfh->cfd()->imm()->NumNotFlushed());
|
|
|
|
ASSERT_TRUE(cfh->cfd()->mem()->IsEmpty());
|
|
|
|
}
|
|
|
|
} else {
|
2020-06-02 22:02:44 +00:00
|
|
|
for (size_t i = 0; i + 1 != num_cfs; ++i) {
|
2018-10-26 22:06:44 +00:00
|
|
|
auto cfh = static_cast<ColumnFamilyHandleImpl*>(handles_[i]);
|
|
|
|
ASSERT_EQ(0, cfh->cfd()->imm()->NumNotFlushed());
|
|
|
|
ASSERT_FALSE(cfh->cfd()->mem()->IsEmpty());
|
|
|
|
}
|
|
|
|
}
|
|
|
|
SyncPoint::GetInstance()->DisableProcessing();
|
|
|
|
}
|
|
|
|
|
2018-11-15 04:52:21 +00:00
|
|
|
TEST_P(DBAtomicFlushTest, AtomicFlushRollbackSomeJobs) {
|
|
|
|
bool atomic_flush = GetParam();
|
|
|
|
if (!atomic_flush) {
|
|
|
|
return;
|
|
|
|
}
|
|
|
|
std::unique_ptr<FaultInjectionTestEnv> fault_injection_env(
|
|
|
|
new FaultInjectionTestEnv(env_));
|
|
|
|
Options options = CurrentOptions();
|
|
|
|
options.create_if_missing = true;
|
|
|
|
options.atomic_flush = atomic_flush;
|
|
|
|
options.env = fault_injection_env.get();
|
|
|
|
SyncPoint::GetInstance()->DisableProcessing();
|
|
|
|
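// Let some of the atomic flush jobs complete, then deactivate the filesystem
// before the remaining jobs proceed, so the whole atomic flush has to be
// rolled back.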
SyncPoint::GetInstance()->LoadDependency(
|
|
|
|
{{"DBImpl::AtomicFlushMemTablesToOutputFiles:SomeFlushJobsComplete:1",
|
|
|
|
"DBAtomicFlushTest::AtomicFlushRollbackSomeJobs:1"},
|
|
|
|
{"DBAtomicFlushTest::AtomicFlushRollbackSomeJobs:2",
|
|
|
|
"DBImpl::AtomicFlushMemTablesToOutputFiles:SomeFlushJobsComplete:2"}});
|
|
|
|
SyncPoint::GetInstance()->EnableProcessing();
|
|
|
|
|
|
|
|
CreateAndReopenWithCF({"pikachu", "eevee"}, options);
|
|
|
|
size_t num_cfs = handles_.size();
|
|
|
|
ASSERT_EQ(3, num_cfs);
|
|
|
|
WriteOptions wopts;
|
|
|
|
wopts.disableWAL = true;
|
|
|
|
for (size_t i = 0; i != num_cfs; ++i) {
|
|
|
|
int cf_id = static_cast<int>(i);
|
|
|
|
ASSERT_OK(Put(cf_id, "key", "value", wopts));
|
|
|
|
}
|
|
|
|
FlushOptions flush_opts;
|
|
|
|
flush_opts.wait = false;
|
|
|
|
ASSERT_OK(dbfull()->Flush(flush_opts, handles_));
|
|
|
|
TEST_SYNC_POINT("DBAtomicFlushTest::AtomicFlushRollbackSomeJobs:1");
|
|
|
|
fault_injection_env->SetFilesystemActive(false);
|
|
|
|
TEST_SYNC_POINT("DBAtomicFlushTest::AtomicFlushRollbackSomeJobs:2");
|
|
|
|
for (auto* cfh : handles_) {
|
2020-10-12 22:15:58 +00:00
|
|
|
// Returns the IO error that happened during the flush.
|
|
|
|
ASSERT_NOK(dbfull()->TEST_WaitForFlushMemTable(cfh));
|
2018-11-15 04:52:21 +00:00
|
|
|
}
|
|
|
|
for (size_t i = 0; i != num_cfs; ++i) {
|
|
|
|
auto cfh = static_cast<ColumnFamilyHandleImpl*>(handles_[i]);
|
|
|
|
ASSERT_EQ(1, cfh->cfd()->imm()->NumNotFlushed());
|
|
|
|
ASSERT_TRUE(cfh->cfd()->mem()->IsEmpty());
|
|
|
|
}
|
|
|
|
fault_injection_env->SetFilesystemActive(true);
|
|
|
|
Destroy(options);
|
|
|
|
}
|
|
|
|
|
2018-12-13 23:10:16 +00:00
|
|
|
TEST_P(DBAtomicFlushTest, FlushMultipleCFs_DropSomeBeforeRequestFlush) {
|
|
|
|
bool atomic_flush = GetParam();
|
|
|
|
if (!atomic_flush) {
|
|
|
|
return;
|
|
|
|
}
|
|
|
|
Options options = CurrentOptions();
|
|
|
|
options.create_if_missing = true;
|
|
|
|
options.atomic_flush = atomic_flush;
|
|
|
|
SyncPoint::GetInstance()->DisableProcessing();
|
|
|
|
SyncPoint::GetInstance()->ClearAllCallBacks();
|
|
|
|
SyncPoint::GetInstance()->EnableProcessing();
|
|
|
|
|
|
|
|
CreateAndReopenWithCF({"pikachu", "eevee"}, options);
|
|
|
|
size_t num_cfs = handles_.size();
|
|
|
|
ASSERT_EQ(3, num_cfs);
|
|
|
|
WriteOptions wopts;
|
|
|
|
wopts.disableWAL = true;
|
|
|
|
std::vector<int> cf_ids;
|
|
|
|
for (size_t i = 0; i != num_cfs; ++i) {
|
|
|
|
int cf_id = static_cast<int>(i);
|
|
|
|
ASSERT_OK(Put(cf_id, "key", "value", wopts));
|
|
|
|
cf_ids.push_back(cf_id);
|
|
|
|
}
|
|
|
|
ASSERT_OK(dbfull()->DropColumnFamily(handles_[1]));
|
2019-05-20 17:37:37 +00:00
|
|
|
ASSERT_TRUE(Flush(cf_ids).IsColumnFamilyDropped());
|
2018-12-13 23:10:16 +00:00
|
|
|
Destroy(options);
|
|
|
|
}
|
|
|
|
|
|
|
|
TEST_P(DBAtomicFlushTest,
|
|
|
|
FlushMultipleCFs_DropSomeAfterScheduleFlushBeforeFlushJobRun) {
|
|
|
|
bool atomic_flush = GetParam();
|
|
|
|
if (!atomic_flush) {
|
|
|
|
return;
|
|
|
|
}
|
|
|
|
Options options = CurrentOptions();
|
|
|
|
options.create_if_missing = true;
|
|
|
|
options.atomic_flush = atomic_flush;
|
|
|
|
|
|
|
|
CreateAndReopenWithCF({"pikachu", "eevee"}, options);
|
|
|
|
|
|
|
|
SyncPoint::GetInstance()->DisableProcessing();
|
|
|
|
SyncPoint::GetInstance()->ClearAllCallBacks();
|
|
|
|
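// Enforce the ordering: the atomic flush is scheduled first, then one CF is
// dropped, and only then does the background flush job actually run.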
SyncPoint::GetInstance()->LoadDependency(
|
|
|
|
{{"DBImpl::AtomicFlushMemTables:AfterScheduleFlush",
|
|
|
|
"DBAtomicFlushTest::BeforeDropCF"},
|
|
|
|
{"DBAtomicFlushTest::AfterDropCF",
|
|
|
|
"DBImpl::BackgroundCallFlush:start"}});
|
|
|
|
SyncPoint::GetInstance()->EnableProcessing();
|
|
|
|
|
|
|
|
size_t num_cfs = handles_.size();
|
|
|
|
ASSERT_EQ(3, num_cfs);
|
|
|
|
WriteOptions wopts;
|
|
|
|
wopts.disableWAL = true;
|
|
|
|
for (size_t i = 0; i != num_cfs; ++i) {
|
|
|
|
int cf_id = static_cast<int>(i);
|
|
|
|
ASSERT_OK(Put(cf_id, "key", "value", wopts));
|
|
|
|
}
|
|
|
|
port::Thread user_thread([&]() {
|
|
|
|
TEST_SYNC_POINT("DBAtomicFlushTest::BeforeDropCF");
|
|
|
|
ASSERT_OK(dbfull()->DropColumnFamily(handles_[1]));
|
|
|
|
TEST_SYNC_POINT("DBAtomicFlushTest::AfterDropCF");
|
|
|
|
});
|
|
|
|
FlushOptions flush_opts;
|
|
|
|
flush_opts.wait = true;
|
|
|
|
ASSERT_OK(dbfull()->Flush(flush_opts, handles_));
|
|
|
|
user_thread.join();
|
|
|
|
for (size_t i = 0; i != num_cfs; ++i) {
|
|
|
|
int cf_id = static_cast<int>(i);
|
|
|
|
ASSERT_EQ("value", Get(cf_id, "key"));
|
|
|
|
}
|
|
|
|
|
|
|
|
ReopenWithColumnFamilies({kDefaultColumnFamilyName, "eevee"}, options);
|
|
|
|
num_cfs = handles_.size();
|
|
|
|
ASSERT_EQ(2, num_cfs);
|
|
|
|
for (size_t i = 0; i != num_cfs; ++i) {
|
|
|
|
int cf_id = static_cast<int>(i);
|
|
|
|
ASSERT_EQ("value", Get(cf_id, "key"));
|
|
|
|
}
|
|
|
|
Destroy(options);
|
|
|
|
}
|
|
|
|
|
2019-04-29 19:29:57 +00:00
|
|
|
TEST_P(DBAtomicFlushTest, TriggerFlushAndClose) {
|
|
|
|
bool atomic_flush = GetParam();
|
|
|
|
if (!atomic_flush) {
|
|
|
|
return;
|
|
|
|
}
|
|
|
|
const int kNumKeysTriggerFlush = 4;
|
|
|
|
Options options = CurrentOptions();
|
|
|
|
options.create_if_missing = true;
|
|
|
|
options.atomic_flush = atomic_flush;
|
|
|
|
options.memtable_factory.reset(
|
2021-09-08 14:45:59 +00:00
|
|
|
test::NewSpecialSkipListFactory(kNumKeysTriggerFlush));
|
2019-04-29 19:29:57 +00:00
|
|
|
CreateAndReopenWithCF({"pikachu"}, options);
|
|
|
|
|
|
|
|
for (int i = 0; i != kNumKeysTriggerFlush; ++i) {
|
|
|
|
ASSERT_OK(Put(0, "key" + std::to_string(i), "value" + std::to_string(i)));
|
|
|
|
}
|
|
|
|
SyncPoint::GetInstance()->DisableProcessing();
|
|
|
|
SyncPoint::GetInstance()->ClearAllCallBacks();
|
|
|
|
SyncPoint::GetInstance()->EnableProcessing();
|
|
|
|
ASSERT_OK(Put(0, "key", "value"));
|
|
|
|
Close();
|
|
|
|
|
|
|
|
ReopenWithColumnFamilies({kDefaultColumnFamilyName, "pikachu"}, options);
|
|
|
|
ASSERT_EQ("value", Get(0, "key"));
|
|
|
|
}
|
|
|
|
|
2019-05-11 00:53:41 +00:00
|
|
|
TEST_P(DBAtomicFlushTest, PickMemtablesRaceWithBackgroundFlush) {
|
|
|
|
bool atomic_flush = GetParam();
|
|
|
|
Options options = CurrentOptions();
|
|
|
|
options.create_if_missing = true;
|
|
|
|
options.atomic_flush = atomic_flush;
|
|
|
|
options.max_write_buffer_number = 4;
|
|
|
|
// Set min_write_buffer_number_to_merge to be greater than 1, so that
|
|
|
|
// a column family with one memtable in the imm will not cause IsFlushPending
|
|
|
|
// to return true when flush_requested_ is false.
|
|
|
|
options.min_write_buffer_number_to_merge = 2;
|
|
|
|
CreateAndReopenWithCF({"pikachu"}, options);
|
|
|
|
ASSERT_EQ(2, handles_.size());
|
|
|
|
ASSERT_OK(dbfull()->PauseBackgroundWork());
|
|
|
|
ASSERT_OK(Put(0, "key00", "value00"));
|
|
|
|
ASSERT_OK(Put(1, "key10", "value10"));
|
|
|
|
FlushOptions flush_opts;
|
|
|
|
flush_opts.wait = false;
|
|
|
|
ASSERT_OK(dbfull()->Flush(flush_opts, handles_));
|
|
|
|
ASSERT_OK(Put(0, "key01", "value01"));
|
|
|
|
// Since max_write_buffer_number is 4, the following flush won't cause write
|
|
|
|
// stall.
|
|
|
|
ASSERT_OK(dbfull()->Flush(flush_opts));
|
|
|
|
ASSERT_OK(dbfull()->DropColumnFamily(handles_[1]));
|
|
|
|
ASSERT_OK(dbfull()->DestroyColumnFamilyHandle(handles_[1]));
|
|
|
|
handles_[1] = nullptr;
|
|
|
|
ASSERT_OK(dbfull()->ContinueBackgroundWork());
|
|
|
|
ASSERT_OK(dbfull()->TEST_WaitForFlushMemTable(handles_[0]));
|
|
|
|
delete handles_[0];
|
|
|
|
handles_.clear();
|
|
|
|
}
|
|
|
|
|
2019-07-01 21:04:10 +00:00
|
|
|
TEST_P(DBAtomicFlushTest, CFDropRaceWithWaitForFlushMemTables) {
|
|
|
|
bool atomic_flush = GetParam();
|
|
|
|
if (!atomic_flush) {
|
|
|
|
return;
|
|
|
|
}
|
|
|
|
Options options = CurrentOptions();
|
|
|
|
options.create_if_missing = true;
|
|
|
|
options.atomic_flush = atomic_flush;
|
|
|
|
CreateAndReopenWithCF({"pikachu"}, options);
|
|
|
|
SyncPoint::GetInstance()->DisableProcessing();
|
|
|
|
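// Same race as the non-atomic variant above: schedule the atomic flush, drop
// one CF and free its handle, then let the background flush run before
// AtomicFlushMemTables waits for it.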
SyncPoint::GetInstance()->LoadDependency(
|
|
|
|
{{"DBImpl::AtomicFlushMemTables:AfterScheduleFlush",
|
|
|
|
"DBAtomicFlushTest::CFDropRaceWithWaitForFlushMemTables:BeforeDrop"},
|
|
|
|
{"DBAtomicFlushTest::CFDropRaceWithWaitForFlushMemTables:AfterFree",
|
|
|
|
"DBImpl::BackgroundCallFlush:start"},
|
|
|
|
{"DBImpl::BackgroundCallFlush:start",
|
|
|
|
"DBImpl::AtomicFlushMemTables:BeforeWaitForBgFlush"}});
|
|
|
|
SyncPoint::GetInstance()->EnableProcessing();
|
|
|
|
ASSERT_EQ(2, handles_.size());
|
|
|
|
ASSERT_OK(Put(0, "key", "value"));
|
|
|
|
ASSERT_OK(Put(1, "key", "value"));
|
|
|
|
auto* cfd_default =
|
|
|
|
static_cast<ColumnFamilyHandleImpl*>(dbfull()->DefaultColumnFamily())
|
|
|
|
->cfd();
|
|
|
|
auto* cfd_pikachu = static_cast<ColumnFamilyHandleImpl*>(handles_[1])->cfd();
|
|
|
|
port::Thread drop_cf_thr([&]() {
|
|
|
|
TEST_SYNC_POINT(
|
|
|
|
"DBAtomicFlushTest::CFDropRaceWithWaitForFlushMemTables:BeforeDrop");
|
|
|
|
ASSERT_OK(dbfull()->DropColumnFamily(handles_[1]));
|
|
|
|
delete handles_[1];
|
|
|
|
handles_.resize(1);
|
|
|
|
TEST_SYNC_POINT(
|
|
|
|
"DBAtomicFlushTest::CFDropRaceWithWaitForFlushMemTables:AfterFree");
|
|
|
|
});
|
|
|
|
FlushOptions flush_opts;
|
|
|
|
flush_opts.allow_write_stall = true;
|
|
|
|
ASSERT_OK(dbfull()->TEST_AtomicFlushMemTables({cfd_default, cfd_pikachu},
|
|
|
|
flush_opts));
|
|
|
|
drop_cf_thr.join();
|
|
|
|
Close();
|
|
|
|
SyncPoint::GetInstance()->DisableProcessing();
|
|
|
|
}
|
|
|
|
|
2020-02-07 18:50:17 +00:00
|
|
|
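// RollbackAfterFailToInstallResults turns the filesystem off right before the
// last version edit of the flush is written, so installing the flush result
// into the MANIFEST fails; the flush call is expected to surface the error
// instead of leaving half-installed results behind.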
TEST_P(DBAtomicFlushTest, RollbackAfterFailToInstallResults) {
  bool atomic_flush = GetParam();
  if (!atomic_flush) {
    return;
  }
  auto fault_injection_env = std::make_shared<FaultInjectionTestEnv>(env_);
  Options options = CurrentOptions();
  options.env = fault_injection_env.get();
  options.create_if_missing = true;
  options.atomic_flush = atomic_flush;
  CreateAndReopenWithCF({"pikachu"}, options);
  ASSERT_EQ(2, handles_.size());
  for (size_t cf = 0; cf < handles_.size(); ++cf) {
    ASSERT_OK(Put(static_cast<int>(cf), "a", "value"));
  }
  SyncPoint::GetInstance()->DisableProcessing();
  SyncPoint::GetInstance()->ClearAllCallBacks();
  SyncPoint::GetInstance()->SetCallBack(
      "VersionSet::ProcessManifestWrites:BeforeWriteLastVersionEdit:0",
      [&](void* /*arg*/) { fault_injection_env->SetFilesystemActive(false); });
  SyncPoint::GetInstance()->EnableProcessing();
  FlushOptions flush_opts;
  Status s = db_->Flush(flush_opts, handles_);
  ASSERT_NOK(s);
  fault_injection_env->SetFilesystemActive(true);
  Close();
  SyncPoint::GetInstance()->ClearAllCallBacks();
}

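// FailureInMultiCfAutomaticFlush deactivates the filesystem just before the
// write path switches memtables for an automatic flush, so the scheduled
// flush fails in the background; the next Put is then expected to be rejected
// because of the resulting background error.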
TEST_P(DBAtomicFlushTest, FailureInMultiCfAutomaticFlush) {
  bool atomic_flush = GetParam();
  auto fault_injection_env = std::make_shared<FaultInjectionTestEnv>(env_);
  Options options = CurrentOptions();
  options.env = fault_injection_env.get();
  options.create_if_missing = true;
  options.atomic_flush = atomic_flush;
  const int kNumKeysTriggerFlush = 4;
  options.memtable_factory.reset(
      test::NewSpecialSkipListFactory(kNumKeysTriggerFlush));
  CreateAndReopenWithCF({"pikachu"}, options);
  ASSERT_EQ(2, handles_.size());
  for (size_t cf = 0; cf < handles_.size(); ++cf) {
    ASSERT_OK(Put(static_cast<int>(cf), "a", "value"));
  }
  SyncPoint::GetInstance()->DisableProcessing();
  SyncPoint::GetInstance()->ClearAllCallBacks();
  SyncPoint::GetInstance()->SetCallBack(
      "DBImpl::ScheduleFlushes:PreSwitchMemtable",
      [&](void* /*arg*/) { fault_injection_env->SetFilesystemActive(false); });
  SyncPoint::GetInstance()->EnableProcessing();

  for (int i = 1; i < kNumKeysTriggerFlush; ++i) {
    ASSERT_OK(Put(0, "key" + std::to_string(i), "value" + std::to_string(i)));
  }
  ASSERT_OK(dbfull()->TEST_WaitForFlushMemTable());
  // Next write after failed flush should fail.
  ASSERT_NOK(Put(0, "x", "y"));
  fault_injection_env->SetFilesystemActive(true);
  Close();
  SyncPoint::GetInstance()->ClearAllCallBacks();
}

// In atomic flush, concurrent bg flush threads commit to the MANIFEST in
// serial, in the order of their picked memtables for each column family.
// Only when a bg flush thread finds out that its memtables are the earliest
// unflushed ones for all the included column families will this bg flush
// thread continue to commit to MANIFEST.
// This unit test uses sync points to coordinate the execution of two bg
// threads executing the same sequence of functions. The interleaving is as
// follows.
// time                bg1                              bg2
//  |   pick memtables to flush
//  |   flush memtables cf1_m1, cf2_m1
//  |   join MANIFEST write queue
//  |                                     pick memtables to flush
//  |                                     flush memtables cf1_(m1+1)
//  |                                     join MANIFEST write queue
//  |                                     wait to write MANIFEST
//  |   write MANIFEST
//  |   IO error
//  |                                     detect IO error and stop waiting
//  V
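//
// In the test below, the LoadDependency edge forces bg flush thread 1 to
// stall at BgFlushThr1:BeforeWriteManifest until bg flush thread 2 has
// reached WaitToCommit once, and the AfterSyncManifest callback injects an
// IOError into the MANIFEST sync so that thread 2's second visit to
// WaitToCommit observes the error and stops waiting instead of blocking
// forever.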
TEST_P(DBAtomicFlushTest, BgThreadNoWaitAfterManifestError) {
  bool atomic_flush = GetParam();
  if (!atomic_flush) {
    return;
  }
  auto fault_injection_env = std::make_shared<FaultInjectionTestEnv>(env_);
  Options options = GetDefaultOptions();
  options.create_if_missing = true;
  options.atomic_flush = true;
  options.env = fault_injection_env.get();
  // Set a larger value than default so that RocksDB can schedule concurrent
  // background flush threads.
  options.max_background_jobs = 8;
  options.max_write_buffer_number = 8;
  CreateAndReopenWithCF({"pikachu"}, options);

  assert(2 == handles_.size());

  WriteOptions write_opts;
  write_opts.disableWAL = true;

  ASSERT_OK(Put(0, "a", "v_0_a", write_opts));
  ASSERT_OK(Put(1, "a", "v_1_a", write_opts));

  SyncPoint::GetInstance()->DisableProcessing();
  SyncPoint::GetInstance()->ClearAllCallBacks();

  SyncPoint::GetInstance()->LoadDependency({
      {"BgFlushThr2:WaitToCommit", "BgFlushThr1:BeforeWriteManifest"},
  });

  std::thread::id bg_flush_thr1, bg_flush_thr2;
  SyncPoint::GetInstance()->SetCallBack(
      "DBImpl::BackgroundCallFlush:start", [&](void*) {
        if (bg_flush_thr1 == std::thread::id()) {
          bg_flush_thr1 = std::this_thread::get_id();
        } else if (bg_flush_thr2 == std::thread::id()) {
          bg_flush_thr2 = std::this_thread::get_id();
        }
      });

  int called = 0;
  SyncPoint::GetInstance()->SetCallBack(
      "DBImpl::AtomicFlushMemTablesToOutputFiles:WaitToCommit", [&](void* arg) {
        if (std::this_thread::get_id() == bg_flush_thr2) {
          const auto* ptr = reinterpret_cast<std::pair<Status, bool>*>(arg);
          assert(ptr);
          if (0 == called) {
            // When bg flush thread 2 reaches here for the first time.
            ASSERT_OK(ptr->first);
            ASSERT_TRUE(ptr->second);
          } else if (1 == called) {
            // When bg flush thread 2 reaches here for the second time.
            ASSERT_TRUE(ptr->first.IsIOError());
            ASSERT_FALSE(ptr->second);
          }
          ++called;
          TEST_SYNC_POINT("BgFlushThr2:WaitToCommit");
        }
      });

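  // The next two callbacks only act on bg flush thread 1: it announces
  // BgFlushThr1:BeforeWriteManifest before writing the last version edit, and
  // from VersionSet::LogAndApply:WriteManifest it issues another Put plus a
  // non-waiting flush of the default column family, creating the second flush
  // job that bg flush thread 2 will try to commit.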
  SyncPoint::GetInstance()->SetCallBack(
      "VersionSet::ProcessManifestWrites:BeforeWriteLastVersionEdit:0",
      [&](void*) {
        if (std::this_thread::get_id() == bg_flush_thr1) {
          TEST_SYNC_POINT("BgFlushThr1:BeforeWriteManifest");
        }
      });

  SyncPoint::GetInstance()->SetCallBack(
      "VersionSet::LogAndApply:WriteManifest", [&](void*) {
        if (std::this_thread::get_id() != bg_flush_thr1) {
          return;
        }
        ASSERT_OK(db_->Put(write_opts, "b", "v_1_b"));

        FlushOptions flush_opts;
        flush_opts.wait = false;
        std::vector<ColumnFamilyHandle*> cfhs(1, db_->DefaultColumnFamily());
        ASSERT_OK(dbfull()->Flush(flush_opts, cfhs));
      });

  SyncPoint::GetInstance()->SetCallBack(
      "VersionSet::ProcessManifestWrites:AfterSyncManifest", [&](void* arg) {
        auto* ptr = static_cast<IOStatus*>(arg);
        assert(ptr);
        *ptr = IOStatus::IOError("Injected failure");
      });
  SyncPoint::GetInstance()->EnableProcessing();

  ASSERT_TRUE(dbfull()->Flush(FlushOptions(), handles_).IsIOError());

  Close();
  SyncPoint::GetInstance()->DisableProcessing();
  SyncPoint::GetInstance()->ClearAllCallBacks();
}

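// NoWaitWhenWritesStopped fills every allowed write buffer while background
// work is paused so that an extra writer stalls in DelayWrite, then issues a
// non-waiting flush with allow_write_stall=true and expects TryAgain rather
// than a flush request that would queue up behind the stopped writes.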
TEST_P(DBAtomicFlushTest, NoWaitWhenWritesStopped) {
  Options options = GetDefaultOptions();
  options.create_if_missing = true;
  options.atomic_flush = GetParam();
  options.max_write_buffer_number = 2;
  options.memtable_factory.reset(test::NewSpecialSkipListFactory(1));

  Reopen(options);

  SyncPoint::GetInstance()->DisableProcessing();
  SyncPoint::GetInstance()->LoadDependency(
      {{"DBImpl::DelayWrite:Start",
        "DBAtomicFlushTest::NoWaitWhenWritesStopped:0"}});
  SyncPoint::GetInstance()->EnableProcessing();

  ASSERT_OK(dbfull()->PauseBackgroundWork());
  for (int i = 0; i < options.max_write_buffer_number; ++i) {
    ASSERT_OK(Put("k" + std::to_string(i), "v" + std::to_string(i)));
  }
  std::thread stalled_writer([&]() { ASSERT_OK(Put("k", "v")); });

  TEST_SYNC_POINT("DBAtomicFlushTest::NoWaitWhenWritesStopped:0");

  {
    FlushOptions flush_opts;
    flush_opts.wait = false;
    flush_opts.allow_write_stall = true;
    ASSERT_TRUE(db_->Flush(flush_opts).IsTryAgain());
  }

  ASSERT_OK(dbfull()->ContinueBackgroundWork());
  ASSERT_OK(dbfull()->TEST_WaitForFlushMemTable());

  stalled_writer.join();

  SyncPoint::GetInstance()->DisableProcessing();
}

INSTANTIATE_TEST_CASE_P(DBFlushDirectIOTest, DBFlushDirectIOTest,
                        testing::Bool());

INSTANTIATE_TEST_CASE_P(DBAtomicFlushTest, DBAtomicFlushTest, testing::Bool());

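// The repro tests below rely on test::NewSpecialSkipListFactory(1), which
// makes a memtable count as full after a single entry, so each extra Put
// seals the current memtable and schedules another background flush; that is
// what lets the sync points line up several concurrent flush jobs.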
TEST_F(DBFlushTest, NonAtomicFlushRollbackPendingFlushes) {
  // Fix a bug when atomic_flush=false.
  // The bug can happen as follows:
  // Start Flush0 for memtable M0 to SST0
  // Start Flush1 for memtable M1 to SST1
  // Flush1 returns OK, but does not install to MANIFEST and lets whoever
  // flushes M0 take care of it
  // Flush0 finishes with a retryable IOError
  //   - It rolls back M0, but (incorrectly) not M1
  //   - Deletes SST1 and SST2
  //
  // Auto-recovery will start Flush2 for M0; it does not pick up M1 since it
  // thinks that M1 is flushed
  // Flush2 writes SST3 and finishes OK, tries to install SST3 and SST2
  // Error opening SST2 since it's already deleted
  //
  // The fix is to let Flush0 also roll back M1.
  Options opts = CurrentOptions();
  opts.atomic_flush = false;
  opts.memtable_factory.reset(test::NewSpecialSkipListFactory(1));
  opts.max_write_buffer_number = 64;
  opts.max_background_flushes = 4;
  env_->SetBackgroundThreads(4, Env::HIGH);
  DestroyAndReopen(opts);
  std::atomic_int flush_count = 0;
  SyncPoint::GetInstance()->ClearAllCallBacks();
  SyncPoint::GetInstance()->DisableProcessing();
  SyncPoint::GetInstance()->SetCallBack(
      "FlushJob::WriteLevel0Table:s", [&](void* s_ptr) {
        int c = flush_count.fetch_add(1);
        if (c == 0) {
          Status* s = (Status*)(s_ptr);
          IOStatus io_error = IOStatus::IOError("injected foobar");
          io_error.SetRetryable(true);
          *s = io_error;
          TEST_SYNC_POINT("Let mem1 flush start");
          TEST_SYNC_POINT("Wait for mem1 flush to finish");
        }
      });
  SyncPoint::GetInstance()->LoadDependency(
      {{"Let mem1 flush start", "Mem1 flush starts"},
       {"DBImpl::BGWorkFlush:done", "Wait for mem1 flush to finish"},
       {"RecoverFromRetryableBGIOError:RecoverSuccess",
        "Wait for error recover"}});
  // Need the first flush to wait for the second flush to finish.
  SyncPoint::GetInstance()->EnableProcessing();
  ASSERT_OK(Put(Key(1), "val1"));
  // trigger bg flush mem0
  ASSERT_OK(Put(Key(2), "val2"));
  TEST_SYNC_POINT("Mem1 flush starts");
  // trigger bg flush mem1
  ASSERT_OK(Put(Key(3), "val3"));

  TEST_SYNC_POINT("Wait for error recover");
  ASSERT_EQ(1, NumTableFilesAtLevel(0));
  SyncPoint::GetInstance()->ClearAllCallBacks();
  SyncPoint::GetInstance()->DisableProcessing();
}

TEST_F(DBFlushTest, AbortNonAtomicFlushWhenBGError) {
  // Fix a bug when atomic_flush=false.
  // The bug can happen as follows:
  // Start Flush0 for memtable M0 to SST0
  // Start Flush1 for memtable M1 to SST1
  // Flush1 returns OK, but does not install its output to the MANIFEST and
  // lets whoever flushes M0 take care of it
  // Start Flush2 for memtable M2 to SST2
  // Flush0 finishes with a retryable IOError
  //   - It rolls back M0 AND M1
  //   - Deletes SST1 and SST2
  // Flush2 finishes, does not roll back M2,
  //   - releases the pending file number that keeps SST2 alive
  //   - deletes SST2
  //
  // Then auto-recovery starts and hits an error opening SST2 when it tries
  // to install the flush result
  //
  // The fix is to let Flush2 roll back M2 if it finds that
  // there is a background error.
  Options opts = CurrentOptions();
  opts.atomic_flush = false;
  opts.memtable_factory.reset(test::NewSpecialSkipListFactory(1));
  opts.max_write_buffer_number = 64;
  opts.max_background_flushes = 4;
  env_->SetBackgroundThreads(4, Env::HIGH);
  DestroyAndReopen(opts);
  std::atomic_int flush_count = 0;
  SyncPoint::GetInstance()->ClearAllCallBacks();
  SyncPoint::GetInstance()->DisableProcessing();
  SyncPoint::GetInstance()->SetCallBack(
      "FlushJob::WriteLevel0Table:s", [&](void* s_ptr) {
        int c = flush_count.fetch_add(1);
        if (c == 0) {
          Status* s = (Status*)(s_ptr);
          IOStatus io_error = IOStatus::IOError("injected foobar");
          io_error.SetRetryable(true);
          *s = io_error;
          TEST_SYNC_POINT("Let mem1 flush start");
          TEST_SYNC_POINT("Wait for mem1 flush to finish");

          TEST_SYNC_POINT("Let mem2 flush start");
          TEST_SYNC_POINT("Wait for mem2 to start writing table");
        }
      });

  SyncPoint::GetInstance()->SetCallBack(
      "FlushJob::WriteLevel0Table", [&](void* mems) {
        autovector<MemTable*>* mems_ptr = (autovector<MemTable*>*)mems;
        if ((*mems_ptr)[0]->GetID() == 3) {
          TEST_SYNC_POINT("Mem2 flush starts writing table");
          TEST_SYNC_POINT("Mem2 flush waits until rollback");
        }
      });
  SyncPoint::GetInstance()->LoadDependency(
      {{"Let mem1 flush start", "Mem1 flush starts"},
       {"DBImpl::BGWorkFlush:done", "Wait for mem1 flush to finish"},
       {"Let mem2 flush start", "Mem2 flush starts"},
       {"Mem2 flush starts writing table",
        "Wait for mem2 to start writing table"},
       {"RollbackMemtableFlush", "Mem2 flush waits until rollback"},
       {"RecoverFromRetryableBGIOError:RecoverSuccess",
        "Wait for error recover"}});
  SyncPoint::GetInstance()->EnableProcessing();

  ASSERT_OK(Put(Key(1), "val1"));
  // trigger bg flush mem0
  ASSERT_OK(Put(Key(2), "val2"));
  TEST_SYNC_POINT("Mem1 flush starts");
  // trigger bg flush mem1
  ASSERT_OK(Put(Key(3), "val3"));

  TEST_SYNC_POINT("Mem2 flush starts");
  ASSERT_OK(Put(Key(4), "val4"));

  TEST_SYNC_POINT("Wait for error recover");
  // Recovery flush writes 3 memtables together into 1 file.
  ASSERT_EQ(1, NumTableFilesAtLevel(0));
  SyncPoint::GetInstance()->ClearAllCallBacks();
  SyncPoint::GetInstance()->DisableProcessing();
}

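// NonAtomicNormalFlushAbortWhenBGError injects a retryable IOError into the
// first flush only; with max_background_flushes=1, the next scheduled flush
// must notice the pending background error and abort before picking a
// memtable, so only the recovery flush ends up installing an L0 file.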
TEST_F(DBFlushTest, NonAtomicNormalFlushAbortWhenBGError) {
  Options opts = CurrentOptions();
  opts.atomic_flush = false;
  opts.memtable_factory.reset(test::NewSpecialSkipListFactory(1));
  opts.max_write_buffer_number = 64;
  opts.max_background_flushes = 1;
  env_->SetBackgroundThreads(2, Env::HIGH);
  DestroyAndReopen(opts);
  SyncPoint::GetInstance()->ClearAllCallBacks();
  SyncPoint::GetInstance()->DisableProcessing();
  std::atomic_int flush_write_table_count = 0;
  SyncPoint::GetInstance()->SetCallBack(
      "FlushJob::WriteLevel0Table:s", [&](void* s_ptr) {
        int c = flush_write_table_count.fetch_add(1);
        if (c == 0) {
          Status* s = (Status*)(s_ptr);
          IOStatus io_error = IOStatus::IOError("injected foobar");
          io_error.SetRetryable(true);
          *s = io_error;
        }
      });

  SyncPoint::GetInstance()->EnableProcessing();
  SyncPoint::GetInstance()->LoadDependency(
      {{"Let error recovery start",
        "RecoverFromRetryableBGIOError:BeforeStart"},
       {"RecoverFromRetryableBGIOError:RecoverSuccess",
        "Wait for error recover"}});

  ASSERT_OK(Put(Key(1), "val1"));
  // trigger bg flush0 for mem0
  ASSERT_OK(Put(Key(2), "val2"));
  // Not checking status since this wait can finish before flush starts.
  dbfull()->TEST_WaitForFlushMemTable().PermitUncheckedError();

  // trigger bg flush1 for mem1, should see bg error and abort
  // before picking a memtable to flush
  ASSERT_OK(Put(Key(3), "val3"));
  ASSERT_NOK(dbfull()->TEST_WaitForFlushMemTable());
  ASSERT_EQ(0, NumTableFilesAtLevel(0));

  TEST_SYNC_POINT("Let error recovery start");
  TEST_SYNC_POINT("Wait for error recover");
  // Recovery flush writes 2 memtables together into 1 file.
  ASSERT_EQ(1, NumTableFilesAtLevel(0));
  // 1 for flush 0 and 1 for recovery flush
  ASSERT_EQ(2, flush_write_table_count);
  SyncPoint::GetInstance()->ClearAllCallBacks();
  SyncPoint::GetInstance()->DisableProcessing();
}

TEST_F(DBFlushTest, DBStuckAfterAtomicFlushError) {
  // Test for a bug with atomic flush where DB can become stuck
  // after a flush error. A repro timeline:
  //
  // Start Flush0 for mem0
  // Start Flush1 for mem1
  // Now Flush1 will wait for Flush0 to install mem0
  // Flush0 finishes with a retryable IOError and rolls back mem0
  // Resume starts and waits for background jobs to finish, i.e., Flush1
  // Fill memtable again, trigger Flush2 for mem0
  // Flush2 will get an error status and not roll back mem0, see code in
  // https://github.com/facebook/rocksdb/blob/b927ba5936216861c2c35ab68f50ba4a78e65747/db/db_impl/db_impl_compaction_flush.cc#L725
  //
  // DB is stuck since mem0 can never be picked now
  //
  // The fix is to roll back mem0 in Flush2, and let Flush1 also abort upon
  // background error besides waiting for older memtables to be installed.
  // The recovery flush in this case should pick up all memtables
  // and write them to a single L0 file.
  Options opts = CurrentOptions();
  opts.atomic_flush = true;
  opts.memtable_factory.reset(test::NewSpecialSkipListFactory(1));
  opts.max_write_buffer_number = 64;
  opts.max_background_flushes = 4;
  env_->SetBackgroundThreads(4, Env::HIGH);
  DestroyAndReopen(opts);

  std::atomic_int flush_count = 0;
  SyncPoint::GetInstance()->ClearAllCallBacks();
  SyncPoint::GetInstance()->DisableProcessing();
  SyncPoint::GetInstance()->SetCallBack(
      "FlushJob::WriteLevel0Table:s", [&](void* s_ptr) {
        int c = flush_count.fetch_add(1);
        if (c == 0) {
          Status* s = (Status*)(s_ptr);
          IOStatus io_error = IOStatus::IOError("injected foobar");
          io_error.SetRetryable(true);
          *s = io_error;
          TEST_SYNC_POINT("Let flush for mem1 start");
          // Wait for Flush1 to start waiting to install flush result
          TEST_SYNC_POINT("Wait for flush for mem1");
        }
      });
  SyncPoint::GetInstance()->LoadDependency(
      {{"Let flush for mem1 start", "Flush for mem1"},
       {"DBImpl::AtomicFlushMemTablesToOutputFiles:WaitCV",
        "Wait for flush for mem1"},
       {"RecoverFromRetryableBGIOError:BeforeStart",
        "Wait for resume to start"},
       {"Recovery should continue here",
        "RecoverFromRetryableBGIOError:BeforeStart2"},
       {"RecoverFromRetryableBGIOError:RecoverSuccess",
        "Wait for error recover"}});
  SyncPoint::GetInstance()->EnableProcessing();
  ASSERT_OK(Put(Key(1), "val1"));
  // trigger Flush0 for mem0
  ASSERT_OK(Put(Key(2), "val2"));

  // trigger Flush1 for mem1
  TEST_SYNC_POINT("Flush for mem1");
  ASSERT_OK(Put(Key(3), "val3"));

  // Wait until resume has started to schedule another flush
  TEST_SYNC_POINT("Wait for resume to start");
  // This flush should not be scheduled due to bg error
  ASSERT_OK(Put(Key(4), "val4"));

  // TEST_WaitForBackgroundWork() returns the background error
  // after all background work is done.
  ASSERT_NOK(dbfull()->TEST_WaitForBackgroundWork());
  // Flush should abort and not write any table
  ASSERT_EQ(0, NumTableFilesAtLevel(0));

  // Wait until this flush is done.
  TEST_SYNC_POINT("Recovery should continue here");
  TEST_SYNC_POINT("Wait for error recover");
  // Error recovery can schedule new flushes, but should not
  // encounter an error
  ASSERT_OK(dbfull()->TEST_WaitForBackgroundWork());
  ASSERT_EQ(1, NumTableFilesAtLevel(0));
}

}  // namespace ROCKSDB_NAMESPACE

int main(int argc, char** argv) {
  ROCKSDB_NAMESPACE::port::InstallStackTraceHandler();
  ::testing::InitGoogleTest(&argc, argv);
  return RUN_ALL_TESTS();
}