rocksdb/db/wal_manager_test.cc
Hui Xiao 06e593376c Group SST write in flush, compaction and db open with new stats (#11910)
Summary:
## Context/Summary
Similar to https://github.com/facebook/rocksdb/pull/11288 and https://github.com/facebook/rocksdb/pull/11444, categorizing SST/blob file writes according to the IO activity that produced them (flush, compaction, DB open) gives more insight into write behavior.

For that, this PR does the following:
- Tag different write IOs by passing down and converting WriteOptions to IOOptions
- Add a new SST_WRITE_MICROS histogram in WritableFileWriter::Append() and break it down into FILE_WRITE_{FLUSH|COMPACTION|DB_OPEN}_MICROS (see the sketch after this list)

Some related code refactoring to make the implementation cleaner:
- Blob stats
   - Replace the high-level write measurement with a low-level WritableFileWriter::Append() measurement for BLOB_DB_BLOB_FILE_WRITE_MICROS. This makes FILE_WRITE_{FLUSH|COMPACTION|DB_OPEN}_MICROS include blob file writes. As a consequence, it introduces some behavioral changes to that histogram; see HISTORY and the db_bench test plan below for more info.
   - Fix bugs where BLOB_DB_BLOB_FILE_SYNCED/BLOB_DB_BLOB_FILE_BYTES_WRITTEN included files that failed to sync and bytes that failed to write.
- Refactor WriteOptions constructor for easier construction with io_activity and rate_limiter_priority
- Refactor DBImpl::~DBImpl()/BlobDBImpl::Close() to bypass thread op verification
- Build table
   - TableBuilderOptions now includes Read/WriteOptions so BuildTable() does not need to take these two parameters
   - Replace the io_priority passed into BuildTable() with TableBuilderOptions::WriteOptions::rate_limiter_priority. Similarly for BlobFileBuilder.
This parameter is used for dynamically changing file IO priority for flush; see https://github.com/facebook/rocksdb/pull/9988?fbclid=IwAR1DtKel6c-bRJAdesGo0jsbztRtciByNlvokbxkV6h_L-AE9MACzqRTT5s for more details.
   - Update ThreadStatus::FLUSH_BYTES_WRITTEN to use io_activity instead of io_priority to track flush IO in the flush job and db open
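
As a rough illustration of the breakdown above, here is a minimal sketch (not this PR's actual code) of how a single Append()-level timing sample fans out into the overall histogram plus the histogram for the IO activity that issued the write. It assumes the Histograms enum names match the stat strings in the test plan below and that the io_activity tag from the converted IOOptions is visible at the write site; RecordSstWriteMicros is a made-up helper name for this example.
```
#include <cstdint>

#include "rocksdb/env.h"
#include "rocksdb/statistics.h"

namespace ROCKSDB_NAMESPACE {

// Hypothetical helper, not the PR's implementation: record one file write
// sample in the overall histogram and in the histogram matching the tagged
// IO activity.
inline void RecordSstWriteMicros(Statistics* stats, Env::IOActivity activity,
                                 uint64_t elapsed_micros) {
  if (stats == nullptr) {
    return;
  }
  // Every tracked file write lands in the overall histogram...
  stats->reportTimeToHistogram(SST_WRITE_MICROS, elapsed_micros);
  // ...and additionally in the histogram for the activity that issued it.
  switch (activity) {
    case Env::IOActivity::kFlush:
      stats->reportTimeToHistogram(FILE_WRITE_FLUSH_MICROS, elapsed_micros);
      break;
    case Env::IOActivity::kCompaction:
      stats->reportTimeToHistogram(FILE_WRITE_COMPACTION_MICROS,
                                   elapsed_micros);
      break;
    case Env::IOActivity::kDBOpen:
      stats->reportTimeToHistogram(FILE_WRITE_DB_OPEN_MICROS, elapsed_micros);
      break;
    default:
      break;  // e.g. kUnknown: only the overall histogram is updated
  }
}

}  // namespace ROCKSDB_NAMESPACE
```
This is why WriteOptions is passed down and converted to IOOptions: the activity tag travels with each file write, so the classification can happen once at the WritableFileWriter level instead of at every call site.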

## Test
### db bench

Flush
```
./db_bench --statistics=1 --benchmarks=fillseq --num=100000 --write_buffer_size=100

rocksdb.sst.write.micros P50 : 1.830863 P95 : 4.094720 P99 : 6.578947 P100 : 26.000000 COUNT : 7875 SUM : 20377
rocksdb.file.write.flush.micros P50 : 1.830863 P95 : 4.094720 P99 : 6.578947 P100 : 26.000000 COUNT : 7875 SUM : 20377
rocksdb.file.write.compaction.micros P50 : 0.000000 P95 : 0.000000 P99 : 0.000000 P100 : 0.000000 COUNT : 0 SUM : 0
rocksdb.file.write.db.open.micros P50 : 0.000000 P95 : 0.000000 P99 : 0.000000 P100 : 0.000000 COUNT : 0 SUM : 0
```

Compaction, DB open
```
Setup: ./db_bench --statistics=1 --benchmarks=fillseq --num=10000 --disable_auto_compactions=1 -write_buffer_size=100 --db=../db_bench

Run: ./db_bench --statistics=1 --benchmarks=compact  --db=../db_bench --use_existing_db=1

rocksdb.sst.write.micros P50 : 2.675325 P95 : 9.578788 P99 : 18.780000 P100 : 314.000000 COUNT : 638 SUM : 3279
rocksdb.file.write.flush.micros P50 : 0.000000 P95 : 0.000000 P99 : 0.000000 P100 : 0.000000 COUNT : 0 SUM : 0
rocksdb.file.write.compaction.micros P50 : 2.757353 P95 : 9.610687 P99 : 19.316667 P100 : 314.000000 COUNT : 615 SUM : 3213
rocksdb.file.write.db.open.micros P50 : 2.055556 P95 : 3.925000 P99 : 9.000000 P100 : 9.000000 COUNT : 23 SUM : 66
```

Blob stats (just to make sure they aren't broken by this PR)
```
Integrated Blob DB

Setup: ./db_bench --enable_blob_files=1 --statistics=1 --benchmarks=fillseq --num=10000 --disable_auto_compactions=1 -write_buffer_size=100 --db=../db_bench

Run: ./db_bench --enable_blob_files=1 --statistics=1 --benchmarks=compact  --db=../db_bench --use_existing_db=1

pre-PR:
rocksdb.blobdb.blob.file.write.micros P50 : 7.298246 P95 : 9.771930 P99 : 9.991813 P100 : 16.000000 COUNT : 235 SUM : 1600
rocksdb.blobdb.blob.file.synced COUNT : 1
rocksdb.blobdb.blob.file.bytes.written COUNT : 34842

post-PR:
rocksdb.blobdb.blob.file.write.micros P50 : 2.000000 P95 : 2.829360 P99 : 2.993779 P100 : 9.000000 COUNT : 707 SUM : 1614
- COUNT is higher and the values are smaller because the measurement now includes header and footer writes
- COUNT is ~3X higher because each Append() counts as one write post-PR, while pre-PR 3 Append()s counted as one. See https://github.com/facebook/rocksdb/pull/11910/files#diff-32b811c0a1c000768cfb2532052b44dc0b3bf82253f3eab078e15ff201a0dabfL157-L164

rocksdb.blobdb.blob.file.synced COUNT : 1 (stays the same)
rocksdb.blobdb.blob.file.bytes.written COUNT : 34842 (stays the same)
```

```
Stacked Blob DB

Run: ./db_bench --use_blob_db=1 --statistics=1 --benchmarks=fillseq --num=10000 --disable_auto_compactions=1 -write_buffer_size=100 --db=../db_bench

pre-PR:
rocksdb.blobdb.blob.file.write.micros P50 : 12.808042 P95 : 19.674497 P99 : 28.539683 P100 : 51.000000 COUNT : 10000 SUM : 140876
rocksdb.blobdb.blob.file.synced COUNT : 8
rocksdb.blobdb.blob.file.bytes.written COUNT : 1043445

post-PR:
rocksdb.blobdb.blob.file.write.micros P50 : 1.657370 P95 : 2.952175 P99 : 3.877519 P100 : 24.000000 COUNT : 30001 SUM : 67924
- COUNT is higher and the values are smaller because the measurement now includes header and footer writes
- COUNT is ~3X higher because each Append() counts as one write post-PR, while pre-PR 3 Append()s counted as one. See https://github.com/facebook/rocksdb/pull/11910/files#diff-32b811c0a1c000768cfb2532052b44dc0b3bf82253f3eab078e15ff201a0dabfL157-L164

rocksdb.blobdb.blob.file.synced COUNT : 8 (stays the same)
rocksdb.blobdb.blob.file.bytes.written COUNT : 1043445 (stays the same)
```

### Rehearsal CI stress test
Trigger 3 full runs of all our CI stress tests.

### Performance

Flush
```
TEST_TMPDIR=/dev/shm ./db_basic_bench_{pre|post}_pr --benchmark_filter=ManualFlush/key_num:524288/per_key_size:256 --benchmark_repetitions=1000
-- default: 1 thread is used to run benchmark; enable_statistics = true

Pre-pr: avg 507515519.3 ns
497686074,499444327,500862543,501389862,502994471,503744435,504142123,504224056,505724198,506610393,506837742,506955122,507695561,507929036,508307733,508312691,508999120,509963561,510142147,510698091,510743096,510769317,510957074,511053311,511371367,511409911,511432960,511642385,511691964,511730908,

Post-pr: avg 511971266.5 ns, regressed 0.88%
502744835,506502498,507735420,507929724,508313335,509548582,509994942,510107257,510715603,511046955,511352639,511458478,512117521,512317380,512766303,512972652,513059586,513804934,513808980,514059409,514187369,514389494,514447762,514616464,514622882,514641763,514666265,514716377,514990179,515502408,
```

Compaction
```
TEST_TMPDIR=/dev/shm ./db_basic_bench_{pre|post}_pr --benchmark_filter=ManualCompaction/comp_style:0/max_data:134217728/per_key_size:256/enable_statistics:1  --benchmark_repetitions=1000
-- default: 1 thread is used to run benchmark

Pre-pr: avg 495346098.30 ns
492118301,493203526,494201411,494336607,495269217,495404950,496402598,497012157,497358370,498153846

Post-pr: avg 504528077.20 ns, regressed 1.85%. "ManualCompaction" includes flush, so the isolated regression for compaction should be around 1.85% - 0.88% = 0.97%.
502465338,502485945,502541789,502909283,503438601,504143885,506113087,506629423,507160414,507393007
```

Put with WAL (to check whether passing WriteOptions slows down this path even when no SST write stats are collected)
```
TEST_TMPDIR=/dev/shm ./db_basic_bench_{pre|post}_pr --benchmark_filter=DBPut/comp_style:0/max_data:107374182400/per_key_size:256/enable_statistics:1/wal:1  --benchmark_repetitions=1000
-- default: 1 thread is used to run benchmark

Pre-pr: avg 3848.10 ns
3814,3838,3839,3848,3854,3854,3854,3860,3860,3860

Post-pr: avg 3874.20 ns, regressed 0.68%
3863,3867,3871,3874,3875,3877,3877,3877,3880,3881
```

Pull Request resolved: https://github.com/facebook/rocksdb/pull/11910

Reviewed By: ajkr

Differential Revision: D49788060

Pulled By: hx235

fbshipit-source-id: 79e73699cda5be3b66461687e5147c2484fc5eff
2023-12-29 15:29:23 -08:00


// Copyright (c) 2011-present, Facebook, Inc. All rights reserved.
// This source code is licensed under both the GPLv2 (found in the
// COPYING file in the root directory) and Apache 2.0 License
// (found in the LICENSE.Apache file in the root directory).
#include "db/wal_manager.h"
#include <map>
#include <string>
#include "db/column_family.h"
#include "db/db_impl/db_impl.h"
#include "db/log_writer.h"
#include "db/version_set.h"
#include "env/mock_env.h"
#include "file/writable_file_writer.h"
#include "rocksdb/cache.h"
#include "rocksdb/file_system.h"
#include "rocksdb/write_batch.h"
#include "rocksdb/write_buffer_manager.h"
#include "table/mock_table.h"
#include "test_util/testharness.h"
#include "test_util/testutil.h"
#include "util/string_util.h"
namespace ROCKSDB_NAMESPACE {

// TODO(icanadi) mock out VersionSet
// TODO(icanadi) move other WalManager-specific tests from db_test here
class WalManagerTest : public testing::Test {
 public:
  WalManagerTest()
      : dbname_(test::PerThreadDBPath("wal_manager_test")),
        db_options_(),
        table_cache_(NewLRUCache(50000, 16)),
        write_buffer_manager_(db_options_.db_write_buffer_size),
        current_log_number_(0) {
    env_.reset(MockEnv::Create(Env::Default()));
    EXPECT_OK(DestroyDB(dbname_, Options()));
  }

  void Init() {
    ASSERT_OK(env_->CreateDirIfMissing(dbname_));
    ASSERT_OK(env_->CreateDirIfMissing(ArchivalDirectory(dbname_)));
    db_options_.db_paths.emplace_back(dbname_,
                                      std::numeric_limits<uint64_t>::max());
    db_options_.wal_dir = dbname_;
    db_options_.env = env_.get();
    db_options_.fs = env_->GetFileSystem();
    db_options_.clock = env_->GetSystemClock().get();

    versions_.reset(new VersionSet(
        dbname_, &db_options_, env_options_, table_cache_.get(),
        &write_buffer_manager_, &write_controller_,
        /*block_cache_tracer=*/nullptr, /*io_tracer=*/nullptr,
        /*db_id=*/"", /*db_session_id=*/"", /*daily_offpeak_time_utc=*/"",
        /*error_handler=*/nullptr, /*read_only=*/false));

    wal_manager_.reset(
        new WalManager(db_options_, env_options_, nullptr /*IOTracer*/));
  }

  void Reopen() {
    wal_manager_.reset(
        new WalManager(db_options_, env_options_, nullptr /*IOTracer*/));
  }

  // NOT thread safe
  void Put(const std::string& key, const std::string& value) {
    assert(current_log_writer_.get() != nullptr);
    uint64_t seq = versions_->LastSequence() + 1;
    WriteBatch batch;
    ASSERT_OK(batch.Put(key, value));
    WriteBatchInternal::SetSequence(&batch, seq);
    ASSERT_OK(current_log_writer_->AddRecord(
        WriteOptions(), WriteBatchInternal::Contents(&batch)));
    versions_->SetLastAllocatedSequence(seq);
    versions_->SetLastPublishedSequence(seq);
    versions_->SetLastSequence(seq);
  }

  // NOT thread safe
  void RollTheLog(bool /*archived*/) {
    current_log_number_++;
    std::string fname = ArchivedLogFileName(dbname_, current_log_number_);
    const auto& fs = env_->GetFileSystem();
    std::unique_ptr<WritableFileWriter> file_writer;
    ASSERT_OK(WritableFileWriter::Create(fs, fname, env_options_, &file_writer,
                                         nullptr));
    current_log_writer_.reset(
        new log::Writer(std::move(file_writer), 0, false));
  }

  void CreateArchiveLogs(int num_logs, int entries_per_log) {
    for (int i = 1; i <= num_logs; ++i) {
      RollTheLog(true);
      for (int k = 0; k < entries_per_log; ++k) {
        Put(std::to_string(k), std::string(1024, 'a'));
      }
    }
  }

  std::unique_ptr<TransactionLogIterator> OpenTransactionLogIter(
      const SequenceNumber seq) {
    std::unique_ptr<TransactionLogIterator> iter;
    Status status = wal_manager_->GetUpdatesSince(
        seq, &iter, TransactionLogIterator::ReadOptions(), versions_.get());
    EXPECT_OK(status);
    return iter;
  }

  std::unique_ptr<MockEnv> env_;
  std::string dbname_;
  ImmutableDBOptions db_options_;
  WriteController write_controller_;
  EnvOptions env_options_;
  std::shared_ptr<Cache> table_cache_;
  WriteBufferManager write_buffer_manager_;
  std::unique_ptr<VersionSet> versions_;
  std::unique_ptr<WalManager> wal_manager_;
  std::unique_ptr<log::Writer> current_log_writer_;
  uint64_t current_log_number_;
};

TEST_F(WalManagerTest, ReadFirstRecordCache) {
  Init();
  std::string path = dbname_ + "/000001.log";
  std::unique_ptr<FSWritableFile> file;
  ASSERT_OK(env_->GetFileSystem()->NewWritableFile(path, FileOptions(), &file,
                                                   nullptr));

  SequenceNumber s;
  ASSERT_OK(wal_manager_->TEST_ReadFirstLine(path, 1 /* number */, &s));
  ASSERT_EQ(s, 0U);

  ASSERT_OK(
      wal_manager_->TEST_ReadFirstRecord(kAliveLogFile, 1 /* number */, &s));
  ASSERT_EQ(s, 0U);

  std::unique_ptr<WritableFileWriter> file_writer(
      new WritableFileWriter(std::move(file), path, FileOptions()));
  log::Writer writer(std::move(file_writer), 1,
                     db_options_.recycle_log_file_num > 0);
  WriteBatch batch;
  ASSERT_OK(batch.Put("foo", "bar"));
  WriteBatchInternal::SetSequence(&batch, 10);
  ASSERT_OK(
      writer.AddRecord(WriteOptions(), WriteBatchInternal::Contents(&batch)));

  // TODO(icanadi) move SpecialEnv outside of db_test, so we can reuse it here.
  // Waiting for lei to finish with db_test
  // env_->count_sequential_reads_ = true;
  // sequential_read_counter_ sanity test
  // ASSERT_EQ(env_->sequential_read_counter_.Read(), 0);

  ASSERT_OK(wal_manager_->TEST_ReadFirstRecord(kAliveLogFile, 1, &s));
  ASSERT_EQ(s, 10U);
  // did a read
  // TODO(icanadi) move SpecialEnv outside of db_test, so we can reuse it here
  // ASSERT_EQ(env_->sequential_read_counter_.Read(), 1);

  ASSERT_OK(wal_manager_->TEST_ReadFirstRecord(kAliveLogFile, 1, &s));
  ASSERT_EQ(s, 10U);
  // no new reads since the value is cached
  // TODO(icanadi) move SpecialEnv outside of db_test, so we can reuse it here
  // ASSERT_EQ(env_->sequential_read_counter_.Read(), 1);
}

namespace {
uint64_t GetLogDirSize(std::string dir_path, Env* env) {
  uint64_t dir_size = 0;
  std::vector<std::string> files;
  EXPECT_OK(env->GetChildren(dir_path, &files));
  for (auto& f : files) {
    uint64_t number;
    FileType type;
    if (ParseFileName(f, &number, &type) && type == kWalFile) {
      std::string const file_path = dir_path + "/" + f;
      uint64_t file_size;
      EXPECT_OK(env->GetFileSize(file_path, &file_size));
      dir_size += file_size;
    }
  }
  return dir_size;
}

std::vector<std::uint64_t> ListSpecificFiles(
    Env* env, const std::string& path, const FileType expected_file_type) {
  std::vector<std::string> files;
  std::vector<uint64_t> file_numbers;
  uint64_t number;
  FileType type;
  EXPECT_OK(env->GetChildren(path, &files));
  for (size_t i = 0; i < files.size(); ++i) {
    if (ParseFileName(files[i], &number, &type)) {
      if (type == expected_file_type) {
        file_numbers.push_back(number);
      }
    }
  }
  return file_numbers;
}

int CountRecords(TransactionLogIterator* iter) {
  int count = 0;
  SequenceNumber lastSequence = 0;
  BatchResult res;
  while (iter->Valid()) {
    res = iter->GetBatch();
    EXPECT_TRUE(res.sequence > lastSequence);
    ++count;
    lastSequence = res.sequence;
    EXPECT_OK(iter->status());
    iter->Next();
  }
  EXPECT_OK(iter->status());
  return count;
}
}  // anonymous namespace

TEST_F(WalManagerTest, WALArchivalSizeLimit) {
  db_options_.WAL_ttl_seconds = 0;
  db_options_.WAL_size_limit_MB = 1000;
  Init();

  // TEST : Create WalManager with huge size limit and no ttl.
  // Create some archived files and call PurgeObsoleteWALFiles().
  // Count the archived log files that survived.
  // Assert that all of them did.
  // Change size limit. Re-open WalManager.
  // Assert that archive is not greater than WAL_size_limit_MB after
  // PurgeObsoleteWALFiles()
  // Set ttl and time_to_check_ to small values. Re-open db.
  // Assert that there are no archived logs left.

  std::string archive_dir = ArchivalDirectory(dbname_);
  CreateArchiveLogs(20, 5000);

  std::vector<std::uint64_t> log_files =
      ListSpecificFiles(env_.get(), archive_dir, kWalFile);
  ASSERT_EQ(log_files.size(), 20U);

  db_options_.WAL_size_limit_MB = 8;
  Reopen();
  wal_manager_->PurgeObsoleteWALFiles();

  uint64_t archive_size = GetLogDirSize(archive_dir, env_.get());
  ASSERT_TRUE(archive_size <= db_options_.WAL_size_limit_MB * 1024 * 1024);

  db_options_.WAL_ttl_seconds = 1;
  env_->SleepForMicroseconds(2 * 1000 * 1000);
  Reopen();
  wal_manager_->PurgeObsoleteWALFiles();

  log_files = ListSpecificFiles(env_.get(), archive_dir, kWalFile);
  ASSERT_TRUE(log_files.empty());
}

TEST_F(WalManagerTest, WALArchivalTtl) {
  db_options_.WAL_ttl_seconds = 1000;
  Init();

  // TEST : Create WalManager with a ttl and no size limit.
  // Create some archived log files and call PurgeObsoleteWALFiles().
  // Assert that files are not deleted.
  // Reopen db with small ttl.
  // Assert that all archived logs were removed.

  std::string archive_dir = ArchivalDirectory(dbname_);
  CreateArchiveLogs(20, 5000);

  std::vector<uint64_t> log_files =
      ListSpecificFiles(env_.get(), archive_dir, kWalFile);
  ASSERT_GT(log_files.size(), 0U);

  db_options_.WAL_ttl_seconds = 1;
  env_->SleepForMicroseconds(3 * 1000 * 1000);
  Reopen();
  wal_manager_->PurgeObsoleteWALFiles();

  log_files = ListSpecificFiles(env_.get(), archive_dir, kWalFile);
  ASSERT_TRUE(log_files.empty());
}

TEST_F(WalManagerTest, TransactionLogIteratorMoveOverZeroFiles) {
  Init();
  RollTheLog(false);
  Put("key1", std::string(1024, 'a'));
  // Create a zero record WAL file.
  RollTheLog(false);
  RollTheLog(false);

  Put("key2", std::string(1024, 'a'));

  auto iter = OpenTransactionLogIter(0);
  ASSERT_EQ(2, CountRecords(iter.get()));
}

TEST_F(WalManagerTest, TransactionLogIteratorJustEmptyFile) {
  Init();
  RollTheLog(false);
  auto iter = OpenTransactionLogIter(0);
  // Check that an empty iterator is returned
  ASSERT_TRUE(!iter->Valid());
}

TEST_F(WalManagerTest, TransactionLogIteratorNewFileWhileScanning) {
  Init();
  CreateArchiveLogs(2, 100);
  auto iter = OpenTransactionLogIter(0);
  CreateArchiveLogs(1, 100);
  int i = 0;
  for (; iter->Valid(); iter->Next()) {
    i++;
  }
  ASSERT_EQ(i, 200);
  // A new log file was added after the iterator was created.
  // TryAgain indicates a new iterator is needed to fetch the new data
  ASSERT_TRUE(iter->status().IsTryAgain());

  iter = OpenTransactionLogIter(0);
  i = 0;
  for (; iter->Valid(); iter->Next()) {
    i++;
  }
  ASSERT_EQ(i, 300);
  ASSERT_TRUE(iter->status().ok());
}

}  // namespace ROCKSDB_NAMESPACE

int main(int argc, char** argv) {
  ROCKSDB_NAMESPACE::port::InstallStackTraceHandler();
  ::testing::InitGoogleTest(&argc, argv);
  return RUN_ALL_TESTS();
}