rocksdb/env/file_system.cc
Hui Xiao 06e593376c Group SST write in flush, compaction and db open with new stats (#11910)
Summary:
## Context/Summary
Similar to https://github.com/facebook/rocksdb/pull/11288 and https://github.com/facebook/rocksdb/pull/11444, categorizing SST/blob file writes according to different IO activities allows more insight into each activity.

For that, this PR does the following:
- Tag different write IOs by passing down and converting WriteOptions to IOOptions
- Add a new SST_WRITE_MICROS histogram in WritableFileWriter::Append() and a breakdown into FILE_WRITE_{FLUSH|COMPACTION|DB_OPEN}_MICROS (see the sketch after this list)
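
Conceptually, the tagging plumbing looks like the following self-contained sketch. The type layouts, enum values, and helper names here are simplified stand-ins for illustration, not the actual RocksDB definitions; the real change threads WriteOptions through the write path and records the histograms inside WritableFileWriter::Append().

```
#include <cstdint>
#include <iostream>

// Simplified stand-ins for the real RocksDB types (illustrative only).
enum class IOActivity { kFlush, kCompaction, kDBOpen, kUnknown };

struct WriteOptions {
  IOActivity io_activity = IOActivity::kUnknown;
};

struct IOOptions {
  IOActivity io_activity = IOActivity::kUnknown;
};

// The write path converts the caller's WriteOptions into the IOOptions that
// are handed down to the FileSystem, carrying the activity tag along.
IOOptions WriteOptionsToIOOptions(const WriteOptions& write_options) {
  IOOptions io_options;
  io_options.io_activity = write_options.io_activity;
  return io_options;
}

// On each Append(), elapsed time is recorded in the overall SST_WRITE_MICROS
// histogram and additionally in a per-activity breakdown histogram.
void RecordAppendMicros(const IOOptions& io_options, uint64_t elapsed_micros) {
  std::cout << "SST_WRITE_MICROS += " << elapsed_micros << "\n";
  switch (io_options.io_activity) {
    case IOActivity::kFlush:
      std::cout << "FILE_WRITE_FLUSH_MICROS += " << elapsed_micros << "\n";
      break;
    case IOActivity::kCompaction:
      std::cout << "FILE_WRITE_COMPACTION_MICROS += " << elapsed_micros << "\n";
      break;
    case IOActivity::kDBOpen:
      std::cout << "FILE_WRITE_DB_OPEN_MICROS += " << elapsed_micros << "\n";
      break;
    case IOActivity::kUnknown:
      break;
  }
}

int main() {
  WriteOptions write_options;
  write_options.io_activity = IOActivity::kFlush;  // e.g. set by the flush job
  RecordAppendMicros(WriteOptionsToIOOptions(write_options),
                     /*elapsed_micros=*/12);
  return 0;
}
```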

Some related code refactoring to make the implementation cleaner:
- Blob stats
   - Replace the high-level write measurement for BLOB_DB_BLOB_FILE_WRITE_MICROS with a low-level measurement in WritableFileWriter::Append(). This makes FILE_WRITE_{FLUSH|COMPACTION|DB_OPEN}_MICROS include blob file writes. As a consequence, it introduces some behavioral changes to that statistic; see HISTORY.md and the db_bench test plan below for more info.
   - Fix bugs where BLOB_DB_BLOB_FILE_SYNCED/BLOB_DB_BLOB_FILE_BYTES_WRITTEN included files that failed to sync and bytes that failed to write.
- Refactor the WriteOptions constructor for easier construction with io_activity and rate_limiter_priority (see the sketch after this list)
- Refactor DBImpl::~DBImpl()/BlobDBImpl::Close() to bypass thread op verification
- Build table
   - TableBuilderOptions now includes Read/WriteOptions so BuildTable() does not need to take these two parameters
   - Replace the io_priority passed into BuildTable() with TableBuilderOptions::WriteOptions::rate_limiter_priority; similarly for BlobFileBuilder. This parameter is used for dynamically changing file IO priority during flush; see https://github.com/facebook/rocksdb/pull/9988?fbclid=IwAR1DtKel6c-bRJAdesGo0jsbztRtciByNlvokbxkV6h_L-AE9MACzqRTT5s for more context.
   - Update ThreadStatus::FLUSH_BYTES_WRITTEN to use io_activity instead of io_priority to track flush IO in the flush job and DB open
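
A minimal sketch of what the constructor refactor and the TableBuilderOptions change enable, again with simplified, assumed declarations (the enum values, field names, and constructor signature here are illustrative, not the exact RocksDB ones):

```
#include <cstdint>

// Illustrative stand-ins; the real RocksDB declarations differ.
enum class IOActivity { kFlush, kCompaction, kDBOpen, kUnknown };
enum class IOPriority : uint8_t { kLow, kMid, kHigh, kTotal };

struct WriteOptions {
  IOActivity io_activity = IOActivity::kUnknown;
  IOPriority rate_limiter_priority = IOPriority::kTotal;

  WriteOptions() = default;
  // The refactor adds a constructor along these lines so callers can set the
  // IO activity and rate-limiter priority in a single expression.
  WriteOptions(IOPriority pri, IOActivity activity)
      : io_activity(activity), rate_limiter_priority(pri) {}
};

// Read/WriteOptions now travel inside TableBuilderOptions, so BuildTable()
// (and BlobFileBuilder) no longer take them as separate parameters.
struct TableBuilderOptions {
  WriteOptions write_options;
};

int main() {
  TableBuilderOptions table_builder_options;
  table_builder_options.write_options =
      WriteOptions(IOPriority::kHigh, IOActivity::kFlush);
  return 0;
}
```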

## Test
### db bench

Flush
```
./db_bench --statistics=1 --benchmarks=fillseq --num=100000 --write_buffer_size=100

rocksdb.sst.write.micros P50 : 1.830863 P95 : 4.094720 P99 : 6.578947 P100 : 26.000000 COUNT : 7875 SUM : 20377
rocksdb.file.write.flush.micros P50 : 1.830863 P95 : 4.094720 P99 : 6.578947 P100 : 26.000000 COUNT : 7875 SUM : 20377
rocksdb.file.write.compaction.micros P50 : 0.000000 P95 : 0.000000 P99 : 0.000000 P100 : 0.000000 COUNT : 0 SUM : 0
rocksdb.file.write.db.open.micros P50 : 0.000000 P95 : 0.000000 P99 : 0.000000 P100 : 0.000000 COUNT : 0 SUM : 0
```

Compaction, DB open
```
Setup: ./db_bench --statistics=1 --benchmarks=fillseq --num=10000 --disable_auto_compactions=1 -write_buffer_size=100 --db=../db_bench

Run:./db_bench --statistics=1 --benchmarks=compact  --db=../db_bench --use_existing_db=1

rocksdb.sst.write.micros P50 : 2.675325 P95 : 9.578788 P99 : 18.780000 P100 : 314.000000 COUNT : 638 SUM : 3279
rocksdb.file.write.flush.micros P50 : 0.000000 P95 : 0.000000 P99 : 0.000000 P100 : 0.000000 COUNT : 0 SUM : 0
rocksdb.file.write.compaction.micros P50 : 2.757353 P95 : 9.610687 P99 : 19.316667 P100 : 314.000000 COUNT : 615 SUM : 3213
rocksdb.file.write.db.open.micros P50 : 2.055556 P95 : 3.925000 P99 : 9.000000 P100 : 9.000000 COUNT : 23 SUM : 66
```

Blob stats (just to make sure they aren't broken by this PR)
```
Integrated Blob DB

Setup: ./db_bench --enable_blob_files=1 --statistics=1 --benchmarks=fillseq --num=10000 --disable_auto_compactions=1 -write_buffer_size=100 --db=../db_bench

Run:./db_bench --enable_blob_files=1 --statistics=1 --benchmarks=compact  --db=../db_bench --use_existing_db=1

pre-PR:
rocksdb.blobdb.blob.file.write.micros P50 : 7.298246 P95 : 9.771930 P99 : 9.991813 P100 : 16.000000 COUNT : 235 SUM : 1600
rocksdb.blobdb.blob.file.synced COUNT : 1
rocksdb.blobdb.blob.file.bytes.written COUNT : 34842

post-PR:
rocksdb.blobdb.blob.file.write.micros P50 : 2.000000 P95 : 2.829360 P99 : 2.993779 P100 : 9.000000 COUNT : 707 SUM : 1614
- COUNT is higher and values are smaller as the measurement now includes header and footer writes
- COUNT is ~3x higher because each Append() counts as one post-PR, while pre-PR 3 Append()s counted as one. See https://github.com/facebook/rocksdb/pull/11910/files#diff-32b811c0a1c000768cfb2532052b44dc0b3bf82253f3eab078e15ff201a0dabfL157-L164

rocksdb.blobdb.blob.file.synced COUNT : 1 (unchanged)
rocksdb.blobdb.blob.file.bytes.written COUNT : 34842 (unchanged)
```

```
Stacked Blob DB

Run: ./db_bench --use_blob_db=1 --statistics=1 --benchmarks=fillseq --num=10000 --disable_auto_compactions=1 -write_buffer_size=100 --db=../db_bench

pre-PR:
rocksdb.blobdb.blob.file.write.micros P50 : 12.808042 P95 : 19.674497 P99 : 28.539683 P100 : 51.000000 COUNT : 10000 SUM : 140876
rocksdb.blobdb.blob.file.synced COUNT : 8
rocksdb.blobdb.blob.file.bytes.written COUNT : 1043445

post-PR:
rocksdb.blobdb.blob.file.write.micros P50 : 1.657370 P95 : 2.952175 P99 : 3.877519 P100 : 24.000000 COUNT : 30001 SUM : 67924
- COUNT is higher and values are smaller as the measurement now includes header and footer writes
- COUNT is ~3x higher because each Append() counts as one post-PR, while pre-PR 3 Append()s counted as one. See https://github.com/facebook/rocksdb/pull/11910/files#diff-32b811c0a1c000768cfb2532052b44dc0b3bf82253f3eab078e15ff201a0dabfL157-L164

rocksdb.blobdb.blob.file.synced COUNT : 8 (unchanged)
rocksdb.blobdb.blob.file.bytes.written COUNT : 1043445 (unchanged)
```

###  Rehearsal CI stress test
Triggered 3 full runs of all our CI stress tests.

###  Performance

Flush
```
TEST_TMPDIR=/dev/shm ./db_basic_bench_pre_pr --benchmark_filter=ManualFlush/key_num:524288/per_key_size:256 --benchmark_repetitions=1000
-- default: 1 thread is used to run benchmark; enable_statistics = true

Pre-pr: avg 507515519.3 ns
497686074,499444327,500862543,501389862,502994471,503744435,504142123,504224056,505724198,506610393,506837742,506955122,507695561,507929036,508307733,508312691,508999120,509963561,510142147,510698091,510743096,510769317,510957074,511053311,511371367,511409911,511432960,511642385,511691964,511730908,

Post-pr: avg 511971266.5 ns, regressed 0.88%
502744835,506502498,507735420,507929724,508313335,509548582,509994942,510107257,510715603,511046955,511352639,511458478,512117521,512317380,512766303,512972652,513059586,513804934,513808980,514059409,514187369,514389494,514447762,514616464,514622882,514641763,514666265,514716377,514990179,515502408,
```

Compaction
```
TEST_TMPDIR=/dev/shm ./db_basic_bench_{pre|post}_pr --benchmark_filter=ManualCompaction/comp_style:0/max_data:134217728/per_key_size:256/enable_statistics:1  --benchmark_repetitions=1000
-- default: 1 thread is used to run benchmark

Pre-pr: avg 495346098.30 ns
492118301,493203526,494201411,494336607,495269217,495404950,496402598,497012157,497358370,498153846

Post-pr: avg 504528077.20 ns, regressed 1.85%. "ManualCompaction" includes flush, so the isolated regression for compaction should be around 1.85% - 0.88% = 0.97%.
502465338,502485945,502541789,502909283,503438601,504143885,506113087,506629423,507160414,507393007
```

Put with WAL (in case passing WriteOptions slows down this path even without collecting SST write stats)
```
TEST_TMPDIR=/dev/shm ./db_basic_bench_pre_pr --benchmark_filter=DBPut/comp_style:0/max_data:107374182400/per_key_size:256/enable_statistics:1/wal:1  --benchmark_repetitions=1000
-- default: 1 thread is used to run benchmark

Pre-pr: avg 3848.10 ns
3814,3838,3839,3848,3854,3854,3854,3860,3860,3860

Post-pr: avg 3874.20 ns, regressed 0.68%
3863,3867,3871,3874,3875,3877,3877,3877,3880,3881
```

Pull Request resolved: https://github.com/facebook/rocksdb/pull/11910

Reviewed By: ajkr

Differential Revision: D49788060

Pulled By: hx235

fbshipit-source-id: 79e73699cda5be3b66461687e5147c2484fc5eff
2023-12-29 15:29:23 -08:00

279 lines
9.4 KiB
C++

// Copyright (c) 2019-present, Facebook, Inc. All rights reserved.
// This source code is licensed under both the GPLv2 (found in the
// COPYING file in the root directory) and Apache 2.0 License
// (found in the LICENSE.Apache file in the root directory).
//
#include "rocksdb/file_system.h"
#include "env/composite_env_wrapper.h"
#include "env/env_chroot.h"
#include "env/env_encryption_ctr.h"
#include "env/fs_readonly.h"
#include "env/mock_env.h"
#include "logging/env_logger.h"
#include "options/db_options.h"
#include "rocksdb/convenience.h"
#include "rocksdb/utilities/customizable_util.h"
#include "rocksdb/utilities/object_registry.h"
#include "rocksdb/utilities/options_type.h"
#include "util/string_util.h"
#include "utilities/counted_fs.h"
#include "utilities/env_timed.h"
namespace ROCKSDB_NAMESPACE {
FileSystem::FileSystem() = default;
FileSystem::~FileSystem() = default;
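
// Registers factories for the built-in FileSystem implementations so that
// they can be instantiated by name through the ObjectLibrary (see
// FileSystem::CreateFromString() below).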
static int RegisterBuiltinFileSystems(ObjectLibrary& library,
                                      const std::string& /*arg*/) {
  library.AddFactory<FileSystem>(
      TimedFileSystem::kClassName(),
      [](const std::string& /*uri*/, std::unique_ptr<FileSystem>* guard,
         std::string* /* errmsg */) {
        guard->reset(new TimedFileSystem(nullptr));
        return guard->get();
      });
  library.AddFactory<FileSystem>(
      ReadOnlyFileSystem::kClassName(),
      [](const std::string& /*uri*/, std::unique_ptr<FileSystem>* guard,
         std::string* /* errmsg */) {
        guard->reset(new ReadOnlyFileSystem(nullptr));
        return guard->get();
      });
  library.AddFactory<FileSystem>(
      EncryptedFileSystem::kClassName(),
      [](const std::string& /*uri*/, std::unique_ptr<FileSystem>* guard,
         std::string* errmsg) {
        Status s = NewEncryptedFileSystemImpl(nullptr, nullptr, guard);
        if (!s.ok()) {
          *errmsg = s.ToString();
        }
        return guard->get();
      });
  library.AddFactory<FileSystem>(
      CountedFileSystem::kClassName(),
      [](const std::string& /*uri*/, std::unique_ptr<FileSystem>* guard,
         std::string* /*errmsg*/) {
        guard->reset(new CountedFileSystem(FileSystem::Default()));
        return guard->get();
      });
  library.AddFactory<FileSystem>(
      MockFileSystem::kClassName(),
      [](const std::string& /*uri*/, std::unique_ptr<FileSystem>* guard,
         std::string* /*errmsg*/) {
        guard->reset(new MockFileSystem(SystemClock::Default()));
        return guard->get();
      });
#ifndef OS_WIN
  library.AddFactory<FileSystem>(
      ChrootFileSystem::kClassName(),
      [](const std::string& /*uri*/, std::unique_ptr<FileSystem>* guard,
         std::string* /* errmsg */) {
        guard->reset(new ChrootFileSystem(nullptr, ""));
        return guard->get();
      });
#endif  // OS_WIN
  size_t num_types;
  return static_cast<int>(library.GetFactoryCount(&num_types));
}
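
// Creates a FileSystem from a name/URI string. Returns the default file
// system when `value` identifies it; otherwise registers the built-in file
// systems once and resolves `value` through the object registry.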
Status FileSystem::CreateFromString(const ConfigOptions& config_options,
                                    const std::string& value,
                                    std::shared_ptr<FileSystem>* result) {
  auto default_fs = FileSystem::Default();
  if (default_fs->IsInstanceOf(value)) {
    *result = default_fs;
    return Status::OK();
  } else {
    static std::once_flag once;
    std::call_once(once, [&]() {
      RegisterBuiltinFileSystems(*(ObjectLibrary::Default().get()), "");
    });
    return LoadSharedObject<FileSystem>(config_options, value, result);
  }
}

IOStatus FileSystem::ReuseWritableFile(const std::string& fname,
                                       const std::string& old_fname,
                                       const FileOptions& opts,
                                       std::unique_ptr<FSWritableFile>* result,
                                       IODebugContext* dbg) {
  IOStatus s = RenameFile(old_fname, fname, opts.io_options, dbg);
  if (!s.ok()) {
    return s;
  }
  return NewWritableFile(fname, opts, result, dbg);
}

IOStatus FileSystem::NewLogger(const std::string& fname,
                               const IOOptions& io_opts,
                               std::shared_ptr<Logger>* result,
                               IODebugContext* dbg) {
  FileOptions options;
  options.io_options = io_opts;
  // TODO: Tune the buffer size.
  options.writable_file_max_buffer_size = 1024 * 1024;
  std::unique_ptr<FSWritableFile> writable_file;
  const IOStatus status = NewWritableFile(fname, options, &writable_file, dbg);
  if (!status.ok()) {
    return status;
  }

  *result = std::make_shared<EnvLogger>(std::move(writable_file), fname,
                                        options, Env::Default());
  return IOStatus::OK();
}

FileOptions FileSystem::OptimizeForLogRead(
    const FileOptions& file_options) const {
  FileOptions optimized_file_options(file_options);
  optimized_file_options.use_direct_reads = false;
  return optimized_file_options;
}

FileOptions FileSystem::OptimizeForManifestRead(
    const FileOptions& file_options) const {
  FileOptions optimized_file_options(file_options);
  optimized_file_options.use_direct_reads = false;
  return optimized_file_options;
}

FileOptions FileSystem::OptimizeForLogWrite(
    const FileOptions& file_options, const DBOptions& db_options) const {
  FileOptions optimized_file_options(file_options);
  optimized_file_options.bytes_per_sync = db_options.wal_bytes_per_sync;
  optimized_file_options.writable_file_max_buffer_size =
      db_options.writable_file_max_buffer_size;
  return optimized_file_options;
}

FileOptions FileSystem::OptimizeForManifestWrite(
    const FileOptions& file_options) const {
  return file_options;
}

FileOptions FileSystem::OptimizeForCompactionTableWrite(
    const FileOptions& file_options,
    const ImmutableDBOptions& db_options) const {
  FileOptions optimized_file_options(file_options);
  optimized_file_options.use_direct_writes =
      db_options.use_direct_io_for_flush_and_compaction;
  return optimized_file_options;
}

FileOptions FileSystem::OptimizeForCompactionTableRead(
    const FileOptions& file_options,
    const ImmutableDBOptions& db_options) const {
  FileOptions optimized_file_options(file_options);
  optimized_file_options.use_direct_reads = db_options.use_direct_reads;
  return optimized_file_options;
}

FileOptions FileSystem::OptimizeForBlobFileRead(
    const FileOptions& file_options,
    const ImmutableDBOptions& db_options) const {
  FileOptions optimized_file_options(file_options);
  optimized_file_options.use_direct_reads = db_options.use_direct_reads;
  return optimized_file_options;
}
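
// Convenience helper: writes `data` to a newly created file `fname`,
// optionally syncing it, and deletes the file again if any step fails.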
IOStatus WriteStringToFile(FileSystem* fs, const Slice& data,
                           const std::string& fname, bool should_sync,
                           const IOOptions& io_options) {
  std::unique_ptr<FSWritableFile> file;
  EnvOptions soptions;
  IOStatus s = fs->NewWritableFile(fname, soptions, &file, nullptr);
  if (!s.ok()) {
    return s;
  }
  s = file->Append(data, io_options, nullptr);
  if (s.ok() && should_sync) {
    s = file->Sync(io_options, nullptr);
  }
  if (!s.ok()) {
    fs->DeleteFile(fname, io_options, nullptr);
  }
  return s;
}
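
// Convenience helper: reads the entire contents of `fname` into `*data`.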
IOStatus ReadFileToString(FileSystem* fs, const std::string& fname,
                          std::string* data) {
  FileOptions soptions;
  data->clear();
  std::unique_ptr<FSSequentialFile> file;
  IOStatus s = status_to_io_status(
      fs->NewSequentialFile(fname, soptions, &file, nullptr));
  if (!s.ok()) {
    return s;
  }
  static const int kBufferSize = 8192;
  char* space = new char[kBufferSize];
  while (true) {
    Slice fragment;
    s = file->Read(kBufferSize, IOOptions(), &fragment, space, nullptr);
    if (!s.ok()) {
      break;
    }
    data->append(fragment.data(), fragment.size());
    if (fragment.empty()) {
      break;
    }
  }
  delete[] space;
  return s;
}

namespace {
static std::unordered_map<std::string, OptionTypeInfo> fs_wrapper_type_info = {
    {"target",
     OptionTypeInfo::AsCustomSharedPtr<FileSystem>(
         0, OptionVerificationType::kByName, OptionTypeFlags::kDontSerialize)},
};
}  // namespace

FileSystemWrapper::FileSystemWrapper(const std::shared_ptr<FileSystem>& t)
    : target_(t) {
  RegisterOptions("", &target_, &fs_wrapper_type_info);
}

Status FileSystemWrapper::PrepareOptions(const ConfigOptions& options) {
  if (target_ == nullptr) {
    target_ = FileSystem::Default();
  }
  return FileSystem::PrepareOptions(options);
}

std::string FileSystemWrapper::SerializeOptions(
    const ConfigOptions& config_options, const std::string& header) const {
  auto parent = FileSystem::SerializeOptions(config_options, "");
  if (config_options.IsShallow() || target_ == nullptr ||
      target_->IsInstanceOf(FileSystem::kDefaultName())) {
    return parent;
  } else {
    std::string result = header;
    if (!StartsWith(parent, OptionTypeInfo::kIdPropName())) {
      result.append(OptionTypeInfo::kIdPropName()).append("=");
    }
    result.append(parent);
    if (!EndsWith(result, config_options.delimiter)) {
      result.append(config_options.delimiter);
    }
    result.append("target=").append(target_->ToString(config_options));
    return result;
  }
}

DirFsyncOptions::DirFsyncOptions() { reason = kDefault; }

DirFsyncOptions::DirFsyncOptions(std::string file_renamed_new_name) {
  reason = kFileRenamed;
  renamed_new_name = file_renamed_new_name;
}

DirFsyncOptions::DirFsyncOptions(FsyncReason fsync_reason) {
  assert(fsync_reason != kFileRenamed);
  reason = fsync_reason;
}

}  // namespace ROCKSDB_NAMESPACE