mirror of
https://github.com/facebook/rocksdb.git
synced 2024-11-25 22:44:05 +00:00
06e593376c
Summary:

## Context/Summary
Similar to https://github.com/facebook/rocksdb/pull/11288 and https://github.com/facebook/rocksdb/pull/11444, categorizing SST/blob file writes according to different IO activities allows more insight into the activity. For that, this PR does the following:
- Tag different write IOs by passing down and converting WriteOptions to IOOptions
- Add a new SST_WRITE_MICROS histogram in WritableFileWriter::Append() and the breakdown FILE_WRITE_{FLUSH|COMPACTION|DB_OPEN}_MICROS

Some related code refactoring to make the implementation cleaner:
- Blob stats
  - Replace the high-level write measurement with the low-level WritableFileWriter::Append() measurement for BLOB_DB_BLOB_FILE_WRITE_MICROS. This is to make FILE_WRITE_{FLUSH|COMPACTION|DB_OPEN}_MICROS include blob files. As a consequence, this introduces some behavioral changes on it; see HISTORY and the db bench test plan below for more info.
  - Fix bugs where BLOB_DB_BLOB_FILE_SYNCED/BLOB_DB_BLOB_FILE_BYTES_WRITTEN included files that failed to sync and bytes that failed to write.
- Refactor the WriteOptions constructor for easier construction with io_activity and rate_limiter_priority
- Refactor DBImpl::~DBImpl()/BlobDBImpl::Close() to bypass thread op verification
- Build table
  - TableBuilderOptions now includes Read/WriteOptions so BuildTable() does not need to take these two variables
  - Replace the io_priority passed into BuildTable() with TableBuilderOptions::WriteOptions::rate_limiter_priority. Similar for BlobFileBuilder. This parameter is used for dynamically changing file IO priority for flush; see https://github.com/facebook/rocksdb/pull/9988?fbclid=IwAR1DtKel6c-bRJAdesGo0jsbztRtciByNlvokbxkV6h_L-AE9MACzqRTT5s for more
- Update ThreadStatus::FLUSH_BYTES_WRITTEN to use io_activity to track flush IO in the flush job and db open instead of io_priority

## Test
### db bench

Flush
```
./db_bench --statistics=1 --benchmarks=fillseq --num=100000 --write_buffer_size=100

rocksdb.sst.write.micros P50 : 1.830863 P95 : 4.094720 P99 : 6.578947 P100 : 26.000000 COUNT : 7875 SUM : 20377
rocksdb.file.write.flush.micros P50 : 1.830863 P95 : 4.094720 P99 : 6.578947 P100 : 26.000000 COUNT : 7875 SUM : 20377
rocksdb.file.write.compaction.micros P50 : 0.000000 P95 : 0.000000 P99 : 0.000000 P100 : 0.000000 COUNT : 0 SUM : 0
rocksdb.file.write.db.open.micros P50 : 0.000000 P95 : 0.000000 P99 : 0.000000 P100 : 0.000000 COUNT : 0 SUM : 0
```

Compaction, db open
```
Setup: ./db_bench --statistics=1 --benchmarks=fillseq --num=10000 --disable_auto_compactions=1 -write_buffer_size=100 --db=../db_bench
Run:   ./db_bench --statistics=1 --benchmarks=compact --db=../db_bench --use_existing_db=1

rocksdb.sst.write.micros P50 : 2.675325 P95 : 9.578788 P99 : 18.780000 P100 : 314.000000 COUNT : 638 SUM : 3279
rocksdb.file.write.flush.micros P50 : 0.000000 P95 : 0.000000 P99 : 0.000000 P100 : 0.000000 COUNT : 0 SUM : 0
rocksdb.file.write.compaction.micros P50 : 2.757353 P95 : 9.610687 P99 : 19.316667 P100 : 314.000000 COUNT : 615 SUM : 3213
rocksdb.file.write.db.open.micros P50 : 2.055556 P95 : 3.925000 P99 : 9.000000 P100 : 9.000000 COUNT : 23 SUM : 66
```

Blob stats - just to make sure they aren't broken by this PR
```
Integrated Blob DB

Setup: ./db_bench --enable_blob_files=1 --statistics=1 --benchmarks=fillseq --num=10000 --disable_auto_compactions=1 -write_buffer_size=100 --db=../db_bench
Run:   ./db_bench --enable_blob_files=1 --statistics=1 --benchmarks=compact --db=../db_bench --use_existing_db=1

pre-PR:
rocksdb.blobdb.blob.file.write.micros P50 : 7.298246 P95 : 9.771930 P99 : 9.991813 P100 : 16.000000 COUNT : 235 SUM : 1600
rocksdb.blobdb.blob.file.synced COUNT : 1
rocksdb.blobdb.blob.file.bytes.written COUNT : 34842

post-PR:
rocksdb.blobdb.blob.file.write.micros P50 : 2.000000 P95 : 2.829360 P99 : 2.993779 P100 : 9.000000 COUNT : 707 SUM : 1614
- COUNT is higher and values are smaller as it includes header and footer writes
- COUNT is 3X higher due to each Append() counting as one post-PR, while pre-PR, 3 Append()s counted as one. See https://github.com/facebook/rocksdb/pull/11910/files#diff-32b811c0a1c000768cfb2532052b44dc0b3bf82253f3eab078e15ff201a0dabfL157-L164
rocksdb.blobdb.blob.file.synced COUNT : 1 (stays the same)
rocksdb.blobdb.blob.file.bytes.written COUNT : 34842 (stays the same)
```

```
Stacked Blob DB

Run: ./db_bench --use_blob_db=1 --statistics=1 --benchmarks=fillseq --num=10000 --disable_auto_compactions=1 -write_buffer_size=100 --db=../db_bench

pre-PR:
rocksdb.blobdb.blob.file.write.micros P50 : 12.808042 P95 : 19.674497 P99 : 28.539683 P100 : 51.000000 COUNT : 10000 SUM : 140876
rocksdb.blobdb.blob.file.synced COUNT : 8
rocksdb.blobdb.blob.file.bytes.written COUNT : 1043445

post-PR:
rocksdb.blobdb.blob.file.write.micros P50 : 1.657370 P95 : 2.952175 P99 : 3.877519 P100 : 24.000000 COUNT : 30001 SUM : 67924
- COUNT is higher and values are smaller as it includes header and footer writes
- COUNT is 3X higher due to each Append() counting as one post-PR, while pre-PR, 3 Append()s counted as one. See https://github.com/facebook/rocksdb/pull/11910/files#diff-32b811c0a1c000768cfb2532052b44dc0b3bf82253f3eab078e15ff201a0dabfL157-L164
rocksdb.blobdb.blob.file.synced COUNT : 8 (stays the same)
rocksdb.blobdb.blob.file.bytes.written COUNT : 1043445 (stays the same)
```

### Rehearsal CI stress test
Trigger 3 full runs of all our CI stress tests

### Performance

Flush
```
TEST_TMPDIR=/dev/shm ./db_basic_bench_pre_pr --benchmark_filter=ManualFlush/key_num:524288/per_key_size:256 --benchmark_repetitions=1000
-- default: 1 thread is used to run benchmark; enable_statistics = true

Pre-pr: avg 507515519.3 ns
497686074,499444327,500862543,501389862,502994471,503744435,504142123,504224056,505724198,506610393,506837742,506955122,507695561,507929036,508307733,508312691,508999120,509963561,510142147,510698091,510743096,510769317,510957074,511053311,511371367,511409911,511432960,511642385,511691964,511730908,

Post-pr: avg 511971266.5 ns, regressed 0.88%
502744835,506502498,507735420,507929724,508313335,509548582,509994942,510107257,510715603,511046955,511352639,511458478,512117521,512317380,512766303,512972652,513059586,513804934,513808980,514059409,514187369,514389494,514447762,514616464,514622882,514641763,514666265,514716377,514990179,515502408,
```

Compaction
```
TEST_TMPDIR=/dev/shm ./db_basic_bench_{pre|post}_pr --benchmark_filter=ManualCompaction/comp_style:0/max_data:134217728/per_key_size:256/enable_statistics:1 --benchmark_repetitions=1000
-- default: 1 thread is used to run benchmark

Pre-pr: avg 495346098.30 ns
492118301,493203526,494201411,494336607,495269217,495404950,496402598,497012157,497358370,498153846

Post-pr: avg 504528077.20, regressed 1.85%. "ManualCompaction" includes flush, so the isolated regression for compaction should be around 1.85 - 0.88 = 0.97%
502465338,502485945,502541789,502909283,503438601,504143885,506113087,506629423,507160414,507393007
```

Put with WAL (in case passing WriteOptions slows down this path even without collecting SST write stats)
```
TEST_TMPDIR=/dev/shm ./db_basic_bench_pre_pr --benchmark_filter=DBPut/comp_style:0/max_data:107374182400/per_key_size:256/enable_statistics:1/wal:1 --benchmark_repetitions=1000
-- default: 1 thread is used to run benchmark

Pre-pr: avg 3848.10 ns
3814,3838,3839,3848,3854,3854,3854,3860,3860,3860

Post-pr: avg 3874.20 ns, regressed 0.68%
3863,3867,3871,3874,3875,3877,3877,3877,3880,3881
```

Pull Request resolved: https://github.com/facebook/rocksdb/pull/11910

Reviewed By: ajkr

Differential Revision: D49788060

Pulled By: hx235

fbshipit-source-id: 79e73699cda5be3b66461687e5147c2484fc5eff
532 lines
21 KiB
C++
// Copyright (c) 2011-present, Facebook, Inc. All rights reserved.
// This source code is licensed under both the GPLv2 (found in the
// COPYING file in the root directory) and Apache 2.0 License
// (found in the LICENSE.Apache file in the root directory).

#include "table/block_fetcher.h"

#include "db/table_properties_collector.h"
#include "file/file_util.h"
#include "options/options_helper.h"
#include "port/port.h"
#include "port/stack_trace.h"
#include "rocksdb/db.h"
#include "rocksdb/file_system.h"
#include "table/block_based/binary_search_index_reader.h"
#include "table/block_based/block_based_table_builder.h"
#include "table/block_based/block_based_table_factory.h"
#include "table/block_based/block_based_table_reader.h"
#include "table/format.h"
#include "test_util/testharness.h"
#include "utilities/memory_allocators.h"

namespace ROCKSDB_NAMESPACE {
namespace {
struct MemcpyStats {
  int num_stack_buf_memcpy;
  int num_heap_buf_memcpy;
  int num_compressed_buf_memcpy;
};

struct BufAllocationStats {
  int num_heap_buf_allocations;
  int num_compressed_buf_allocations;
};

struct TestStats {
  MemcpyStats memcpy_stats;
  BufAllocationStats buf_allocation_stats;
};

class BlockFetcherTest : public testing::Test {
 public:
  enum class Mode {
    kBufferedRead = 0,
    kBufferedMmap,
    kDirectRead,
    kNumModes,
  };
  // use NumModes as array size to avoid "size of array '...' has non-integral
  // type" errors.
  const static int NumModes = static_cast<int>(Mode::kNumModes);

 protected:
  void SetUp() override {
    SetupSyncPointsToMockDirectIO();
    test_dir_ = test::PerThreadDBPath("block_fetcher_test");
    env_ = Env::Default();
    fs_ = FileSystem::Default();
    ASSERT_OK(fs_->CreateDir(test_dir_, IOOptions(), nullptr));
  }

  void TearDown() override { EXPECT_OK(DestroyDir(env_, test_dir_)); }

  void AssertSameBlock(const std::string& block1, const std::string& block2) {
    ASSERT_EQ(block1, block2);
  }

  // Creates a table with kv pairs (i, i) where i ranges from 0 to 8, inclusive.
  void CreateTable(const std::string& table_name,
                   const CompressionType& compression_type) {
    std::unique_ptr<WritableFileWriter> writer;
    NewFileWriter(table_name, &writer);

    // Create table builder.
    ImmutableOptions ioptions(options_);
    InternalKeyComparator comparator(options_.comparator);
    ColumnFamilyOptions cf_options(options_);
    MutableCFOptions moptions(cf_options);
    IntTblPropCollectorFactories factories;
    const ReadOptions read_options;
    const WriteOptions write_options;
    std::unique_ptr<TableBuilder> table_builder(table_factory_.NewTableBuilder(
        TableBuilderOptions(ioptions, moptions, read_options, write_options,
                            comparator, &factories, compression_type,
                            CompressionOptions(), 0 /* column_family_id */,
                            kDefaultColumnFamilyName, -1 /* level */),
        writer.get()));

    // Build table.
    for (int i = 0; i < 9; i++) {
      std::string key = ToInternalKey(std::to_string(i));
      // Append "00000000" to string value to enhance compression ratio
      std::string value = "00000000" + std::to_string(i);
      table_builder->Add(key, value);
    }
    ASSERT_OK(table_builder->Finish());
  }

  void FetchIndexBlock(const std::string& table_name,
                       CountedMemoryAllocator* heap_buf_allocator,
                       CountedMemoryAllocator* compressed_buf_allocator,
                       MemcpyStats* memcpy_stats, BlockContents* index_block,
                       std::string* result) {
    FileOptions fopt(options_);
    std::unique_ptr<RandomAccessFileReader> file;
    NewFileReader(table_name, fopt, &file);

    // Get handle of the index block.
    Footer footer;
    ReadFooter(file.get(), &footer);
    const BlockHandle& index_handle = footer.index_handle();
    // FIXME: index handle will need to come from metaindex for
    // format_version >= 6 when that becomes the default
    ASSERT_FALSE(index_handle.IsNull());

    CompressionType compression_type;
    FetchBlock(file.get(), index_handle, BlockType::kIndex,
               false /* compressed */, false /* do_uncompress */,
               heap_buf_allocator, compressed_buf_allocator, index_block,
               memcpy_stats, &compression_type);
    ASSERT_EQ(compression_type, CompressionType::kNoCompression);
    result->assign(index_block->data.ToString());
  }

  // Fetches the first data block in both direct IO and non-direct IO mode.
  //
  // compressed: whether the data blocks are compressed;
  // do_uncompress: whether the data blocks should be uncompressed on fetching.
  // compression_type: the expected compression type.
  //
  // Expects:
  //   Block contents are the same.
  //   Buffer allocation and memory copy statistics are as expected.
  void TestFetchDataBlock(
      const std::string& table_name_prefix, bool compressed,
      bool do_uncompress,
      std::array<TestStats, NumModes> expected_stats_by_mode) {
    for (CompressionType compression_type : GetSupportedCompressions()) {
      bool do_compress = compression_type != kNoCompression;
      if (compressed != do_compress) {
        continue;
      }
      std::string compression_type_str =
          CompressionTypeToString(compression_type);

      std::string table_name = table_name_prefix + compression_type_str;
      CreateTable(table_name, compression_type);

      CompressionType expected_compression_type_after_fetch =
          (compressed && !do_uncompress) ? compression_type : kNoCompression;

      BlockContents blocks[NumModes];
      std::string block_datas[NumModes];
      MemcpyStats memcpy_stats[NumModes];
      CountedMemoryAllocator heap_buf_allocators[NumModes];
      CountedMemoryAllocator compressed_buf_allocators[NumModes];
      for (int i = 0; i < NumModes; ++i) {
        SetMode(static_cast<Mode>(i));
        FetchFirstDataBlock(table_name, compressed, do_uncompress,
                            expected_compression_type_after_fetch,
                            &heap_buf_allocators[i],
                            &compressed_buf_allocators[i], &blocks[i],
                            &block_datas[i], &memcpy_stats[i]);
      }

      for (int i = 0; i < NumModes - 1; ++i) {
        AssertSameBlock(block_datas[i], block_datas[i + 1]);
      }

      // Check memcpy and buffer allocation statistics.
      for (int i = 0; i < NumModes; ++i) {
        const TestStats& expected_stats = expected_stats_by_mode[i];

        ASSERT_EQ(memcpy_stats[i].num_stack_buf_memcpy,
                  expected_stats.memcpy_stats.num_stack_buf_memcpy);
        ASSERT_EQ(memcpy_stats[i].num_heap_buf_memcpy,
                  expected_stats.memcpy_stats.num_heap_buf_memcpy);
        ASSERT_EQ(memcpy_stats[i].num_compressed_buf_memcpy,
                  expected_stats.memcpy_stats.num_compressed_buf_memcpy);

        if (kXpressCompression == compression_type) {
          // XPRESS allocates memory internally, so it does not support
          // custom allocator verification.
          continue;
        } else {
          ASSERT_EQ(
              heap_buf_allocators[i].GetNumAllocations(),
              expected_stats.buf_allocation_stats.num_heap_buf_allocations);
          ASSERT_EQ(compressed_buf_allocators[i].GetNumAllocations(),
                    expected_stats.buf_allocation_stats
                        .num_compressed_buf_allocations);

          // The allocated buffers are not deallocated until
          // the block content is deleted.
          ASSERT_EQ(heap_buf_allocators[i].GetNumDeallocations(), 0);
          ASSERT_EQ(compressed_buf_allocators[i].GetNumDeallocations(), 0);
          blocks[i].allocation.reset();
          ASSERT_EQ(
              heap_buf_allocators[i].GetNumDeallocations(),
              expected_stats.buf_allocation_stats.num_heap_buf_allocations);
          ASSERT_EQ(compressed_buf_allocators[i].GetNumDeallocations(),
                    expected_stats.buf_allocation_stats
                        .num_compressed_buf_allocations);
        }
      }
    }
  }

  void SetMode(Mode mode) {
    switch (mode) {
      case Mode::kBufferedRead:
        options_.use_direct_reads = false;
        options_.allow_mmap_reads = false;
        break;
      case Mode::kBufferedMmap:
        options_.use_direct_reads = false;
        options_.allow_mmap_reads = true;
        break;
      case Mode::kDirectRead:
        options_.use_direct_reads = true;
        options_.allow_mmap_reads = false;
        break;
      case Mode::kNumModes:
        assert(false);
    }
  }

 private:
  std::string test_dir_;
  Env* env_;
  std::shared_ptr<FileSystem> fs_;
  BlockBasedTableFactory table_factory_;
  Options options_;

  std::string Path(const std::string& fname) { return test_dir_ + "/" + fname; }

  void WriteToFile(const std::string& content, const std::string& filename) {
    std::unique_ptr<FSWritableFile> f;
    ASSERT_OK(fs_->NewWritableFile(Path(filename), FileOptions(), &f, nullptr));
    ASSERT_OK(f->Append(content, IOOptions(), nullptr));
    ASSERT_OK(f->Close(IOOptions(), nullptr));
  }

  void NewFileWriter(const std::string& filename,
                     std::unique_ptr<WritableFileWriter>* writer) {
    std::string path = Path(filename);
    FileOptions file_options;
    ASSERT_OK(WritableFileWriter::Create(env_->GetFileSystem(), path,
                                         file_options, writer, nullptr));
  }

  void NewFileReader(const std::string& filename, const FileOptions& opt,
                     std::unique_ptr<RandomAccessFileReader>* reader) {
    std::string path = Path(filename);
    std::unique_ptr<FSRandomAccessFile> f;
    ASSERT_OK(fs_->NewRandomAccessFile(path, opt, &f, nullptr));
    reader->reset(new RandomAccessFileReader(std::move(f), path,
                                             env_->GetSystemClock().get()));
  }

  void NewTableReader(const ImmutableOptions& ioptions,
                      const FileOptions& foptions,
                      const InternalKeyComparator& comparator,
                      const std::string& table_name,
                      std::unique_ptr<BlockBasedTable>* table) {
    std::unique_ptr<RandomAccessFileReader> file;
    NewFileReader(table_name, foptions, &file);

    uint64_t file_size = 0;
    ASSERT_OK(env_->GetFileSize(Path(table_name), &file_size));

    std::unique_ptr<TableReader> table_reader;
    ReadOptions ro;
    const auto* table_options =
        table_factory_.GetOptions<BlockBasedTableOptions>();
    ASSERT_NE(table_options, nullptr);
    ASSERT_OK(BlockBasedTable::Open(ro, ioptions, EnvOptions(), *table_options,
                                    comparator, std::move(file), file_size,
                                    0 /* block_protection_bytes_per_key */,
                                    &table_reader, 0 /* tail_size */));

    table->reset(reinterpret_cast<BlockBasedTable*>(table_reader.release()));
  }

  std::string ToInternalKey(const std::string& key) {
    InternalKey internal_key(key, 0, ValueType::kTypeValue);
    return internal_key.Encode().ToString();
  }

  void ReadFooter(RandomAccessFileReader* file, Footer* footer) {
    uint64_t file_size = 0;
    ASSERT_OK(env_->GetFileSize(file->file_name(), &file_size));
    IOOptions opts;
    ASSERT_OK(ReadFooterFromFile(opts, file, *fs_,
                                 nullptr /* prefetch_buffer */, file_size,
                                 footer, kBlockBasedTableMagicNumber));
  }

  // NOTE: compression_type returns the compression type of the fetched block
  // contents, so if the block is fetched and uncompressed, then it's
  // kNoCompression.
  void FetchBlock(RandomAccessFileReader* file, const BlockHandle& block,
                  BlockType block_type, bool compressed, bool do_uncompress,
                  MemoryAllocator* heap_buf_allocator,
                  MemoryAllocator* compressed_buf_allocator,
                  BlockContents* contents, MemcpyStats* stats,
                  CompressionType* compression_type) {
    ImmutableOptions ioptions(options_);
    ReadOptions roptions;
    PersistentCacheOptions persistent_cache_options;
    Footer footer;
    ReadFooter(file, &footer);
    std::unique_ptr<BlockFetcher> fetcher(new BlockFetcher(
        file, nullptr /* prefetch_buffer */, footer, roptions, block, contents,
        ioptions, do_uncompress, compressed, block_type,
        UncompressionDict::GetEmptyDict(), persistent_cache_options,
        heap_buf_allocator, compressed_buf_allocator));

    ASSERT_OK(fetcher->ReadBlockContents());

    stats->num_stack_buf_memcpy = fetcher->TEST_GetNumStackBufMemcpy();
    stats->num_heap_buf_memcpy = fetcher->TEST_GetNumHeapBufMemcpy();
    stats->num_compressed_buf_memcpy =
        fetcher->TEST_GetNumCompressedBufMemcpy();

    if (do_uncompress) {
      *compression_type = kNoCompression;
    } else {
      *compression_type = fetcher->get_compression_type();
    }
  }

  // NOTE: expected_compression_type is the expected compression
  // type of the fetched block content, if the block is uncompressed,
  // then the expected compression type is kNoCompression.
  void FetchFirstDataBlock(const std::string& table_name, bool compressed,
                           bool do_uncompress,
                           CompressionType expected_compression_type,
                           MemoryAllocator* heap_buf_allocator,
                           MemoryAllocator* compressed_buf_allocator,
                           BlockContents* block, std::string* result,
                           MemcpyStats* memcpy_stats) {
    ImmutableOptions ioptions(options_);
    InternalKeyComparator comparator(options_.comparator);
    FileOptions foptions(options_);

    // Get block handle for the first data block.
    std::unique_ptr<BlockBasedTable> table;
    NewTableReader(ioptions, foptions, comparator, table_name, &table);

    std::unique_ptr<BlockBasedTable::IndexReader> index_reader;
    ReadOptions ro;
    ASSERT_OK(BinarySearchIndexReader::Create(
        table.get(), ro, nullptr /* prefetch_buffer */, false /* use_cache */,
        false /* prefetch */, false /* pin */, nullptr /* lookup_context */,
        &index_reader));

    std::unique_ptr<InternalIteratorBase<IndexValue>> iter(
        index_reader->NewIterator(
            ReadOptions(), false /* disable_prefix_seek */, nullptr /* iter */,
            nullptr /* get_context */, nullptr /* lookup_context */));
    ASSERT_OK(iter->status());
    iter->SeekToFirst();
    BlockHandle first_block_handle = iter->value().handle;

    // Fetch first data block.
    std::unique_ptr<RandomAccessFileReader> file;
    NewFileReader(table_name, foptions, &file);
    CompressionType compression_type;
    FetchBlock(file.get(), first_block_handle, BlockType::kData, compressed,
               do_uncompress, heap_buf_allocator, compressed_buf_allocator,
               block, memcpy_stats, &compression_type);
    ASSERT_EQ(compression_type, expected_compression_type);
    result->assign(block->data.ToString());
  }
};

// Skip the following tests in lite mode since direct I/O is unsupported.

// Fetch index block under both direct IO and non-direct IO.
// Expects:
//   the index block contents are the same for both read modes.
TEST_F(BlockFetcherTest, FetchIndexBlock) {
  for (CompressionType compression : GetSupportedCompressions()) {
    std::string table_name =
        "FetchIndexBlock" + CompressionTypeToString(compression);
    CreateTable(table_name, compression);

    CountedMemoryAllocator allocator;
    MemcpyStats memcpy_stats;
    BlockContents indexes[NumModes];
    std::string index_datas[NumModes];
    for (int i = 0; i < NumModes; ++i) {
      SetMode(static_cast<Mode>(i));
      FetchIndexBlock(table_name, &allocator, &allocator, &memcpy_stats,
                      &indexes[i], &index_datas[i]);
    }
    for (int i = 0; i < NumModes - 1; ++i) {
      AssertSameBlock(index_datas[i], index_datas[i + 1]);
    }
  }
}

// Data blocks are not compressed,
// fetch data block under direct IO, mmap IO, and non-direct IO.
// Expects:
// 1. in non-direct IO mode, allocate a heap buffer and memcpy the block
//    into the buffer;
// 2. in direct IO mode, allocate a heap buffer and memcpy from the
//    direct IO buffer to the heap buffer.
TEST_F(BlockFetcherTest, FetchUncompressedDataBlock) {
  TestStats expected_non_mmap_stats = {
      {
          0 /* num_stack_buf_memcpy */,
          1 /* num_heap_buf_memcpy */,
          0 /* num_compressed_buf_memcpy */,
      },
      {
          1 /* num_heap_buf_allocations */,
          0 /* num_compressed_buf_allocations */,
      }};
  TestStats expected_mmap_stats = {
      {
          0 /* num_stack_buf_memcpy */,
          0 /* num_heap_buf_memcpy */,
          0 /* num_compressed_buf_memcpy */,
      },
      {
          0 /* num_heap_buf_allocations */,
          0 /* num_compressed_buf_allocations */,
      }};
  std::array<TestStats, NumModes> expected_stats_by_mode{{
      expected_non_mmap_stats /* kBufferedRead */,
      expected_mmap_stats /* kBufferedMmap */,
      expected_non_mmap_stats /* kDirectRead */,
  }};
  TestFetchDataBlock("FetchUncompressedDataBlock", false, false,
                     expected_stats_by_mode);
}

// Data blocks are compressed,
// fetch data block under both direct IO and non-direct IO,
// but do not uncompress.
// Expects:
// 1. in non-direct IO mode, allocate a compressed buffer and memcpy the block
//    into the buffer;
// 2. in direct IO mode, allocate a compressed buffer and memcpy from the
//    direct IO buffer to the compressed buffer.
TEST_F(BlockFetcherTest, FetchCompressedDataBlock) {
  TestStats expected_non_mmap_stats = {
      {
          0 /* num_stack_buf_memcpy */,
          0 /* num_heap_buf_memcpy */,
          1 /* num_compressed_buf_memcpy */,
      },
      {
          0 /* num_heap_buf_allocations */,
          1 /* num_compressed_buf_allocations */,
      }};
  TestStats expected_mmap_stats = {
      {
          0 /* num_stack_buf_memcpy */,
          0 /* num_heap_buf_memcpy */,
          0 /* num_compressed_buf_memcpy */,
      },
      {
          0 /* num_heap_buf_allocations */,
          0 /* num_compressed_buf_allocations */,
      }};
  std::array<TestStats, NumModes> expected_stats_by_mode{{
      expected_non_mmap_stats /* kBufferedRead */,
      expected_mmap_stats /* kBufferedMmap */,
      expected_non_mmap_stats /* kDirectRead */,
  }};
  TestFetchDataBlock("FetchCompressedDataBlock", true, false,
                     expected_stats_by_mode);
}

// Data blocks are compressed,
// fetch and uncompress data block under both direct IO and non-direct IO.
// Expects:
// 1. in non-direct IO mode, since the block is small, it's first memcpyed
//    to the stack buffer, then a heap buffer is allocated and the block is
//    uncompressed into the heap.
// 2. in direct IO mode, allocate a heap buffer, then directly uncompress
//    and memcpy from the direct IO buffer to the heap buffer.
TEST_F(BlockFetcherTest, FetchAndUncompressCompressedDataBlock) {
  TestStats expected_buffered_read_stats = {
      {
          1 /* num_stack_buf_memcpy */,
          1 /* num_heap_buf_memcpy */,
          0 /* num_compressed_buf_memcpy */,
      },
      {
          1 /* num_heap_buf_allocations */,
          0 /* num_compressed_buf_allocations */,
      }};
  TestStats expected_mmap_stats = {
      {
          0 /* num_stack_buf_memcpy */,
          1 /* num_heap_buf_memcpy */,
          0 /* num_compressed_buf_memcpy */,
      },
      {
          1 /* num_heap_buf_allocations */,
          0 /* num_compressed_buf_allocations */,
      }};
  TestStats expected_direct_read_stats = {
      {
          0 /* num_stack_buf_memcpy */,
          1 /* num_heap_buf_memcpy */,
          0 /* num_compressed_buf_memcpy */,
      },
      {
          1 /* num_heap_buf_allocations */,
          0 /* num_compressed_buf_allocations */,
      }};
  std::array<TestStats, NumModes> expected_stats_by_mode{{
      expected_buffered_read_stats,
      expected_mmap_stats,
      expected_direct_read_stats,
  }};
  TestFetchDataBlock("FetchAndUncompressCompressedDataBlock", true, true,
                     expected_stats_by_mode);
}

}  // namespace
}  // namespace ROCKSDB_NAMESPACE

int main(int argc, char** argv) {
  ROCKSDB_NAMESPACE::port::InstallStackTraceHandler();
  ::testing::InitGoogleTest(&argc, argv);
  return RUN_ALL_TESTS();
}