rocksdb/table/cuckoo/cuckoo_table_reader_test.cc
Hui Xiao 06e593376c Group SST write in flush, compaction and db open with new stats (#11910)
Summary:
## Context/Summary
Similar to https://github.com/facebook/rocksdb/pull/11288 and https://github.com/facebook/rocksdb/pull/11444, categorizing SST/blob file writes according to their IO activity gives more insight into each activity.

For that, this PR does the following:
- Tag different write IOs by passing WriteOptions down and converting them to IOOptions
- Add a new SST_WRITE_MICROS histogram in WritableFileWriter::Append() and break it down into FILE_WRITE_{FLUSH|COMPACTION|DB_OPEN}_MICROS (see the sketch below)
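
A minimal sketch of that breakdown, assuming a free-standing helper invoked from WritableFileWriter::Append() once the write's elapsed time is known; the helper and its wiring are illustrative only, while Env::IOActivity and the histogram names are the ones this PR adds:
```
#include "rocksdb/env.h"
#include "rocksdb/statistics.h"

using namespace ROCKSDB_NAMESPACE;

// Illustrative helper (not the PR's actual code): record the overall SST
// write latency, then break it down by the IO activity tag that the caller
// (flush job, compaction job, or DB open) passed down via WriteOptions ->
// IOOptions.
void RecordSstWrite(Statistics* stats, Env::IOActivity activity,
                    uint64_t elapsed_micros) {
  if (stats == nullptr) {
    return;
  }
  stats->recordInHistogram(SST_WRITE_MICROS, elapsed_micros);
  switch (activity) {
    case Env::IOActivity::kFlush:
      stats->recordInHistogram(FILE_WRITE_FLUSH_MICROS, elapsed_micros);
      break;
    case Env::IOActivity::kCompaction:
      stats->recordInHistogram(FILE_WRITE_COMPACTION_MICROS, elapsed_micros);
      break;
    case Env::IOActivity::kDBOpen:
      stats->recordInHistogram(FILE_WRITE_DB_OPEN_MICROS, elapsed_micros);
      break;
    default:
      break;  // other activities are not broken out by these histograms
  }
}
```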

Some related code refactoring to make the implementation cleaner:
- Blob stats
   - Replace the high-level write measurement with a low-level WritableFileWriter::Append() measurement for BLOB_DB_BLOB_FILE_WRITE_MICROS. This makes FILE_WRITE_{FLUSH|COMPACTION|DB_OPEN}_MICROS include blob file writes. As a consequence, this introduces some behavioral changes; see HISTORY and the db bench test plan below for more info.
   - Fix bugs where BLOB_DB_BLOB_FILE_SYNCED/BLOB_DB_BLOB_FILE_BYTES_WRITTEN included files that failed to sync and bytes that failed to write.
- Refactor the WriteOptions constructor for easier construction with io_activity and rate_limiter_priority (see the sketch after this list)
- Refactor DBImpl::~DBImpl()/BlobDBImpl::Close() to bypass thread op verification
- Build table
   - TableBuilderOptions now includes Read/WriteOptions so BuildTable() does not need to take these two parameters
   - Replace the io_priority passed into BuildTable() with TableBuilderOptions::WriteOptions::rate_limiter_priority. Similar for BlobFileBuilder.
This parameter is used for dynamically changing the file IO priority for flush; see https://github.com/facebook/rocksdb/pull/9988?fbclid=IwAR1DtKel6c-bRJAdesGo0jsbztRtciByNlvokbxkV6h_L-AE9MACzqRTT5s for more context.
   - Update ThreadStatus::FLUSH_BYTES_WRITTEN to use io_activity to track flush IO in flush job and db open instead of io_priority
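
As a rough illustration of the construction this enables (a sketch only; the exact constructor signatures are whatever rocksdb/options.h defines after this PR):
```
// Sketch: a WriteOptions pre-tagged with rate limiter priority and IO
// activity, so both tags flow down into the IOOptions used by
// WritableFileWriter::Append() for every file write of the operation.
// The two-argument form is assumed here; check rocksdb/options.h.
WriteOptions flush_write_options(Env::IO_HIGH /* rate_limiter_priority */,
                                 Env::IOActivity::kFlush);
```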

## Test
### db bench

Flush
```
./db_bench --statistics=1 --benchmarks=fillseq --num=100000 --write_buffer_size=100

rocksdb.sst.write.micros P50 : 1.830863 P95 : 4.094720 P99 : 6.578947 P100 : 26.000000 COUNT : 7875 SUM : 20377
rocksdb.file.write.flush.micros P50 : 1.830863 P95 : 4.094720 P99 : 6.578947 P100 : 26.000000 COUNT : 7875 SUM : 20377
rocksdb.file.write.compaction.micros P50 : 0.000000 P95 : 0.000000 P99 : 0.000000 P100 : 0.000000 COUNT : 0 SUM : 0
rocksdb.file.write.db.open.micros P50 : 0.000000 P95 : 0.000000 P99 : 0.000000 P100 : 0.000000 COUNT : 0 SUM : 0
```

Compaction, DB open
```
Setup: ./db_bench --statistics=1 --benchmarks=fillseq --num=10000 --disable_auto_compactions=1 -write_buffer_size=100 --db=../db_bench

Run:./db_bench --statistics=1 --benchmarks=compact  --db=../db_bench --use_existing_db=1

rocksdb.sst.write.micros P50 : 2.675325 P95 : 9.578788 P99 : 18.780000 P100 : 314.000000 COUNT : 638 SUM : 3279
rocksdb.file.write.flush.micros P50 : 0.000000 P95 : 0.000000 P99 : 0.000000 P100 : 0.000000 COUNT : 0 SUM : 0
rocksdb.file.write.compaction.micros P50 : 2.757353 P95 : 9.610687 P99 : 19.316667 P100 : 314.000000 COUNT : 615 SUM : 3213
rocksdb.file.write.db.open.micros P50 : 2.055556 P95 : 3.925000 P99 : 9.000000 P100 : 9.000000 COUNT : 23 SUM : 66
```

Blob stats (just to make sure they aren't broken by this PR)
```
Integrated Blob DB

Setup: ./db_bench --enable_blob_files=1 --statistics=1 --benchmarks=fillseq --num=10000 --disable_auto_compactions=1 -write_buffer_size=100 --db=../db_bench

Run:./db_bench --enable_blob_files=1 --statistics=1 --benchmarks=compact  --db=../db_bench --use_existing_db=1

pre-PR:
rocksdb.blobdb.blob.file.write.micros P50 : 7.298246 P95 : 9.771930 P99 : 9.991813 P100 : 16.000000 COUNT : 235 SUM : 1600
rocksdb.blobdb.blob.file.synced COUNT : 1
rocksdb.blobdb.blob.file.bytes.written COUNT : 34842

post-PR:
rocksdb.blobdb.blob.file.write.micros P50 : 2.000000 P95 : 2.829360 P99 : 2.993779 P100 : 9.000000 COUNT : 707 SUM : 1614
- COUNT is higher and the values are smaller because the measurement now includes header and footer writes
- COUNT is 3X higher because post-PR each Append() counts as one, while pre-PR three Append()s counted as one. See https://github.com/facebook/rocksdb/pull/11910/files#diff-32b811c0a1c000768cfb2532052b44dc0b3bf82253f3eab078e15ff201a0dabfL157-L164

rocksdb.blobdb.blob.file.synced COUNT : 1 (stay the same)
rocksdb.blobdb.blob.file.bytes.written COUNT : 34842 (stay the same)
```

```
Stacked Blob DB

Run: ./db_bench --use_blob_db=1 --statistics=1 --benchmarks=fillseq --num=10000 --disable_auto_compactions=1 -write_buffer_size=100 --db=../db_bench

pre-PR:
rocksdb.blobdb.blob.file.write.micros P50 : 12.808042 P95 : 19.674497 P99 : 28.539683 P100 : 51.000000 COUNT : 10000 SUM : 140876
rocksdb.blobdb.blob.file.synced COUNT : 8
rocksdb.blobdb.blob.file.bytes.written COUNT : 1043445

post-PR:
rocksdb.blobdb.blob.file.write.micros P50 : 1.657370 P95 : 2.952175 P99 : 3.877519 P100 : 24.000000 COUNT : 30001 SUM : 67924
- COUNT is higher and the values are smaller because the measurement now includes header and footer writes
- COUNT is 3X higher because post-PR each Append() counts as one, while pre-PR three Append()s counted as one. See https://github.com/facebook/rocksdb/pull/11910/files#diff-32b811c0a1c000768cfb2532052b44dc0b3bf82253f3eab078e15ff201a0dabfL157-L164

rocksdb.blobdb.blob.file.synced COUNT : 8 (stay the same)
rocksdb.blobdb.blob.file.bytes.written COUNT : 1043445 (stay the same)
```

### Rehearsal CI stress test
Triggered 3 full runs of all our CI stress tests

### Performance

Flush
```
TEST_TMPDIR=/dev/shm ./db_basic_bench_pre_pr --benchmark_filter=ManualFlush/key_num:524288/per_key_size:256 --benchmark_repetitions=1000
-- default: 1 thread is used to run benchmark; enable_statistics = true

Pre-pr: avg 507515519.3 ns
497686074,499444327,500862543,501389862,502994471,503744435,504142123,504224056,505724198,506610393,506837742,506955122,507695561,507929036,508307733,508312691,508999120,509963561,510142147,510698091,510743096,510769317,510957074,511053311,511371367,511409911,511432960,511642385,511691964,511730908,

Post-pr: avg 511971266.5 ns, regressed 0.88%
502744835,506502498,507735420,507929724,508313335,509548582,509994942,510107257,510715603,511046955,511352639,511458478,512117521,512317380,512766303,512972652,513059586,513804934,513808980,514059409,514187369,514389494,514447762,514616464,514622882,514641763,514666265,514716377,514990179,515502408,
```

Compaction
```
TEST_TMPDIR=/dev/shm ./db_basic_bench_{pre|post}_pr --benchmark_filter=ManualCompaction/comp_style:0/max_data:134217728/per_key_size:256/enable_statistics:1  --benchmark_repetitions=1000
-- default: 1 thread is used to run benchmark

Pre-pr: avg 495346098.30 ns
492118301,493203526,494201411,494336607,495269217,495404950,496402598,497012157,497358370,498153846

Post-pr: avg 504528077.20 ns, regressed 1.85%. "ManualCompaction" includes flush, so the isolated regression for compaction should be around 1.85% - 0.88% = 0.97%
502465338,502485945,502541789,502909283,503438601,504143885,506113087,506629423,507160414,507393007
```

Put with WAL (in case passing WriteOptions slows down this path even without collecting SST write stats)
```
TEST_TMPDIR=/dev/shm ./db_basic_bench_pre_pr --benchmark_filter=DBPut/comp_style:0/max_data:107374182400/per_key_size:256/enable_statistics:1/wal:1  --benchmark_repetitions=1000
-- default: 1 thread is used to run benchmark

Pre-pr: avg 3848.10 ns
3814,3838,3839,3848,3854,3854,3854,3860,3860,3860

Post-pr: avg 3874.20 ns, regressed 0.68%
3863,3867,3871,3874,3875,3877,3877,3877,3880,3881
```

Pull Request resolved: https://github.com/facebook/rocksdb/pull/11910

Reviewed By: ajkr

Differential Revision: D49788060

Pulled By: hx235

fbshipit-source-id: 79e73699cda5be3b66461687e5147c2484fc5eff
2023-12-29 15:29:23 -08:00


// Copyright (c) 2011-present, Facebook, Inc. All rights reserved.
// This source code is licensed under both the GPLv2 (found in the
// COPYING file in the root directory) and Apache 2.0 License
// (found in the LICENSE.Apache file in the root directory).
#ifndef GFLAGS
#include <cstdio>
int main() {
  fprintf(stderr, "Please install gflags to run this test... Skipping...\n");
  return 0;
}
#else
#include <cinttypes>
#include <map>
#include <string>
#include <vector>
#include "memory/arena.h"
#include "rocksdb/db.h"
#include "table/cuckoo/cuckoo_table_builder.h"
#include "table/cuckoo/cuckoo_table_factory.h"
#include "table/cuckoo/cuckoo_table_reader.h"
#include "table/get_context.h"
#include "table/meta_blocks.h"
#include "test_util/testharness.h"
#include "test_util/testutil.h"
#include "util/gflags_compat.h"
#include "util/random.h"
#include "util/string_util.h"
using GFLAGS_NAMESPACE::ParseCommandLineFlags;
DEFINE_string(file_dir, "",
              "Directory where the files will be created"
              " for benchmark. Added for using tmpfs.");
DEFINE_bool(enable_perf, false, "Run Benchmark Tests too.");
DEFINE_bool(write, false,
            "Should write new values to file in performance tests?");
DEFINE_bool(identity_as_first_hash, true, "use identity as first hash");
namespace ROCKSDB_NAMESPACE {
namespace {
const uint32_t kNumHashFunc = 10;
// Methods, variables related to Hash functions.
std::unordered_map<std::string, std::vector<uint64_t> > hash_map;
void AddHashLookups(const std::string& s, uint64_t bucket_id,
                    uint32_t num_hash_fun) {
  std::vector<uint64_t> v;
  for (uint32_t i = 0; i < num_hash_fun; i++) {
    v.push_back(bucket_id + i);
  }
  hash_map[s] = v;
}
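// Test double for the table's hash function: ignores max_num_buckets and
// returns the bucket ids precomputed in hash_map, so each test can control
// exactly which keys collide.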
uint64_t GetSliceHash(const Slice& s, uint32_t index,
                      uint64_t /*max_num_buckets*/) {
  return hash_map[s.ToString()][index];
}
} // namespace
class CuckooReaderTest : public testing::Test {
 public:
  using testing::Test::SetUp;

  CuckooReaderTest() {
    options.allow_mmap_reads = true;
    env = options.env;
    file_options = FileOptions(options);
  }

  void SetUp(int num) {
    num_items = num;
    hash_map.clear();
    keys.clear();
    keys.resize(num_items);
    user_keys.clear();
    user_keys.resize(num_items);
    values.clear();
    values.resize(num_items);
  }

  std::string NumToStr(int64_t i) {
    return std::string(reinterpret_cast<char*>(&i), sizeof(i));
  }

  void CreateCuckooFileAndCheckReader(
      const Comparator* ucomp = BytewiseComparator()) {
    std::unique_ptr<WritableFileWriter> file_writer;
    ASSERT_OK(WritableFileWriter::Create(env->GetFileSystem(), fname,
                                         file_options, &file_writer, nullptr));
    CuckooTableBuilder builder(
        file_writer.get(), 0.9, kNumHashFunc, 100, ucomp, 2, false, false,
        GetSliceHash, 0 /* column_family_id */, kDefaultColumnFamilyName);
    ASSERT_OK(builder.status());
    for (uint32_t key_idx = 0; key_idx < num_items; ++key_idx) {
      builder.Add(Slice(keys[key_idx]), Slice(values[key_idx]));
      ASSERT_OK(builder.status());
      ASSERT_EQ(builder.NumEntries(), key_idx + 1);
    }
    ASSERT_OK(builder.Finish());
    ASSERT_EQ(num_items, builder.NumEntries());
    file_size = builder.FileSize();
    ASSERT_OK(file_writer->Close(IOOptions()));

    // Check reader now.
    std::unique_ptr<RandomAccessFileReader> file_reader;
    ASSERT_OK(RandomAccessFileReader::Create(
        env->GetFileSystem(), fname, file_options, &file_reader, nullptr));
    const ImmutableOptions ioptions(options);
    CuckooTableReader reader(ioptions, std::move(file_reader), file_size, ucomp,
                             GetSliceHash);
    ASSERT_OK(reader.status());
    // Assume no merge/deletion
    for (uint32_t i = 0; i < num_items; ++i) {
      PinnableSlice value;
      GetContext get_context(ucomp, nullptr, nullptr, nullptr,
                             GetContext::kNotFound, Slice(user_keys[i]), &value,
                             nullptr, nullptr, nullptr, nullptr, true, nullptr,
                             nullptr);
      ASSERT_OK(
          reader.Get(ReadOptions(), Slice(keys[i]), &get_context, nullptr));
      ASSERT_STREQ(values[i].c_str(), value.data());
    }
  }

  void UpdateKeys(bool with_zero_seqno) {
    for (uint32_t i = 0; i < num_items; i++) {
      ParsedInternalKey ikey(user_keys[i], with_zero_seqno ? 0 : i + 1000,
                             kTypeValue);
      keys[i].clear();
      AppendInternalKey(&keys[i], ikey);
    }
  }

  void CheckIterator(const Comparator* ucomp = BytewiseComparator()) {
    std::unique_ptr<RandomAccessFileReader> file_reader;
    ASSERT_OK(RandomAccessFileReader::Create(
        env->GetFileSystem(), fname, file_options, &file_reader, nullptr));
    const ImmutableOptions ioptions(options);
    CuckooTableReader reader(ioptions, std::move(file_reader), file_size, ucomp,
                             GetSliceHash);
    ASSERT_OK(reader.status());
    InternalIterator* it = reader.NewIterator(
        ReadOptions(), /*prefix_extractor=*/nullptr, /*arena=*/nullptr,
        /*skip_filters=*/false, TableReaderCaller::kUncategorized);
    ASSERT_OK(it->status());
    ASSERT_TRUE(!it->Valid());
    it->SeekToFirst();
    int cnt = 0;
    while (it->Valid()) {
      ASSERT_OK(it->status());
      ASSERT_TRUE(Slice(keys[cnt]) == it->key());
      ASSERT_TRUE(Slice(values[cnt]) == it->value());
      ++cnt;
      it->Next();
    }
    ASSERT_EQ(static_cast<uint32_t>(cnt), num_items);

    it->SeekToLast();
    cnt = static_cast<int>(num_items) - 1;
    ASSERT_TRUE(it->Valid());
    while (it->Valid()) {
      ASSERT_OK(it->status());
      ASSERT_TRUE(Slice(keys[cnt]) == it->key());
      ASSERT_TRUE(Slice(values[cnt]) == it->value());
      --cnt;
      it->Prev();
    }
    ASSERT_EQ(cnt, -1);

    cnt = static_cast<int>(num_items) / 2;
    it->Seek(keys[cnt]);
    while (it->Valid()) {
      ASSERT_OK(it->status());
      ASSERT_TRUE(Slice(keys[cnt]) == it->key());
      ASSERT_TRUE(Slice(values[cnt]) == it->value());
      ++cnt;
      it->Next();
    }
    ASSERT_EQ(static_cast<uint32_t>(cnt), num_items);
    delete it;

    Arena arena;
    it = reader.NewIterator(ReadOptions(), /*prefix_extractor=*/nullptr, &arena,
                            /*skip_filters=*/false,
                            TableReaderCaller::kUncategorized);
    ASSERT_OK(it->status());
    ASSERT_TRUE(!it->Valid());
    it->Seek(keys[num_items / 2]);
    ASSERT_TRUE(it->Valid());
    ASSERT_OK(it->status());
    ASSERT_TRUE(keys[num_items / 2] == it->key());
    ASSERT_TRUE(values[num_items / 2] == it->value());
    ASSERT_OK(it->status());
    it->~InternalIterator();
  }

  std::vector<std::string> keys;
  std::vector<std::string> user_keys;
  std::vector<std::string> values;
  uint64_t num_items;
  std::string fname;
  uint64_t file_size;
  Options options;
  Env* env;
  FileOptions file_options;
};
TEST_F(CuckooReaderTest, FileNotMmaped) {
  options.allow_mmap_reads = false;
  ImmutableOptions ioptions(options);
  CuckooTableReader reader(ioptions, nullptr, 0, nullptr, nullptr);
  ASSERT_TRUE(reader.status().IsInvalidArgument());
  ASSERT_STREQ("File is not mmaped", reader.status().getState());
}
TEST_F(CuckooReaderTest, WhenKeyExists) {
  SetUp(kNumHashFunc);
  fname = test::PerThreadDBPath("CuckooReader_WhenKeyExists");
  for (uint64_t i = 0; i < num_items; i++) {
    user_keys[i] = "key" + NumToStr(i);
    ParsedInternalKey ikey(user_keys[i], i + 1000, kTypeValue);
    AppendInternalKey(&keys[i], ikey);
    values[i] = "value" + NumToStr(i);
    // Give disjoint hash values.
    AddHashLookups(user_keys[i], i, kNumHashFunc);
  }
  CreateCuckooFileAndCheckReader();
  // Last level file.
  UpdateKeys(true);
  CreateCuckooFileAndCheckReader();
  // Test with collision. Make all hash values collide.
  hash_map.clear();
  for (uint32_t i = 0; i < num_items; i++) {
    AddHashLookups(user_keys[i], 0, kNumHashFunc);
  }
  UpdateKeys(false);
  CreateCuckooFileAndCheckReader();
  // Last level file.
  UpdateKeys(true);
  CreateCuckooFileAndCheckReader();
}
TEST_F(CuckooReaderTest, WhenKeyExistsWithUint64Comparator) {
  SetUp(kNumHashFunc);
  fname = test::PerThreadDBPath("CuckooReaderUint64_WhenKeyExists");
  for (uint64_t i = 0; i < num_items; i++) {
    user_keys[i].resize(8);
    memcpy(user_keys[i].data(), static_cast<void*>(&i), 8);
    ParsedInternalKey ikey(user_keys[i], i + 1000, kTypeValue);
    AppendInternalKey(&keys[i], ikey);
    values[i] = "value" + NumToStr(i);
    // Give disjoint hash values.
    AddHashLookups(user_keys[i], i, kNumHashFunc);
  }
  CreateCuckooFileAndCheckReader(test::Uint64Comparator());
  // Last level file.
  UpdateKeys(true);
  CreateCuckooFileAndCheckReader(test::Uint64Comparator());
  // Test with collision. Make all hash values collide.
  hash_map.clear();
  for (uint32_t i = 0; i < num_items; i++) {
    AddHashLookups(user_keys[i], 0, kNumHashFunc);
  }
  UpdateKeys(false);
  CreateCuckooFileAndCheckReader(test::Uint64Comparator());
  // Last level file.
  UpdateKeys(true);
  CreateCuckooFileAndCheckReader(test::Uint64Comparator());
}
TEST_F(CuckooReaderTest, CheckIterator) {
  SetUp(2 * kNumHashFunc);
  fname = test::PerThreadDBPath("CuckooReader_CheckIterator");
  for (uint64_t i = 0; i < num_items; i++) {
    user_keys[i] = "key" + NumToStr(i);
    ParsedInternalKey ikey(user_keys[i], 1000, kTypeValue);
    AppendInternalKey(&keys[i], ikey);
    values[i] = "value" + NumToStr(i);
    // Give disjoint hash values, in reverse order.
    AddHashLookups(user_keys[i], num_items - i - 1, kNumHashFunc);
  }
  CreateCuckooFileAndCheckReader();
  CheckIterator();
  // Last level file.
  UpdateKeys(true);
  CreateCuckooFileAndCheckReader();
  CheckIterator();
}
TEST_F(CuckooReaderTest, CheckIteratorUint64) {
  SetUp(2 * kNumHashFunc);
  fname = test::PerThreadDBPath("CuckooReader_CheckIterator");
  for (uint64_t i = 0; i < num_items; i++) {
    user_keys[i].resize(8);
    memcpy(user_keys[i].data(), static_cast<void*>(&i), 8);
    ParsedInternalKey ikey(user_keys[i], 1000, kTypeValue);
    AppendInternalKey(&keys[i], ikey);
    values[i] = "value" + NumToStr(i);
    // Give disjoint hash values, in reverse order.
    AddHashLookups(user_keys[i], num_items - i - 1, kNumHashFunc);
  }
  CreateCuckooFileAndCheckReader(test::Uint64Comparator());
  CheckIterator(test::Uint64Comparator());
  // Last level file.
  UpdateKeys(true);
  CreateCuckooFileAndCheckReader(test::Uint64Comparator());
  CheckIterator(test::Uint64Comparator());
}
TEST_F(CuckooReaderTest, WhenKeyNotFound) {
  // Add keys with colliding hash values.
  SetUp(kNumHashFunc);
  fname = test::PerThreadDBPath("CuckooReader_WhenKeyNotFound");
  for (uint64_t i = 0; i < num_items; i++) {
    user_keys[i] = "key" + NumToStr(i);
    ParsedInternalKey ikey(user_keys[i], i + 1000, kTypeValue);
    AppendInternalKey(&keys[i], ikey);
    values[i] = "value" + NumToStr(i);
    // Make all hash values collide.
    AddHashLookups(user_keys[i], 0, kNumHashFunc);
  }
  auto* ucmp = BytewiseComparator();
  CreateCuckooFileAndCheckReader();
  std::unique_ptr<RandomAccessFileReader> file_reader;
  ASSERT_OK(RandomAccessFileReader::Create(
      env->GetFileSystem(), fname, file_options, &file_reader, nullptr));
  const ImmutableOptions ioptions(options);
  CuckooTableReader reader(ioptions, std::move(file_reader), file_size, ucmp,
                           GetSliceHash);
  ASSERT_OK(reader.status());

  // Search for a key with colliding hash values.
  std::string not_found_user_key = "key" + NumToStr(num_items);
  std::string not_found_key;
  AddHashLookups(not_found_user_key, 0, kNumHashFunc);
  ParsedInternalKey ikey(not_found_user_key, 1000, kTypeValue);
  AppendInternalKey(&not_found_key, ikey);
  PinnableSlice value;
  GetContext get_context(ucmp, nullptr, nullptr, nullptr, GetContext::kNotFound,
                         Slice(not_found_key), &value, nullptr, nullptr,
                         nullptr, nullptr, true, nullptr, nullptr);
  ASSERT_OK(
      reader.Get(ReadOptions(), Slice(not_found_key), &get_context, nullptr));
  ASSERT_TRUE(value.empty());
  ASSERT_OK(reader.status());

  // Search for a key with an independent hash value.
  std::string not_found_user_key2 = "key" + NumToStr(num_items + 1);
  AddHashLookups(not_found_user_key2, kNumHashFunc, kNumHashFunc);
  ParsedInternalKey ikey2(not_found_user_key2, 1000, kTypeValue);
  std::string not_found_key2;
  AppendInternalKey(&not_found_key2, ikey2);
  value.Reset();
  GetContext get_context2(ucmp, nullptr, nullptr, nullptr,
                          GetContext::kNotFound, Slice(not_found_key2), &value,
                          nullptr, nullptr, nullptr, nullptr, true, nullptr,
                          nullptr);
  ASSERT_OK(
      reader.Get(ReadOptions(), Slice(not_found_key2), &get_context2, nullptr));
  ASSERT_TRUE(value.empty());
  ASSERT_OK(reader.status());

  // Test read when key is unused key.
  std::string unused_key =
      reader.GetTableProperties()->user_collected_properties.at(
          CuckooTablePropertyNames::kEmptyKey);
  // Add hash values that map to empty buckets.
  AddHashLookups(ExtractUserKey(unused_key).ToString(), kNumHashFunc,
                 kNumHashFunc);
  value.Reset();
  GetContext get_context3(
      ucmp, nullptr, nullptr, nullptr, GetContext::kNotFound, Slice(unused_key),
      &value, nullptr, nullptr, nullptr, nullptr, true, nullptr, nullptr);
  ASSERT_OK(
      reader.Get(ReadOptions(), Slice(unused_key), &get_context3, nullptr));
  ASSERT_TRUE(value.empty());
  ASSERT_OK(reader.status());
}
// Performance tests
namespace {
void GetKeys(uint64_t num, std::vector<std::string>* keys) {
  keys->clear();
  IterKey k;
  k.SetInternalKey("", 0, kTypeValue);
  std::string internal_key_suffix = k.GetInternalKey().ToString();
  ASSERT_EQ(static_cast<size_t>(8), internal_key_suffix.size());
  for (uint64_t key_idx = 0; key_idx < num; ++key_idx) {
    uint64_t value = 2 * key_idx;
    std::string new_key(reinterpret_cast<char*>(&value), sizeof(value));
    new_key += internal_key_suffix;
    keys->push_back(new_key);
  }
}
std::string GetFileName(uint64_t num) {
  if (FLAGS_file_dir.empty()) {
    FLAGS_file_dir = test::TmpDir();
  }
  return test::PerThreadDBPath(FLAGS_file_dir, "cuckoo_read_benchmark") +
         std::to_string(num / 1000000) + "Mkeys";
}
// Create last level file as we are interested in measuring performance of
// last level file only.
void WriteFile(const std::vector<std::string>& keys, const uint64_t num,
               double hash_ratio) {
  Options options;
  options.allow_mmap_reads = true;
  const auto& fs = options.env->GetFileSystem();
  FileOptions file_options(options);
  std::string fname = GetFileName(num);

  std::unique_ptr<WritableFileWriter> file_writer;
  ASSERT_OK(WritableFileWriter::Create(fs, fname, file_options, &file_writer,
                                       nullptr));
  CuckooTableBuilder builder(
      file_writer.get(), hash_ratio, 64, 1000, test::Uint64Comparator(), 5,
      false, FLAGS_identity_as_first_hash, nullptr, 0 /* column_family_id */,
      kDefaultColumnFamilyName);
  ASSERT_OK(builder.status());
  for (uint64_t key_idx = 0; key_idx < num; ++key_idx) {
    // Value is just a part of key.
    builder.Add(Slice(keys[key_idx]), Slice(keys[key_idx].data(), 4));
    ASSERT_EQ(builder.NumEntries(), key_idx + 1);
    ASSERT_OK(builder.status());
  }
  ASSERT_OK(builder.Finish());
  ASSERT_EQ(num, builder.NumEntries());
  ASSERT_OK(file_writer->Close(IOOptions()));

  uint64_t file_size;
  ASSERT_OK(
      fs->GetFileSize(fname, file_options.io_options, &file_size, nullptr));
  std::unique_ptr<RandomAccessFileReader> file_reader;
  ASSERT_OK(RandomAccessFileReader::Create(fs, fname, file_options,
                                           &file_reader, nullptr));
  const ImmutableOptions ioptions(options);
  CuckooTableReader reader(ioptions, std::move(file_reader), file_size,
                           test::Uint64Comparator(), nullptr);
  ASSERT_OK(reader.status());
  ReadOptions r_options;
  PinnableSlice value;
  // Assume only the fast path is triggered
  GetContext get_context(nullptr, nullptr, nullptr, nullptr,
                         GetContext::kNotFound, Slice(), &value, nullptr,
                         nullptr, nullptr, true, nullptr, nullptr);
  for (uint64_t i = 0; i < num; ++i) {
    value.Reset();
    value.clear();
    ASSERT_OK(reader.Get(r_options, Slice(keys[i]), &get_context, nullptr));
    // The stored value is the first 4 bytes of the key (see builder.Add
    // above); the original assertion compared the full key to its 4-byte
    // prefix, which can never hold.
    ASSERT_TRUE(value == Slice(keys[i].data(), 4));
  }
}
void ReadKeys(uint64_t num, uint32_t batch_size) {
  Options options;
  options.allow_mmap_reads = true;
  Env* env = options.env;
  const auto& fs = options.env->GetFileSystem();
  FileOptions file_options(options);
  std::string fname = GetFileName(num);

  uint64_t file_size;
  ASSERT_OK(
      fs->GetFileSize(fname, file_options.io_options, &file_size, nullptr));
  std::unique_ptr<RandomAccessFileReader> file_reader;
  ASSERT_OK(RandomAccessFileReader::Create(fs, fname, file_options,
                                           &file_reader, nullptr));
  const ImmutableOptions ioptions(options);
  CuckooTableReader reader(ioptions, std::move(file_reader), file_size,
                           test::Uint64Comparator(), nullptr);
  ASSERT_OK(reader.status());
  const UserCollectedProperties user_props =
      reader.GetTableProperties()->user_collected_properties;
  const uint32_t num_hash_fun = *reinterpret_cast<const uint32_t*>(
      user_props.at(CuckooTablePropertyNames::kNumHashFunc).data());
  const uint64_t table_size = *reinterpret_cast<const uint64_t*>(
      user_props.at(CuckooTablePropertyNames::kHashTableSize).data());
  fprintf(stderr,
          "With %" PRIu64
          " items, utilization is %.2f%%, number of"
          " hash functions: %u.\n",
          num, num * 100.0 / (table_size), num_hash_fun);
  ReadOptions r_options;

  std::vector<uint64_t> keys;
  keys.reserve(num);
  for (uint64_t i = 0; i < num; ++i) {
    keys.push_back(2 * i);
  }
  RandomShuffle(keys.begin(), keys.end());

  PinnableSlice value;
  // Assume only the fast path is triggered
  GetContext get_context(nullptr, nullptr, nullptr, nullptr,
                         GetContext::kNotFound, Slice(), &value, nullptr,
                         nullptr, nullptr, true, nullptr, nullptr);
  uint64_t start_time = env->NowMicros();
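  // Batched mode: Prepare() issues prefetch hints for each key's candidate
  // buckets before the Get() calls probe them.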
  if (batch_size > 0) {
    for (uint64_t i = 0; i < num; i += batch_size) {
      for (uint64_t j = i; j < i + batch_size && j < num; ++j) {
        reader.Prepare(Slice(reinterpret_cast<char*>(&keys[j]), 16));
      }
      for (uint64_t j = i; j < i + batch_size && j < num; ++j) {
        reader.Get(r_options, Slice(reinterpret_cast<char*>(&keys[j]), 16),
                   &get_context, nullptr);
      }
    }
  } else {
    for (uint64_t i = 0; i < num; i++) {
      reader.Get(r_options, Slice(reinterpret_cast<char*>(&keys[i]), 16),
                 &get_context, nullptr);
    }
  }
  float time_per_op = (env->NowMicros() - start_time) * 1.0f / num;
  fprintf(stderr,
          "Time taken per op is %.3fus (%.1f Mqps) with batch size of %u\n",
          time_per_op, 1.0 / time_per_op, batch_size);
}
} // namespace.
TEST_F(CuckooReaderTest, TestReadPerformance) {
  if (!FLAGS_enable_perf) {
    return;
  }
  double hash_ratio = 0.95;
  // These numbers are chosen to have a hash utilization % close to
  // 0.9, 0.75, 0.6 and 0.5 respectively.
  // They all create 128 M buckets.
  std::vector<uint64_t> nums = {120 * 1024 * 1024, 100 * 1024 * 1024,
                                80 * 1024 * 1024, 70 * 1024 * 1024};
#ifndef NDEBUG
  fprintf(
      stdout,
      "WARNING: Not compiled with DNDEBUG. Performance tests may be slow.\n");
#endif
  for (uint64_t num : nums) {
    if (FLAGS_write ||
        Env::Default()->FileExists(GetFileName(num)).IsNotFound()) {
      std::vector<std::string> all_keys;
      GetKeys(num, &all_keys);
      WriteFile(all_keys, num, hash_ratio);
    }
    ReadKeys(num, 0);
    ReadKeys(num, 10);
    ReadKeys(num, 25);
    ReadKeys(num, 50);
    ReadKeys(num, 100);
    fprintf(stderr, "\n");
  }
}
} // namespace ROCKSDB_NAMESPACE
int main(int argc, char** argv) {
  if (ROCKSDB_NAMESPACE::port::kLittleEndian) {
    ROCKSDB_NAMESPACE::port::InstallStackTraceHandler();
    ::testing::InitGoogleTest(&argc, argv);
    ParseCommandLineFlags(&argc, &argv, true);
    return RUN_ALL_TESTS();
  } else {
    fprintf(stderr, "SKIPPED as Cuckoo table doesn't support Big Endian\n");
    return 0;
  }
}
#endif // GFLAGS.
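
For reference, a plausible way to build and run this test with the perf benchmarks enabled (the make target follows RocksDB's usual per-test naming; the flags are the ones defined at the top of this file, and pointing --file_dir at tmpfs is optional):
```
make cuckoo_table_reader_test -j32
./cuckoo_table_reader_test --enable_perf --file_dir=/dev/shm --write
```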