// Copyright (c) 2011-present, Facebook, Inc. All rights reserved.
// This source code is licensed under both the GPLv2 (found in the
// COPYING file in the root directory) and Apache 2.0 License
// (found in the LICENSE.Apache file in the root directory).
//
#include "table/block_based/block.h"
|
|
|
|
|
2016-08-27 01:55:58 +00:00
|
|
|
#include <algorithm>
|
2023-12-01 19:10:30 +00:00
|
|
|
#include <cstdio>
|
2016-08-27 01:55:58 +00:00
|
|
|
#include <set>
|
2012-12-20 19:05:41 +00:00
|
|
|
#include <string>
|
2016-08-27 01:55:58 +00:00
|
|
|
#include <unordered_set>
|
|
|
|
#include <utility>
|
2014-04-10 21:19:43 +00:00
|
|
|
#include <vector>
|
|
|
|
|
2023-04-25 19:08:23 +00:00
|
|
|
#include "db/db_test_util.h"
|
2012-12-20 19:05:41 +00:00
|
|
|
#include "db/dbformat.h"
|
2017-04-06 20:59:31 +00:00
|
|
|
#include "db/memtable.h"
|
2019-03-27 23:13:08 +00:00
|
|
|
#include "db/write_batch_internal.h"
|
2013-08-23 15:38:13 +00:00
|
|
|
#include "rocksdb/db.h"
|
|
|
|
#include "rocksdb/env.h"
|
|
|
|
#include "rocksdb/iterator.h"
|
2014-04-10 21:19:43 +00:00
|
|
|
#include "rocksdb/slice_transform.h"
|
2019-03-27 23:13:08 +00:00
|
|
|
#include "rocksdb/table.h"
|
Improve / clean up meta block code & integrity (#9163)
Summary:
* Checksums are now checked on meta blocks unless specifically
suppressed or not applicable (e.g. plain table). (Was other way around.)
This means a number of cases that were not checking checksums now are,
including direct read TableProperties in Version::GetTableProperties
(fixed in meta_blocks ReadTableProperties), reading any block from
PersistentCache (fixed in BlockFetcher), read TableProperties in
SstFileDumper (ldb/sst_dump/BackupEngine) before table reader open,
maybe more.
* For that to work, I moved the global_seqno+TableProperties checksum
logic to the shared table/ code, because that is used by many utilies
such as SstFileDumper.
* Also for that to work, we have to know when we're dealing with a block
that has a checksum (trailer), so added that capability to Footer based
on magic number, and from there BlockFetcher.
* Knowledge of trailer presence has also fixed a problem where other
table formats were reading blocks including bytes for a non-existant
trailer--and awkwardly kind-of not using them, e.g. no shared code
checking checksums. (BlockFetcher compression type was populated
incorrectly.) Now we only read what is needed.
* Minimized code duplication and differing/incompatible/awkward
abstractions in meta_blocks.{cc,h} (e.g. SeekTo in metaindex block
without parsing block handle)
* Moved some meta block handling code from table_properties*.*
* Moved some code specific to block-based table from shared table/ code
to BlockBasedTable class. The checksum stuff means we can't completely
separate it, but things that don't need to be in shared table/ code
should not be.
* Use unique_ptr rather than raw ptr in more places. (Note: you can
std::move from unique_ptr to shared_ptr.)
Without enhancements to GetPropertiesOfAllTablesTest (see below),
net reduction of roughly 100 lines of code.
Pull Request resolved: https://github.com/facebook/rocksdb/pull/9163
Test Plan:
existing tests and
* Enhanced DBTablePropertiesTest.GetPropertiesOfAllTablesTest to verify that
checksums are now checked on direct read of table properties by TableCache
(new test would fail before this change)
* Also enhanced DBTablePropertiesTest.GetPropertiesOfAllTablesTest to test
putting table properties under old meta name
* Also generally enhanced that same test to actually test what it was
supposed to be testing already, by kicking things out of table cache when
we don't want them there.
Reviewed By: ajkr, mrambacher
Differential Revision: D32514757
Pulled By: pdillinger
fbshipit-source-id: 507964b9311d186ae8d1131182290cbd97a99fa9
2021-11-18 19:42:12 +00:00
|
|
|
#include "table/block_based/block_based_table_reader.h"
|
2019-05-30 21:47:29 +00:00
|
|
|
#include "table/block_based/block_builder.h"
|
2012-12-20 19:05:41 +00:00
|
|
|
#include "table/format.h"
|
2019-05-30 18:21:38 +00:00
|
|
|
#include "test_util/testharness.h"
|
|
|
|
#include "test_util/testutil.h"
|
2019-05-31 00:39:43 +00:00
|
|
|
#include "util/random.h"
|
2012-12-20 19:05:41 +00:00
|
|
|
|
2020-02-20 20:07:53 +00:00
|
|
|
namespace ROCKSDB_NAMESPACE {
|
2012-12-20 19:05:41 +00:00
|
|
|
|
2020-07-08 00:25:08 +00:00
|
|
|
std::string GenerateInternalKey(int primary_key, int secondary_key,
                                int padding_size, Random *rnd,
                                size_t ts_sz = 0) {
  char buf[50];
  char *p = &buf[0];
  snprintf(buf, sizeof(buf), "%6d%4d", primary_key, secondary_key);
  std::string k(p);
  if (padding_size) {
    k += rnd->RandomString(padding_size);
  }
  AppendInternalKeyFooter(&k, 0 /* seqno */, kTypeValue);
  std::string key_with_ts;
  if (ts_sz > 0) {
    PadInternalKeyWithMinTimestamp(&key_with_ts, k, ts_sz);
    return key_with_ts;
  }

  return k;
}
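
// Layout of a generated key, as a descriptive sketch of the code above: a
// fixed-width "%6d%4d" user key (primary + secondary id), optional random
// padding, then the 8-byte internal-key footer (seqno 0, kTypeValue). When
// ts_sz > 0, PadInternalKeyWithMinTimestamp presumably inserts a ts_sz-byte
// minimum timestamp between the user key and the footer.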

// Generate random key value pairs.
// The generated keys will be sorted. You can tune the parameters to generate
// different kinds of test key/value pairs for different scenarios.
void GenerateRandomKVs(std::vector<std::string> *keys,
                       std::vector<std::string> *values, const int from,
                       const int len, const int step = 1,
                       const int padding_size = 0,
                       const int keys_share_prefix = 1, size_t ts_sz = 0) {
  Random rnd(302);

  // generate a different prefix
  for (int i = from; i < from + len; i += step) {
    // generate keys that share the prefix
    for (int j = 0; j < keys_share_prefix; ++j) {
      // `DataBlockIter` assumes it reads only internal keys.
      keys->emplace_back(GenerateInternalKey(i, j, padding_size, &rnd, ts_sz));

      // 100-byte values
      values->emplace_back(rnd.RandomString(100));
    }
  }
}
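
// Example use (a sketch): GenerateRandomKVs(&keys, &values, 0 /* from */,
// 1000 /* len */, 2 /* step */, 8 /* padding_size */) yields 500 sorted
// internal keys for ids 0, 2, ..., 998, each with an 8-byte random suffix
// and paired with a 100-byte random value.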

// Test Param 1): key use delta encoding.
// Test Param 2): user-defined timestamp test mode.
// Test Param 3): data block index type.
class BlockTest : public testing::Test,
                  public testing::WithParamInterface<
                      std::tuple<bool, test::UserDefinedTimestampTestMode,
                                 BlockBasedTableOptions::DataBlockIndexType>> {
 public:
  bool keyUseDeltaEncoding() const { return std::get<0>(GetParam()); }
  bool isUDTEnabled() const {
    return test::IsUDTEnabled(std::get<1>(GetParam()));
  }
  bool shouldPersistUDT() const {
    return test::ShouldPersistUDT(std::get<1>(GetParam()));
  }

  BlockBasedTableOptions::DataBlockIndexType dataBlockIndexType() const {
    return std::get<2>(GetParam());
  }
};

// block test
TEST_P(BlockTest, SimpleTest) {
  Random rnd(301);
  Options options = Options();
  if (isUDTEnabled()) {
    options.comparator = test::BytewiseComparatorWithU64TsWrapper();
  }
  size_t ts_sz = options.comparator->timestamp_size();

  std::vector<std::string> keys;
  std::vector<std::string> values;
  BlockBasedTableOptions::DataBlockIndexType index_type =
      isUDTEnabled() ? BlockBasedTableOptions::kDataBlockBinarySearch
                     : dataBlockIndexType();
  BlockBuilder builder(16, keyUseDeltaEncoding(),
                       false /* use_value_delta_encoding */, index_type,
                       0.75 /* data_block_hash_table_util_ratio */, ts_sz,
                       shouldPersistUDT(), false /* is_user_key */);
  int num_records = 100000;

  GenerateRandomKVs(&keys, &values, 0, num_records, 1 /* step */,
                    0 /* padding_size */, 1 /* keys_share_prefix */, ts_sz);
  // add a bunch of records to a block
  for (int i = 0; i < num_records; i++) {
    builder.Add(keys[i], values[i]);
  }

  // read serialized contents of the block
  Slice rawblock = builder.Finish();

  // create block reader
  BlockContents contents;
  contents.data = rawblock;
  Block reader(std::move(contents));

  // read contents of block sequentially
  int count = 0;
  InternalIterator *iter = reader.NewDataIterator(
      options.comparator, kDisableGlobalSequenceNumber, nullptr /* iter */,
      nullptr /* stats */, false /* block_contents_pinned */,
      shouldPersistUDT());
  for (iter->SeekToFirst(); iter->Valid(); count++, iter->Next()) {
    // read kv from block
    Slice k = iter->key();
    Slice v = iter->value();

    // compare with lookaside array
    ASSERT_EQ(k.ToString().compare(keys[count]), 0);
    ASSERT_EQ(v.ToString().compare(values[count]), 0);
  }
  delete iter;

  // read block contents randomly
  iter = reader.NewDataIterator(
      options.comparator, kDisableGlobalSequenceNumber, nullptr /* iter */,
      nullptr /* stats */, false /* block_contents_pinned */,
      shouldPersistUDT());
  for (int i = 0; i < num_records; i++) {
    // find a random key in the lookaside array
    int index = rnd.Uniform(num_records);
    Slice k(keys[index]);

    // search in block for this key
    iter->Seek(k);
    ASSERT_TRUE(iter->Valid());
    Slice v = iter->value();
    ASSERT_EQ(v.ToString().compare(values[index]), 0);
  }
  delete iter;
}

// return the block contents
BlockContents GetBlockContents(
    std::unique_ptr<BlockBuilder> *builder,
    const std::vector<std::string> &keys,
    const std::vector<std::string> &values, bool key_use_delta_encoding,
    size_t ts_sz, bool should_persist_udt, const int /*prefix_group_size*/ = 1,
    BlockBasedTableOptions::DataBlockIndexType dblock_index_type =
        BlockBasedTableOptions::DataBlockIndexType::kDataBlockBinarySearch) {
  builder->reset(
      new BlockBuilder(1 /* restart interval */, key_use_delta_encoding,
                       false /* use_value_delta_encoding */, dblock_index_type,
                       0.75 /* data_block_hash_table_util_ratio */, ts_sz,
                       should_persist_udt, false /* is_user_key */));

  // Add all of the keys
  for (size_t i = 0; i < keys.size(); ++i) {
    (*builder)->Add(keys[i], values[i]);
  }
  Slice rawblock = (*builder)->Finish();

  BlockContents contents;
  contents.data = rawblock;

  return contents;
}
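
// Note on the helper above: the returned BlockContents only wraps the Slice
// produced by BlockBuilder::Finish(), so the BlockBuilder handed back through
// `builder` must outlive any Block constructed from these contents.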

void CheckBlockContents(BlockContents contents, const int max_key,
                        const std::vector<std::string> &keys,
                        const std::vector<std::string> &values,
                        bool is_udt_enabled, bool should_persist_udt) {
  const size_t prefix_size = 6;
  // create block reader
  BlockContents contents_ref(contents.data);
  Block reader1(std::move(contents));
  Block reader2(std::move(contents_ref));

  std::unique_ptr<const SliceTransform> prefix_extractor(
      NewFixedPrefixTransform(prefix_size));

  std::unique_ptr<InternalIterator> regular_iter(reader2.NewDataIterator(
      is_udt_enabled ? test::BytewiseComparatorWithU64TsWrapper()
                     : BytewiseComparator(),
      kDisableGlobalSequenceNumber, nullptr /* iter */, nullptr /* stats */,
      false /* block_contents_pinned */, should_persist_udt));

  // Seek existent keys
  for (size_t i = 0; i < keys.size(); i++) {
    regular_iter->Seek(keys[i]);
    ASSERT_OK(regular_iter->status());
    ASSERT_TRUE(regular_iter->Valid());

    Slice v = regular_iter->value();
    ASSERT_EQ(v.ToString().compare(values[i]), 0);
  }

  // Seek non-existent keys.
  // For the hash index, if no key with a given prefix is found, the iterator
  // will simply be set as invalid; whereas the binary search based iterator
  // will return the one that is closest.
  for (int i = 1; i < max_key - 1; i += 2) {
    // `DataBlockIter` assumes its APIs receive only internal keys.
    auto key = GenerateInternalKey(i, 0, 0, nullptr,
                                   is_udt_enabled ? 8 : 0 /* ts_sz */);
    regular_iter->Seek(key);
    ASSERT_TRUE(regular_iter->Valid());
  }
}

// In this test case, no two keys share the same prefix.
TEST_P(BlockTest, SimpleIndexHash) {
  const int kMaxKey = 100000;
  size_t ts_sz = isUDTEnabled() ? 8 : 0;
  std::vector<std::string> keys;
  std::vector<std::string> values;
  GenerateRandomKVs(&keys, &values, 0 /* first key id */,
                    kMaxKey /* last key id */, 2 /* step */,
                    8 /* padding size (8 bytes randomly generated suffix) */,
                    1 /* keys_share_prefix */, ts_sz);

  std::unique_ptr<BlockBuilder> builder;

  auto contents = GetBlockContents(
      &builder, keys, values, keyUseDeltaEncoding(), ts_sz, shouldPersistUDT(),
      1 /* prefix_group_size */,
      isUDTEnabled()
          ? BlockBasedTableOptions::DataBlockIndexType::kDataBlockBinarySearch
          : dataBlockIndexType());

  CheckBlockContents(std::move(contents), kMaxKey, keys, values, isUDTEnabled(),
                     shouldPersistUDT());
}

TEST_P(BlockTest, IndexHashWithSharedPrefix) {
  const int kMaxKey = 100000;
  // for each prefix, there will be 5 keys starting with it.
  const int kPrefixGroup = 5;
  size_t ts_sz = isUDTEnabled() ? 8 : 0;
  std::vector<std::string> keys;
  std::vector<std::string> values;
  // Generate keys with the same prefix.
  GenerateRandomKVs(&keys, &values, 0,  // first key id
                    kMaxKey,            // last key id
                    2 /* step */,
                    10 /* padding size (10 bytes randomly generated suffix) */,
                    kPrefixGroup /* keys_share_prefix */, ts_sz);

  std::unique_ptr<BlockBuilder> builder;

  auto contents = GetBlockContents(
      &builder, keys, values, keyUseDeltaEncoding(), ts_sz, shouldPersistUDT(),
      kPrefixGroup,
      isUDTEnabled()
          ? BlockBasedTableOptions::DataBlockIndexType::kDataBlockBinarySearch
          : dataBlockIndexType());

  CheckBlockContents(std::move(contents), kMaxKey, keys, values, isUDTEnabled(),
                     shouldPersistUDT());
}

// Param 0: key use delta encoding
// Param 1: user-defined timestamp test mode
// Param 2: data block index type. User-defined timestamp feature is not
// compatible with `kDataBlockBinaryAndHash` data block index type because the
// user comparator doesn't provide a `CanKeysWithDifferentByteContentsBeEqual`
// override. This combination is disabled.
INSTANTIATE_TEST_CASE_P(
    P, BlockTest,
    ::testing::Combine(
        ::testing::Bool(), ::testing::ValuesIn(test::GetUDTTestModes()),
        ::testing::Values(
            BlockBasedTableOptions::DataBlockIndexType::kDataBlockBinarySearch,
            BlockBasedTableOptions::DataBlockIndexType::
                kDataBlockBinaryAndHash)));

// A slow and accurate version of BlockReadAmpBitmap that simply stores
// all the marked ranges in a set.
class BlockReadAmpBitmapSlowAndAccurate {
 public:
  void Mark(size_t start_offset, size_t end_offset) {
    assert(end_offset >= start_offset);
    marked_ranges_.emplace(end_offset, start_offset);
  }

  void ResetCheckSequence() { iter_valid_ = false; }

  // Return true if the byte at `offset` falls within a Marked range.
  // This does linear search from the previous position. When calling
  // multiple times, `offset` needs to be non-decreasing to get correct
  // results. Call ResetCheckSequence() to reset it.
  bool IsPinMarked(size_t offset) {
    if (iter_valid_) {
      // Has existing iterator, try linear search from
      // the iterator.
      for (int i = 0; i < 64; i++) {
        if (offset < iter_->second) {
          return false;
        }
        if (offset <= iter_->first) {
          return true;
        }

        iter_++;
        if (iter_ == marked_ranges_.end()) {
          iter_valid_ = false;
          return false;
        }
      }
    }
    // Initial call or have linear searched too many times.
    // Do binary search.
    iter_ = marked_ranges_.lower_bound(
        std::make_pair(offset, static_cast<size_t>(0)));
    if (iter_ == marked_ranges_.end()) {
      iter_valid_ = false;
      return false;
    }
    iter_valid_ = true;
    return offset <= iter_->first && offset >= iter_->second;
  }

 private:
  std::set<std::pair<size_t, size_t>> marked_ranges_;
  std::set<std::pair<size_t, size_t>>::iterator iter_;
  bool iter_valid_ = false;
};
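
// Sketch of how the checker above is exercised (assumed usage, mirroring the
// test below):
//
//   BlockReadAmpBitmapSlowAndAccurate checker;
//   checker.Mark(10, 20);                 // bytes [10, 20] were read
//   checker.ResetCheckSequence();
//   bool hit = checker.IsPinMarked(15);   // true: 15 lies inside [10, 20]
//   bool miss = checker.IsPinMarked(25);  // false: 25 lies past the range
//
// Within one check sequence, IsPinMarked() must be called with non-decreasing
// offsets because the fast path only scans forward from the cached iterator.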

TEST_F(BlockTest, BlockReadAmpBitmap) {
  uint32_t pin_offset = 0;
  SyncPoint::GetInstance()->SetCallBack(
      "BlockReadAmpBitmap:rnd", [&pin_offset](void *arg) {
        pin_offset = *(static_cast<uint32_t *>(arg));
      });
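  // The sync point above captures the random in-bit byte offset that
  // BlockReadAmpBitmap picks when it is constructed, so the slow-and-accurate
  // checker below can probe the same representative byte
  // (bit_idx * kBytesPerBit + pin_offset) for every bit.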
  SyncPoint::GetInstance()->EnableProcessing();
  std::vector<size_t> block_sizes = {
      1,                // 1 byte
      32,               // 32 bytes
      61,               // 61 bytes
      64,               // 64 bytes
      512,              // 0.5 KB
      1024,             // 1 KB
      1024 * 4,         // 4 KB
      1024 * 10,        // 10 KB
      1024 * 50,        // 50 KB
      1024 * 1024 * 4,  // 4 MB
      777,
      124653,
  };
  const size_t kBytesPerBit = 64;

  Random rnd(301);
  for (size_t block_size : block_sizes) {
    std::shared_ptr<Statistics> stats = ROCKSDB_NAMESPACE::CreateDBStatistics();
    BlockReadAmpBitmap read_amp_bitmap(block_size, kBytesPerBit, stats.get());
    BlockReadAmpBitmapSlowAndAccurate read_amp_slow_and_accurate;

    size_t needed_bits = (block_size / kBytesPerBit);
    if (block_size % kBytesPerBit != 0) {
      needed_bits++;
    }

    ASSERT_EQ(stats->getTickerCount(READ_AMP_TOTAL_READ_BYTES), block_size);

    // Generate some random entries
    std::vector<size_t> random_entry_offsets;
    for (int i = 0; i < 1000; i++) {
      random_entry_offsets.push_back(rnd.Next() % block_size);
    }
    std::sort(random_entry_offsets.begin(), random_entry_offsets.end());
    auto it =
        std::unique(random_entry_offsets.begin(), random_entry_offsets.end());
    random_entry_offsets.resize(
        std::distance(random_entry_offsets.begin(), it));

    std::vector<std::pair<size_t, size_t>> random_entries;
    for (size_t i = 0; i < random_entry_offsets.size(); i++) {
      size_t entry_start = random_entry_offsets[i];
      size_t entry_end;
      if (i + 1 < random_entry_offsets.size()) {
        entry_end = random_entry_offsets[i + 1] - 1;
      } else {
        entry_end = block_size - 1;
      }
      random_entries.emplace_back(entry_start, entry_end);
    }

    for (size_t i = 0; i < random_entries.size(); i++) {
      read_amp_slow_and_accurate.ResetCheckSequence();
      auto &current_entry = random_entries[rnd.Next() % random_entries.size()];

      read_amp_bitmap.Mark(static_cast<uint32_t>(current_entry.first),
                           static_cast<uint32_t>(current_entry.second));
      read_amp_slow_and_accurate.Mark(current_entry.first,
                                      current_entry.second);

      size_t total_bits = 0;
      for (size_t bit_idx = 0; bit_idx < needed_bits; bit_idx++) {
        total_bits += read_amp_slow_and_accurate.IsPinMarked(
            bit_idx * kBytesPerBit + pin_offset);
      }
      size_t expected_estimate_useful = total_bits * kBytesPerBit;
      size_t got_estimate_useful =
          stats->getTickerCount(READ_AMP_ESTIMATE_USEFUL_BYTES);
      ASSERT_EQ(expected_estimate_useful, got_estimate_useful);
    }
  }
  SyncPoint::GetInstance()->DisableProcessing();
  SyncPoint::GetInstance()->ClearAllCallBacks();
}

TEST_F(BlockTest, BlockWithReadAmpBitmap) {
  Random rnd(301);
  Options options = Options();

  std::vector<std::string> keys;
  std::vector<std::string> values;
  BlockBuilder builder(16);
  int num_records = 10000;

  GenerateRandomKVs(&keys, &values, 0, num_records, 1 /* step */);
  // add a bunch of records to a block
  for (int i = 0; i < num_records; i++) {
    builder.Add(keys[i], values[i]);
  }

  Slice rawblock = builder.Finish();
  const size_t kBytesPerBit = 8;
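
  // How the checks below judge accuracy (descriptive note): the test keeps an
  // exact tally, read_bytes / rawblock.size(), and compares it against the
  // bitmap's estimate, READ_AMP_ESTIMATE_USEFUL_BYTES /
  // READ_AMP_TOTAL_READ_BYTES. The assertions only require the two ratios to
  // agree within 1-2 percentage points, depending on the access pattern.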

  // Read the block sequentially using Next()
  {
    std::shared_ptr<Statistics> stats = ROCKSDB_NAMESPACE::CreateDBStatistics();

    // create block reader
    BlockContents contents;
    contents.data = rawblock;
    Block reader(std::move(contents), kBytesPerBit, stats.get());

    // read contents of block sequentially
    size_t read_bytes = 0;
    DataBlockIter *iter = reader.NewDataIterator(
        options.comparator, kDisableGlobalSequenceNumber, nullptr, stats.get());
    for (iter->SeekToFirst(); iter->Valid(); iter->Next()) {
      iter->value();
      read_bytes += iter->TEST_CurrentEntrySize();

      double semi_acc_read_amp =
          static_cast<double>(read_bytes) / rawblock.size();
      double read_amp = static_cast<double>(stats->getTickerCount(
                            READ_AMP_ESTIMATE_USEFUL_BYTES)) /
                        stats->getTickerCount(READ_AMP_TOTAL_READ_BYTES);

      // Error in read amplification will be less than 1% if we are reading
      // sequentially
      double error_pct = fabs(semi_acc_read_amp - read_amp) * 100;
      EXPECT_LT(error_pct, 1);
    }

    delete iter;
  }

  // Read the block sequentially using Seek()
  {
    std::shared_ptr<Statistics> stats = ROCKSDB_NAMESPACE::CreateDBStatistics();

    // create block reader
    BlockContents contents;
    contents.data = rawblock;
    Block reader(std::move(contents), kBytesPerBit, stats.get());

    size_t read_bytes = 0;
    DataBlockIter *iter = reader.NewDataIterator(
        options.comparator, kDisableGlobalSequenceNumber, nullptr, stats.get());
    for (int i = 0; i < num_records; i++) {
      Slice k(keys[i]);

      // search in block for this key
      iter->Seek(k);
      iter->value();
      read_bytes += iter->TEST_CurrentEntrySize();

      double semi_acc_read_amp =
          static_cast<double>(read_bytes) / rawblock.size();
      double read_amp = static_cast<double>(stats->getTickerCount(
                            READ_AMP_ESTIMATE_USEFUL_BYTES)) /
                        stats->getTickerCount(READ_AMP_TOTAL_READ_BYTES);

      // Error in read amplification will be less than 1% if we are reading
      // sequentially
      double error_pct = fabs(semi_acc_read_amp - read_amp) * 100;
      EXPECT_LT(error_pct, 1);
    }
    delete iter;
  }

  // Read the block randomly
  {
    std::shared_ptr<Statistics> stats = ROCKSDB_NAMESPACE::CreateDBStatistics();

    // create block reader
    BlockContents contents;
    contents.data = rawblock;
    Block reader(std::move(contents), kBytesPerBit, stats.get());

    size_t read_bytes = 0;
    DataBlockIter *iter = reader.NewDataIterator(
        options.comparator, kDisableGlobalSequenceNumber, nullptr, stats.get());
    std::unordered_set<int> read_keys;
    for (int i = 0; i < num_records; i++) {
      int index = rnd.Uniform(num_records);
      Slice k(keys[index]);

      iter->Seek(k);
      iter->value();
      if (read_keys.find(index) == read_keys.end()) {
        read_keys.insert(index);
        read_bytes += iter->TEST_CurrentEntrySize();
      }

      double semi_acc_read_amp =
          static_cast<double>(read_bytes) / rawblock.size();
      double read_amp = static_cast<double>(stats->getTickerCount(
                            READ_AMP_ESTIMATE_USEFUL_BYTES)) /
                        stats->getTickerCount(READ_AMP_TOTAL_READ_BYTES);

      double error_pct = fabs(semi_acc_read_amp - read_amp) * 100;
      // Error in read amplification will be less than 2% if we are reading
      // randomly
      EXPECT_LT(error_pct, 2);
    }
    delete iter;
  }
}

TEST_F(BlockTest, ReadAmpBitmapPow2) {
  std::shared_ptr<Statistics> stats = ROCKSDB_NAMESPACE::CreateDBStatistics();
  ASSERT_EQ(BlockReadAmpBitmap(100, 1, stats.get()).GetBytesPerBit(), 1u);
  ASSERT_EQ(BlockReadAmpBitmap(100, 2, stats.get()).GetBytesPerBit(), 2u);
  ASSERT_EQ(BlockReadAmpBitmap(100, 4, stats.get()).GetBytesPerBit(), 4u);
  ASSERT_EQ(BlockReadAmpBitmap(100, 8, stats.get()).GetBytesPerBit(), 8u);
  ASSERT_EQ(BlockReadAmpBitmap(100, 16, stats.get()).GetBytesPerBit(), 16u);
  ASSERT_EQ(BlockReadAmpBitmap(100, 32, stats.get()).GetBytesPerBit(), 32u);

  ASSERT_EQ(BlockReadAmpBitmap(100, 3, stats.get()).GetBytesPerBit(), 2u);
  ASSERT_EQ(BlockReadAmpBitmap(100, 7, stats.get()).GetBytesPerBit(), 4u);
  ASSERT_EQ(BlockReadAmpBitmap(100, 11, stats.get()).GetBytesPerBit(), 8u);
  ASSERT_EQ(BlockReadAmpBitmap(100, 17, stats.get()).GetBytesPerBit(), 16u);
  ASSERT_EQ(BlockReadAmpBitmap(100, 33, stats.get()).GetBytesPerBit(), 32u);
  ASSERT_EQ(BlockReadAmpBitmap(100, 35, stats.get()).GetBytesPerBit(), 32u);
}
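
// As the assertions above illustrate, a bytes-per-bit value that is not a
// power of two appears to be rounded down to the nearest lower power of two
// (e.g. 7 -> 4, 35 -> 32).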

class IndexBlockTest
    : public testing::Test,
      public testing::WithParamInterface<
          std::tuple<bool, bool, bool, test::UserDefinedTimestampTestMode>> {
 public:
  IndexBlockTest() = default;

  bool keyIncludesSeq() const { return std::get<0>(GetParam()); }
  bool useValueDeltaEncoding() const { return std::get<1>(GetParam()); }
  bool includeFirstKey() const { return std::get<2>(GetParam()); }
  bool isUDTEnabled() const {
    return test::IsUDTEnabled(std::get<3>(GetParam()));
  }
  bool shouldPersistUDT() const {
    return test::ShouldPersistUDT(std::get<3>(GetParam()));
  }
};
|
|
|
|
|
|
|
|
// Similar to GenerateRandomKVs but for index block contents.
|
|
|
|
void GenerateRandomIndexEntries(std::vector<std::string> *separators,
|
|
|
|
std::vector<BlockHandle> *block_handles,
|
|
|
|
std::vector<std::string> *first_keys,
|
2023-05-25 22:41:32 +00:00
|
|
|
const int len, size_t ts_sz = 0,
|
|
|
|
bool zero_seqno = false) {
|
2019-06-25 03:50:35 +00:00
|
|
|
Random rnd(42);
|
|
|
|
|
|
|
|
// For each of `len` blocks, we need to generate a first and last key.
|
|
|
|
  // Let's generate len * 2 random keys, sort them, and group them into consecutive pairs.
|
|
|
|
std::set<std::string> keys;
|
|
|
|
while ((int)keys.size() < len * 2) {
|
|
|
|
// Keys need to be at least 8 bytes long to look like internal keys.
|
2023-04-25 19:08:23 +00:00
|
|
|
std::string new_key = test::RandomKey(&rnd, 12);
|
|
|
|
if (zero_seqno) {
|
|
|
|
AppendInternalKeyFooter(&new_key, 0 /* seqno */, kTypeValue);
|
|
|
|
}
|
2023-05-25 22:41:32 +00:00
|
|
|
if (ts_sz > 0) {
|
|
|
|
std::string key;
|
|
|
|
PadInternalKeyWithMinTimestamp(&key, new_key, ts_sz);
|
|
|
|
keys.insert(std::move(key));
|
|
|
|
} else {
|
|
|
|
keys.insert(std::move(new_key));
|
|
|
|
}
|
2019-06-25 03:50:35 +00:00
|
|
|
}
|
|
|
|
|
|
|
|
uint64_t offset = 0;
|
|
|
|
for (auto it = keys.begin(); it != keys.end();) {
|
|
|
|
first_keys->emplace_back(*it++);
|
|
|
|
separators->emplace_back(*it++);
|
|
|
|
uint64_t size = rnd.Uniform(1024 * 16);
|
|
|
|
BlockHandle handle(offset, size);
|
2021-11-18 19:42:12 +00:00
|
|
|
offset += size + BlockBasedTable::kBlockTrailerSize;
|
2019-06-25 03:50:35 +00:00
|
|
|
block_handles->emplace_back(handle);
|
|
|
|
}
|
|
|
|
}
|
|
|
|
|
|
|
|
TEST_P(IndexBlockTest, IndexValueEncodingTest) {
|
|
|
|
Random rnd(301);
|
|
|
|
Options options = Options();
|
2023-05-25 22:41:32 +00:00
|
|
|
if (isUDTEnabled()) {
|
|
|
|
options.comparator = test::BytewiseComparatorWithU64TsWrapper();
|
|
|
|
}
|
|
|
|
size_t ts_sz = options.comparator->timestamp_size();
|
2019-06-25 03:50:35 +00:00
|
|
|
|
|
|
|
std::vector<std::string> separators;
|
|
|
|
std::vector<BlockHandle> block_handles;
|
|
|
|
std::vector<std::string> first_keys;
|
|
|
|
const bool kUseDeltaEncoding = true;
|
2023-05-25 22:41:32 +00:00
|
|
|
BlockBuilder builder(16, kUseDeltaEncoding, useValueDeltaEncoding(),
|
|
|
|
BlockBasedTableOptions::kDataBlockBinarySearch,
|
|
|
|
0.75 /* data_block_hash_table_util_ratio */, ts_sz,
|
|
|
|
shouldPersistUDT(), !keyIncludesSeq());
|
|
|
|
|
2019-06-25 03:50:35 +00:00
|
|
|
int num_records = 100;
|
|
|
|
|
|
|
|
GenerateRandomIndexEntries(&separators, &block_handles, &first_keys,
|
2023-05-25 22:41:32 +00:00
|
|
|
num_records, ts_sz, false /* zero_seqno */);
|
2019-06-25 03:50:35 +00:00
|
|
|
BlockHandle last_encoded_handle;
|
|
|
|
for (int i = 0; i < num_records; i++) {
|
2023-05-25 22:41:32 +00:00
|
|
|
std::string first_key_to_persist_buf;
|
|
|
|
Slice first_internal_key = first_keys[i];
|
|
|
|
if (ts_sz > 0 && !shouldPersistUDT()) {
|
|
|
|
StripTimestampFromInternalKey(&first_key_to_persist_buf, first_keys[i],
|
|
|
|
ts_sz);
|
|
|
|
first_internal_key = first_key_to_persist_buf;
|
|
|
|
}
|
|
|
|
IndexValue entry(block_handles[i], first_internal_key);
|
2019-06-25 03:50:35 +00:00
|
|
|
std::string encoded_entry;
|
|
|
|
std::string delta_encoded_entry;
|
|
|
|
entry.EncodeTo(&encoded_entry, includeFirstKey(), nullptr);
|
|
|
|
if (useValueDeltaEncoding() && i > 0) {
|
|
|
|
entry.EncodeTo(&delta_encoded_entry, includeFirstKey(),
|
|
|
|
&last_encoded_handle);
|
|
|
|
}
|
|
|
|
last_encoded_handle = entry.handle;
|
|
|
|
const Slice delta_encoded_entry_slice(delta_encoded_entry);
|
2023-05-25 22:41:32 +00:00
|
|
|
|
|
|
|
if (keyIncludesSeq()) {
|
|
|
|
builder.Add(separators[i], encoded_entry, &delta_encoded_entry_slice);
|
|
|
|
} else {
|
|
|
|
const Slice user_key = ExtractUserKey(separators[i]);
|
|
|
|
builder.Add(user_key, encoded_entry, &delta_encoded_entry_slice);
|
|
|
|
}
|
2019-06-25 03:50:35 +00:00
|
|
|
}
|
|
|
|
|
|
|
|
// read serialized contents of the block
|
|
|
|
Slice rawblock = builder.Finish();
|
|
|
|
|
|
|
|
// create block reader
|
|
|
|
BlockContents contents;
|
|
|
|
contents.data = rawblock;
|
2020-02-25 23:29:17 +00:00
|
|
|
Block reader(std::move(contents));
|
2019-06-25 03:50:35 +00:00
|
|
|
|
|
|
|
const bool kTotalOrderSeek = true;
|
|
|
|
IndexBlockIter *kNullIter = nullptr;
|
|
|
|
Statistics *kNullStats = nullptr;
|
|
|
|
// read contents of block sequentially
|
|
|
|
InternalIteratorBase<IndexValue> *iter = reader.NewIndexIterator(
|
2020-07-08 00:25:08 +00:00
|
|
|
options.comparator, kDisableGlobalSequenceNumber, kNullIter, kNullStats,
|
2023-05-25 22:41:32 +00:00
|
|
|
kTotalOrderSeek, includeFirstKey(), keyIncludesSeq(),
|
|
|
|
!useValueDeltaEncoding(), false /* block_contents_pinned */,
|
|
|
|
shouldPersistUDT());
|
2019-06-25 03:50:35 +00:00
|
|
|
iter->SeekToFirst();
|
|
|
|
for (int index = 0; index < num_records; ++index) {
|
|
|
|
ASSERT_TRUE(iter->Valid());
|
|
|
|
|
|
|
|
Slice k = iter->key();
|
|
|
|
IndexValue v = iter->value();
|
|
|
|
|
2023-05-25 22:41:32 +00:00
|
|
|
if (keyIncludesSeq()) {
|
|
|
|
EXPECT_EQ(separators[index], k.ToString());
|
|
|
|
} else {
|
|
|
|
const Slice user_key = ExtractUserKey(separators[index]);
|
|
|
|
EXPECT_EQ(user_key, k);
|
|
|
|
}
|
2019-06-25 03:50:35 +00:00
|
|
|
EXPECT_EQ(block_handles[index].offset(), v.handle.offset());
|
|
|
|
EXPECT_EQ(block_handles[index].size(), v.handle.size());
|
|
|
|
EXPECT_EQ(includeFirstKey() ? first_keys[index] : "",
|
|
|
|
v.first_internal_key.ToString());
|
|
|
|
|
|
|
|
iter->Next();
|
|
|
|
}
|
|
|
|
delete iter;
|
|
|
|
|
|
|
|
// read block contents randomly
|
2020-07-08 00:25:08 +00:00
|
|
|
iter = reader.NewIndexIterator(
|
|
|
|
options.comparator, kDisableGlobalSequenceNumber, kNullIter, kNullStats,
|
2023-05-25 22:41:32 +00:00
|
|
|
kTotalOrderSeek, includeFirstKey(), keyIncludesSeq(),
|
|
|
|
!useValueDeltaEncoding(), false /* block_contents_pinned */,
|
|
|
|
shouldPersistUDT());
|
2019-06-25 03:50:35 +00:00
|
|
|
for (int i = 0; i < num_records * 2; i++) {
|
|
|
|
// find a random key in the lookaside array
|
|
|
|
int index = rnd.Uniform(num_records);
|
|
|
|
Slice k(separators[index]);
|
|
|
|
|
|
|
|
// search in block for this key
|
|
|
|
iter->Seek(k);
|
|
|
|
ASSERT_TRUE(iter->Valid());
|
|
|
|
IndexValue v = iter->value();
|
2023-05-25 22:41:32 +00:00
|
|
|
if (keyIncludesSeq()) {
|
|
|
|
EXPECT_EQ(separators[index], iter->key().ToString());
|
|
|
|
} else {
|
|
|
|
const Slice user_key = ExtractUserKey(separators[index]);
|
|
|
|
EXPECT_EQ(user_key, iter->key());
|
|
|
|
}
|
2019-06-25 03:50:35 +00:00
|
|
|
EXPECT_EQ(block_handles[index].offset(), v.handle.offset());
|
|
|
|
EXPECT_EQ(block_handles[index].size(), v.handle.size());
|
|
|
|
EXPECT_EQ(includeFirstKey() ? first_keys[index] : "",
|
|
|
|
v.first_internal_key.ToString());
|
|
|
|
}
|
|
|
|
delete iter;
|
|
|
|
}
|
|
|
|
|
2023-05-25 22:41:32 +00:00
|
|
|
// Param 0: key includes sequence number (whether to use user key or internal
|
|
|
|
// key as key entry in index block).
|
|
|
|
// Param 1: use value delta encoding
|
|
|
|
// Param 2: include first key
|
|
|
|
// Param 3: user-defined timestamp test mode
|
|
|
|
INSTANTIATE_TEST_CASE_P(
|
|
|
|
P, IndexBlockTest,
|
|
|
|
::testing::Combine(::testing::Bool(), ::testing::Bool(), ::testing::Bool(),
|
|
|
|
::testing::ValuesIn(test::GetUDTTestModes())));
|
2019-06-25 03:50:35 +00:00
|
|
|
|
2023-04-25 19:08:23 +00:00
|
|
|
class BlockPerKVChecksumTest : public DBTestBase {
|
|
|
|
public:
|
|
|
|
BlockPerKVChecksumTest()
|
|
|
|
: DBTestBase("block_per_kv_checksum", /*env_do_fsync=*/false) {}
|
|
|
|
|
|
|
|
template <typename TBlockIter>
|
|
|
|
void TestIterateForward(std::unique_ptr<TBlockIter> &biter,
|
|
|
|
size_t &verification_count) {
|
|
|
|
while (biter->Valid()) {
|
|
|
|
verification_count = 0;
|
|
|
|
biter->Next();
|
|
|
|
if (biter->Valid()) {
|
|
|
|
ASSERT_GE(verification_count, 1);
|
|
|
|
}
|
|
|
|
}
|
|
|
|
}
|
|
|
|
|
|
|
|
template <typename TBlockIter>
|
|
|
|
void TestIterateBackward(std::unique_ptr<TBlockIter> &biter,
|
|
|
|
size_t &verification_count) {
|
|
|
|
while (biter->Valid()) {
|
|
|
|
verification_count = 0;
|
|
|
|
biter->Prev();
|
|
|
|
if (biter->Valid()) {
|
|
|
|
ASSERT_GE(verification_count, 1);
|
|
|
|
}
|
|
|
|
}
|
|
|
|
}
|
|
|
|
|
|
|
|
template <typename TBlockIter>
|
|
|
|
void TestSeekToFirst(std::unique_ptr<TBlockIter> &biter,
|
|
|
|
size_t &verification_count) {
|
|
|
|
verification_count = 0;
|
|
|
|
biter->SeekToFirst();
|
|
|
|
ASSERT_GE(verification_count, 1);
|
|
|
|
TestIterateForward(biter, verification_count);
|
|
|
|
}
|
|
|
|
|
|
|
|
template <typename TBlockIter>
|
|
|
|
void TestSeekToLast(std::unique_ptr<TBlockIter> &biter,
|
|
|
|
size_t &verification_count) {
|
|
|
|
verification_count = 0;
|
|
|
|
biter->SeekToLast();
|
|
|
|
ASSERT_GE(verification_count, 1);
|
|
|
|
TestIterateBackward(biter, verification_count);
|
|
|
|
}
|
|
|
|
|
|
|
|
template <typename TBlockIter>
|
|
|
|
void TestSeekForPrev(std::unique_ptr<TBlockIter> &biter,
|
|
|
|
size_t &verification_count, std::string k) {
|
|
|
|
verification_count = 0;
|
|
|
|
biter->SeekForPrev(k);
|
|
|
|
ASSERT_GE(verification_count, 1);
|
|
|
|
TestIterateBackward(biter, verification_count);
|
|
|
|
}
|
|
|
|
|
|
|
|
template <typename TBlockIter>
|
|
|
|
void TestSeek(std::unique_ptr<TBlockIter> &biter, size_t &verification_count,
|
|
|
|
std::string k) {
|
|
|
|
verification_count = 0;
|
|
|
|
biter->Seek(k);
|
|
|
|
ASSERT_GE(verification_count, 1);
|
|
|
|
TestIterateForward(biter, verification_count);
|
|
|
|
}
|
|
|
|
|
|
|
|
bool VerifyChecksum(uint32_t checksum_len, const char *checksum_ptr,
|
|
|
|
const Slice &key, const Slice &val) {
|
|
|
|
if (!checksum_len) {
|
|
|
|
return checksum_ptr == nullptr;
|
|
|
|
}
|
|
|
|
return ProtectionInfo64().ProtectKV(key, val).Verify(
|
|
|
|
static_cast<uint8_t>(checksum_len), checksum_ptr);
|
|
|
|
}
|
|
|
|
};
|
|
|
|
|
|
|
|
TEST_F(BlockPerKVChecksumTest, EmptyBlock) {
|
|
|
|
// Tests that empty block code path is not broken by per kv checksum.
|
|
|
|
BlockBuilder builder(
|
|
|
|
16 /* block_restart_interval */, true /* use_delta_encoding */,
|
|
|
|
false /* use_value_delta_encoding */,
|
|
|
|
BlockBasedTableOptions::DataBlockIndexType::kDataBlockBinarySearch);
|
|
|
|
Slice raw_block = builder.Finish();
|
|
|
|
BlockContents contents;
|
|
|
|
contents.data = raw_block;
|
|
|
|
|
|
|
|
std::unique_ptr<Block_kData> data_block;
|
|
|
|
Options options = Options();
|
|
|
|
BlockBasedTableOptions tbo;
|
|
|
|
uint8_t protection_bytes_per_key = 8;
|
Support compressed and local flash secondary cache stacking (#11812)
Summary:
This PR implements support for a three-tier cache: a primary block cache, a compressed secondary cache, and an nvm (local flash) secondary cache. This allows more effective utilization of the nvm cache and minimizes the number of reads from local flash by caching compressed blocks in the compressed secondary cache.
The basic design is as follows:
1. A new secondary cache implementation, ```TieredSecondaryCache```, is introduced. It keeps the compressed and nvm secondary caches and manages the movement of blocks between them and the primary block cache. To set up a three-tier cache, we allocate a ```CacheWithSecondaryAdapter``` with a ```TieredSecondaryCache``` instance as the secondary cache.
2. The table reader passes both the uncompressed and compressed block to ```FullTypedCacheInterface::InsertFull```, allowing the block cache to optionally store the compressed block.
3. When there's a miss, the block object is constructed and inserted into the primary cache, and the compressed block is inserted into the nvm cache by calling ```InsertSaved```. This avoids the overhead of recompressing the block and avoids putting additional memory pressure on the compressed secondary cache.
4. When there's a hit in the nvm cache, we attempt to insert the block into the compressed secondary cache and the primary cache, subject to the admission policy of those caches (i.e. admit on second access). Blocks/items evicted from any tier are simply discarded.
We can easily implement additional admission policies if desired.
Todo (In a subsequent PR):
1. Add to db_bench and run benchmarks
2. Add to db_stress
Pull Request resolved: https://github.com/facebook/rocksdb/pull/11812
Reviewed By: pdillinger
Differential Revision: D49461842
Pulled By: anand1976
fbshipit-source-id: b40ac1330ef7cd8c12efa0a3ca75128e602e3a0b
2023-09-22 03:30:53 +00:00
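  // Hedged editorial sketch (not used by this test; it assumes the public
  // cache API in rocksdb/cache.h rather than the internal TieredSecondaryCache
  // class named above): the two-tier building block that the three-tier design
  // extends looks roughly like
  //
  //   LRUCacheOptions primary_opts;
  //   primary_opts.capacity = 64 << 20;  // uncompressed primary block cache
  //   CompressedSecondaryCacheOptions compressed_opts;
  //   compressed_opts.capacity = 256 << 20;  // compressed second tier
  //   primary_opts.secondary_cache =
  //       NewCompressedSecondaryCache(compressed_opts);
  //   BlockBasedTableOptions bbto;
  //   bbto.block_cache = NewLRUCache(primary_opts);
  //
  // with the nvm tier added behind the compressed tier by the adapter classes
  // described in the commit message.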
|
|
|
BlockCreateContext create_context{&tbo,
|
|
|
|
nullptr,
|
|
|
|
nullptr /* statistics */,
|
|
|
|
false /* using_zstd */,
|
|
|
|
protection_bytes_per_key,
|
|
|
|
options.comparator};
|
2023-04-25 19:08:23 +00:00
|
|
|
create_context.Create(&data_block, std::move(contents));
|
|
|
|
std::unique_ptr<DataBlockIter> biter{data_block->NewDataIterator(
|
|
|
|
options.comparator, kDisableGlobalSequenceNumber)};
|
|
|
|
biter->SeekToFirst();
|
|
|
|
ASSERT_FALSE(biter->Valid());
|
|
|
|
ASSERT_OK(biter->status());
|
|
|
|
Random rnd(33);
|
|
|
|
biter->SeekForGet(GenerateInternalKey(1, 1, 10, &rnd));
|
|
|
|
ASSERT_FALSE(biter->Valid());
|
|
|
|
ASSERT_OK(biter->status());
|
|
|
|
biter->SeekToLast();
|
|
|
|
ASSERT_FALSE(biter->Valid());
|
|
|
|
ASSERT_OK(biter->status());
|
|
|
|
biter->Seek(GenerateInternalKey(1, 1, 10, &rnd));
|
|
|
|
ASSERT_FALSE(biter->Valid());
|
|
|
|
ASSERT_OK(biter->status());
|
|
|
|
biter->SeekForPrev(GenerateInternalKey(1, 1, 10, &rnd));
|
|
|
|
ASSERT_FALSE(biter->Valid());
|
|
|
|
ASSERT_OK(biter->status());
|
|
|
|
}
|
|
|
|
|
|
|
|
TEST_F(BlockPerKVChecksumTest, UnsupportedOptionValue) {
|
|
|
|
Options options = Options();
|
|
|
|
options.block_protection_bytes_per_key = 128;
|
|
|
|
Destroy(options);
|
|
|
|
ASSERT_TRUE(TryReopen(options).IsNotSupported());
|
|
|
|
}
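// Hedged editorial note: the helper below is illustrative only and is not
// referenced by the tests; the accepted values come from the documented
// contract of block_protection_bytes_per_key, which supports only 0
// (disabled), 1, 2, 4, or 8 bytes per key. That is why the 128 used above is
// rejected as NotSupported.
[[maybe_unused]] static Options ExamplePerKVChecksumOptions() {
  Options options;
  options.block_protection_bytes_per_key = 8;  // a supported value: 0/1/2/4/8
  return options;
}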
|
|
|
|
|
|
|
|
TEST_F(BlockPerKVChecksumTest, InitializeProtectionInfo) {
|
|
|
|
// Make sure that the checksum construction code path does not break
|
|
|
|
// when the block is itself already corrupted.
|
|
|
|
Options options = Options();
|
|
|
|
BlockBasedTableOptions tbo;
|
|
|
|
uint8_t protection_bytes_per_key = 8;
|
2023-09-22 03:30:53 +00:00
|
|
|
  BlockCreateContext create_context{&tbo,
                                    nullptr /* ioptions */,
                                    nullptr /* statistics */,
                                    false /* using_zstd */,
                                    protection_bytes_per_key,
                                    options.comparator};

  {
    std::string invalid_content = "1";
    Slice raw_block = invalid_content;
    BlockContents contents;
    contents.data = raw_block;
    std::unique_ptr<Block_kData> data_block;
    create_context.Create(&data_block, std::move(contents));
    std::unique_ptr<DataBlockIter> iter{data_block->NewDataIterator(
        options.comparator, kDisableGlobalSequenceNumber)};
    ASSERT_TRUE(iter->status().IsCorruption());
  }
  {
    std::string invalid_content = "1";
    Slice raw_block = invalid_content;
    BlockContents contents;
    contents.data = raw_block;
    std::unique_ptr<Block_kIndex> index_block;
    create_context.Create(&index_block, std::move(contents));
    std::unique_ptr<IndexBlockIter> iter{index_block->NewIndexIterator(
        options.comparator, kDisableGlobalSequenceNumber, nullptr, nullptr,
        true, false, true, true)};
    ASSERT_TRUE(iter->status().IsCorruption());
  }
  {
    std::string invalid_content = "1";
    Slice raw_block = invalid_content;
    BlockContents contents;
    contents.data = raw_block;
    std::unique_ptr<Block_kMetaIndex> meta_block;
    create_context.Create(&meta_block, std::move(contents));
    std::unique_ptr<MetaBlockIter> iter{meta_block->NewMetaIterator(true)};
    ASSERT_TRUE(iter->status().IsCorruption());
  }
}
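
// With 20 records and 8 checksum bytes per key, the per-KV checksum array adds
// roughly 160 bytes, so requiring a difference of more than 100 bytes leaves
// slack for allocator rounding while still catching a missing accounting of
// the checksum buffer.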
TEST_F(BlockPerKVChecksumTest, ApproximateMemory) {
  // Tests that ApproximateMemoryUsage() includes memory used by block kv
  // checksum.
  const int kNumRecords = 20;
  std::vector<std::string> keys;
  std::vector<std::string> values;
  GenerateRandomKVs(&keys, &values, 0, kNumRecords, 1 /* step */,
                    24 /* padding_size */);
  std::unique_ptr<BlockBuilder> builder;
  auto generate_block_content = [&]() {
    builder = std::make_unique<BlockBuilder>(16 /* restart_interval */);
    for (int i = 0; i < kNumRecords; ++i) {
      builder->Add(keys[i], values[i]);
    }
    Slice raw_block = builder->Finish();
    BlockContents contents;
    contents.data = raw_block;
    return contents;
  };

  Options options = Options();
  BlockBasedTableOptions tbo;
  uint8_t protection_bytes_per_key = 8;
  BlockCreateContext with_checksum_create_context{
      &tbo,
      nullptr /* ioptions */,
      nullptr /* statistics */,
      false /* using_zstd */,
      protection_bytes_per_key,
      options.comparator,
      true /* index_value_is_full */};
  BlockCreateContext create_context{&tbo,
                                    nullptr /* ioptions */,
                                    nullptr /* statistics */,
                                    false /* using_zstd */,
                                    0,
                                    options.comparator,
                                    true /* index_value_is_full */};

  {
    std::unique_ptr<Block_kData> data_block;
    create_context.Create(&data_block, generate_block_content());
    size_t block_memory = data_block->ApproximateMemoryUsage();
    std::unique_ptr<Block_kData> with_checksum_data_block;
    with_checksum_create_context.Create(&with_checksum_data_block,
                                        generate_block_content());
    ASSERT_GT(with_checksum_data_block->ApproximateMemoryUsage() - block_memory,
              100);
  }

  {
    std::unique_ptr<Block_kData> meta_block;
    create_context.Create(&meta_block, generate_block_content());
    size_t block_memory = meta_block->ApproximateMemoryUsage();
    std::unique_ptr<Block_kData> with_checksum_meta_block;
    with_checksum_create_context.Create(&with_checksum_meta_block,
                                        generate_block_content());
    // Rough comparison to avoid flaky test due to memory allocation alignment.
    ASSERT_GT(with_checksum_meta_block->ApproximateMemoryUsage() - block_memory,
              100);
  }

  {
    // Index block has different contents.
    std::vector<std::string> separators;
    std::vector<BlockHandle> block_handles;
    std::vector<std::string> first_keys;
    GenerateRandomIndexEntries(&separators, &block_handles, &first_keys,
                               kNumRecords);
    auto generate_index_content = [&]() {
      builder = std::make_unique<BlockBuilder>(16 /* restart_interval */);
      BlockHandle last_encoded_handle;
      for (int i = 0; i < kNumRecords; ++i) {
        IndexValue entry(block_handles[i], first_keys[i]);
        std::string encoded_entry;
        std::string delta_encoded_entry;
        entry.EncodeTo(&encoded_entry, false, nullptr);
        last_encoded_handle = entry.handle;
        const Slice delta_encoded_entry_slice(delta_encoded_entry);
        builder->Add(separators[i], encoded_entry, &delta_encoded_entry_slice);
      }
      Slice raw_block = builder->Finish();
      BlockContents contents;
      contents.data = raw_block;
      return contents;
    };

    std::unique_ptr<Block_kIndex> index_block;
    create_context.Create(&index_block, generate_index_content());
    size_t block_memory = index_block->ApproximateMemoryUsage();
    std::unique_ptr<Block_kIndex> with_checksum_index_block;
    with_checksum_create_context.Create(&with_checksum_index_block,
                                        generate_index_content());
    ASSERT_GT(
        with_checksum_index_block->ApproximateMemoryUsage() - block_memory,
        100);
  }
}
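
// Helper used by the INSTANTIATE_TEST_CASE_P name generators below to turn the
// data-block index type into a readable suffix for parameterized test names.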
std::string GetDataBlockIndexTypeStr(
    BlockBasedTableOptions::DataBlockIndexType t) {
  return t == BlockBasedTableOptions::DataBlockIndexType::kDataBlockBinarySearch
             ? "BinarySearch"
             : "BinaryAndHash";
}
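
// Parameterized over data-block index type, block_protection_bytes_per_key,
// restart interval and whether key delta encoding is used, so per-KV checksum
// coverage is exercised across the data block layouts the builder can produce.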
class DataBlockKVChecksumTest
    : public BlockPerKVChecksumTest,
      public testing::WithParamInterface<std::tuple<
          BlockBasedTableOptions::DataBlockIndexType,
          uint8_t /* block_protection_bytes_per_key */,
          uint32_t /* restart_interval */, bool /* use_delta_encoding */>> {
 public:
  DataBlockKVChecksumTest() = default;

  BlockBasedTableOptions::DataBlockIndexType GetDataBlockIndexType() const {
    return std::get<0>(GetParam());
  }
  uint8_t GetChecksumLen() const { return std::get<1>(GetParam()); }
  uint32_t GetRestartInterval() const { return std::get<2>(GetParam()); }
  bool GetUseDeltaEncoding() const { return std::get<3>(GetParam()); }

  std::unique_ptr<Block_kData> GenerateDataBlock(
      std::vector<std::string> &keys, std::vector<std::string> &values,
      int num_record) {
    BlockBasedTableOptions tbo;
    BlockCreateContext create_context{&tbo,
                                      nullptr /* ioptions */,
                                      nullptr /* statistics */,
                                      false /* using_zstd */,
                                      GetChecksumLen(),
                                      Options().comparator};
    builder_ = std::make_unique<BlockBuilder>(
        static_cast<int>(GetRestartInterval()),
        GetUseDeltaEncoding() /* use_delta_encoding */,
        false /* use_value_delta_encoding */, GetDataBlockIndexType());
    for (int i = 0; i < num_record; i++) {
      builder_->Add(keys[i], values[i]);
    }
    Slice raw_block = builder_->Finish();
    BlockContents contents;
    contents.data = raw_block;
    std::unique_ptr<Block_kData> data_block;
    create_context.Create(&data_block, std::move(contents));
    return data_block;
  }

  std::unique_ptr<BlockBuilder> builder_;
};

INSTANTIATE_TEST_CASE_P(
    P, DataBlockKVChecksumTest,
    ::testing::Combine(
        ::testing::Values(
            BlockBasedTableOptions::DataBlockIndexType::kDataBlockBinarySearch,
            BlockBasedTableOptions::DataBlockIndexType::
                kDataBlockBinaryAndHash),
        ::testing::Values(0, 1, 2, 4, 8) /* protection_bytes_per_key */,
        ::testing::Values(1, 2, 3, 8, 16) /* restart_interval */,
        ::testing::Values(false, true)) /* delta_encoding */,
    [](const testing::TestParamInfo<std::tuple<
           BlockBasedTableOptions::DataBlockIndexType, uint8_t, uint32_t, bool>>
           &args) {
      std::ostringstream oss;
      oss << GetDataBlockIndexTypeStr(std::get<0>(args.param))
          << "ProtectionPerKey" << std::to_string(std::get<1>(args.param))
          << "RestartInterval" << std::to_string(std::get<2>(args.param))
          << "DeltaEncode" << std::to_string(std::get<3>(args.param));
      return oss.str();
    });
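
// Verifies that a freshly built data block carries one checksum of the
// configured length per KV, that the stored checksums match the keys and
// values that were added, and (via the Block::VerifyChecksum::checksum_len
// sync point) that every iterator positioning operation goes through checksum
// verification.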
TEST_P(DataBlockKVChecksumTest, ChecksumConstructionAndVerification) {
  uint8_t protection_bytes_per_key = GetChecksumLen();
  std::vector<int> num_restart_intervals = {1, 16};
  for (const auto num_restart_interval : num_restart_intervals) {
    const int kNumRecords =
        num_restart_interval * static_cast<int>(GetRestartInterval());
    std::vector<std::string> keys;
    std::vector<std::string> values;
    GenerateRandomKVs(&keys, &values, 0, kNumRecords + 1, 1 /* step */,
                      24 /* padding_size */);
    SyncPoint::GetInstance()->DisableProcessing();
    std::unique_ptr<Block_kData> data_block =
        GenerateDataBlock(keys, values, kNumRecords);

    const char *checksum_ptr = data_block->TEST_GetKVChecksum();
    // Check that checksums of the correct length are generated.
    for (int i = 0; i < kNumRecords; i++) {
      ASSERT_TRUE(VerifyChecksum(protection_bytes_per_key,
                                 checksum_ptr + i * protection_bytes_per_key,
                                 keys[i], values[i]));
    }
    std::vector<SequenceNumber> seqnos{kDisableGlobalSequenceNumber, 0};

    // Could just use a boolean flag. Use a counter here just to keep open the
    // possibility of checking the exact number of verifications in the future.
    size_t verification_count = 0;
    // The SyncPoint is placed before checking checksum_len == 0 in
    // Block::VerifyChecksum(). So verification count is incremented even with
    // protection_bytes_per_key = 0. No actual checksum computation is done in
    // that case (see Block::VerifyChecksum()).
    SyncPoint::GetInstance()->SetCallBack(
        "Block::VerifyChecksum::checksum_len",
        [&verification_count, protection_bytes_per_key](void *checksum_len) {
          ASSERT_EQ((*static_cast<uint8_t *>(checksum_len)),
                    protection_bytes_per_key);
          ++verification_count;
        });
    SyncPoint::GetInstance()->EnableProcessing();

    for (const auto seqno : seqnos) {
      std::unique_ptr<DataBlockIter> biter{
          data_block->NewDataIterator(Options().comparator, seqno)};

      // SeekForGet() some key that does not exist
      biter->SeekForGet(keys[kNumRecords]);
      TestIterateForward(biter, verification_count);

      verification_count = 0;
      biter->SeekForGet(keys[kNumRecords / 2]);
      ASSERT_GE(verification_count, 1);
      TestIterateForward(biter, verification_count);

      TestSeekToFirst(biter, verification_count);
      TestSeekToLast(biter, verification_count);
      TestSeekForPrev(biter, verification_count, keys[kNumRecords / 2]);
      TestSeek(biter, verification_count, keys[kNumRecords / 2]);
    }
  }
}
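
// Same idea as DataBlockKVChecksumTest, but for index blocks: the extra
// parameters control whether index values are delta encoded and whether the
// first key of each data block is embedded in the index entry.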
class IndexBlockKVChecksumTest
    : public BlockPerKVChecksumTest,
      public testing::WithParamInterface<
          std::tuple<BlockBasedTableOptions::DataBlockIndexType, uint8_t,
                     uint32_t, bool, bool>> {
 public:
  IndexBlockKVChecksumTest() = default;

  BlockBasedTableOptions::DataBlockIndexType GetDataBlockIndexType() const {
    return std::get<0>(GetParam());
  }
  uint8_t GetChecksumLen() const { return std::get<1>(GetParam()); }
  uint32_t GetRestartInterval() const { return std::get<2>(GetParam()); }
  bool UseValueDeltaEncoding() const { return std::get<3>(GetParam()); }
  bool IncludeFirstKey() const { return std::get<4>(GetParam()); }

  std::unique_ptr<Block_kIndex> GenerateIndexBlock(
      std::vector<std::string> &separators,
      std::vector<BlockHandle> &block_handles,
      std::vector<std::string> &first_keys, int num_record) {
    Options options = Options();
    BlockBasedTableOptions tbo;
    uint8_t protection_bytes_per_key = GetChecksumLen();
    BlockCreateContext create_context{
        &tbo,
        nullptr /* ioptions */,
        nullptr /* statistics */,
        false /* _using_zstd */,
        protection_bytes_per_key,
        options.comparator,
        !UseValueDeltaEncoding() /* value_is_full */,
        IncludeFirstKey()};
    builder_ = std::make_unique<BlockBuilder>(
        static_cast<int>(GetRestartInterval()), true /* use_delta_encoding */,
        UseValueDeltaEncoding() /* use_value_delta_encoding */,
        GetDataBlockIndexType());
    BlockHandle last_encoded_handle;
    for (int i = 0; i < num_record; i++) {
      IndexValue entry(block_handles[i], first_keys[i]);
      std::string encoded_entry;
      std::string delta_encoded_entry;
      entry.EncodeTo(&encoded_entry, IncludeFirstKey(), nullptr);
      if (UseValueDeltaEncoding() && i > 0) {
        entry.EncodeTo(&delta_encoded_entry, IncludeFirstKey(),
                       &last_encoded_handle);
      }

      last_encoded_handle = entry.handle;
      const Slice delta_encoded_entry_slice(delta_encoded_entry);
      builder_->Add(separators[i], encoded_entry, &delta_encoded_entry_slice);
    }
    // read serialized contents of the block
    Slice raw_block = builder_->Finish();
    // create block reader
    BlockContents contents;
    contents.data = raw_block;
    std::unique_ptr<Block_kIndex> index_block;

    create_context.Create(&index_block, std::move(contents));
    return index_block;
  }

  std::unique_ptr<BlockBuilder> builder_;
};

INSTANTIATE_TEST_CASE_P(
    P, IndexBlockKVChecksumTest,
    ::testing::Combine(
        ::testing::Values(
            BlockBasedTableOptions::DataBlockIndexType::kDataBlockBinarySearch,
            BlockBasedTableOptions::DataBlockIndexType::
                kDataBlockBinaryAndHash),
        ::testing::Values(0, 1, 2, 4, 8), ::testing::Values(1, 3, 8, 16),
        ::testing::Values(true, false), ::testing::Values(true, false)),
    [](const testing::TestParamInfo<
        std::tuple<BlockBasedTableOptions::DataBlockIndexType, uint8_t,
                   uint32_t, bool, bool>> &args) {
      std::ostringstream oss;
      oss << GetDataBlockIndexTypeStr(std::get<0>(args.param)) << "ProtBytes"
          << std::to_string(std::get<1>(args.param)) << "RestartInterval"
          << std::to_string(std::get<2>(args.param)) << "ValueDeltaEncode"
          << std::to_string(std::get<3>(args.param)) << "IncludeFirstKey"
          << std::to_string(std::get<4>(args.param));
      return oss.str();
    });
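
// Unlike the data block test, the expected checksum input for an index entry
// is taken from the iterator itself (key() and raw_value()), since the value
// that is actually persisted depends on delta encoding and restart points.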
TEST_P(IndexBlockKVChecksumTest, ChecksumConstructionAndVerification) {
  Options options = Options();
  uint8_t protection_bytes_per_key = GetChecksumLen();
  std::vector<int> num_restart_intervals = {1, 16};
  std::vector<SequenceNumber> seqnos{kDisableGlobalSequenceNumber, 10001};

  for (const auto num_restart_interval : num_restart_intervals) {
    const int kNumRecords =
        num_restart_interval * static_cast<int>(GetRestartInterval());
    for (const auto seqno : seqnos) {
      std::vector<std::string> separators;
      std::vector<BlockHandle> block_handles;
      std::vector<std::string> first_keys;
      GenerateRandomIndexEntries(&separators, &block_handles, &first_keys,
                                 kNumRecords, 0 /* ts_sz */,
                                 seqno != kDisableGlobalSequenceNumber);
      SyncPoint::GetInstance()->DisableProcessing();
      std::unique_ptr<Block_kIndex> index_block = GenerateIndexBlock(
          separators, block_handles, first_keys, kNumRecords);
      IndexBlockIter *kNullIter = nullptr;
      Statistics *kNullStats = nullptr;
      // read contents of block sequentially
      std::unique_ptr<IndexBlockIter> biter{index_block->NewIndexIterator(
          options.comparator, seqno, kNullIter, kNullStats,
          true /* total_order_seek */, IncludeFirstKey() /* have_first_key */,
          true /* key_includes_seq */,
          !UseValueDeltaEncoding() /* value_is_full */,
          true /* block_contents_pinned */,
          true /* user_defined_timestamps_persisted */,
          nullptr /* prefix_index */)};
      biter->SeekToFirst();
      const char *checksum_ptr = index_block->TEST_GetKVChecksum();
      // Check that checksums of the correct length are generated.
      for (int i = 0; i < kNumRecords; i++) {
        // Obtaining the actual content written as value to index block is not
        // trivial: delta-encoded value is only persisted when not at block
        // restart point and that keys share some byte (see more in
        // BlockBuilder::AddWithLastKeyImpl()). So here we just do verification
        // using value from iterator unlike tests for DataBlockIter or
        // MetaBlockIter.
        ASSERT_TRUE(VerifyChecksum(protection_bytes_per_key,
                                   checksum_ptr + i * protection_bytes_per_key,
                                   biter->key(), biter->raw_value()));
        biter->Next();
      }

      size_t verification_count = 0;
      // The SyncPoint is placed before checking checksum_len == 0 in
      // Block::VerifyChecksum(). To make the testing code below simpler and
      // not have to differentiate 0 vs non-0 checksum_len, we do an explicit
      // assert on checksum_len here.
      SyncPoint::GetInstance()->SetCallBack(
          "Block::VerifyChecksum::checksum_len",
          [&verification_count, protection_bytes_per_key](void *checksum_len) {
            ASSERT_EQ((*static_cast<uint8_t *>(checksum_len)),
                      protection_bytes_per_key);
            ++verification_count;
          });
      SyncPoint::GetInstance()->EnableProcessing();

      TestSeekToFirst(biter, verification_count);
      TestSeekToLast(biter, verification_count);
      TestSeek(biter, verification_count, first_keys[kNumRecords / 2]);
    }
  }
}
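
// Metaindex blocks are built with a restart interval of 1, so that is the only
// restart interval exercised here; the test is parameterized solely on
// block_protection_bytes_per_key.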
class MetaIndexBlockKVChecksumTest
    : public BlockPerKVChecksumTest,
      public testing::WithParamInterface<
          uint8_t /* block_protection_bytes_per_key */> {
 public:
  MetaIndexBlockKVChecksumTest() = default;
  uint8_t GetChecksumLen() const { return GetParam(); }
  uint32_t GetRestartInterval() const { return 1; }

  std::unique_ptr<Block_kMetaIndex> GenerateMetaIndexBlock(
      std::vector<std::string> &keys, std::vector<std::string> &values,
      int num_record) {
    Options options = Options();
    BlockBasedTableOptions tbo;
    uint8_t protection_bytes_per_key = GetChecksumLen();
    BlockCreateContext create_context{&tbo,
                                      nullptr /* ioptions */,
                                      nullptr /* statistics */,
                                      false /* using_zstd */,
                                      protection_bytes_per_key,
                                      options.comparator};
    builder_ =
        std::make_unique<BlockBuilder>(static_cast<int>(GetRestartInterval()));
    // add a bunch of records to a block
    for (int i = 0; i < num_record; i++) {
      builder_->Add(keys[i], values[i]);
    }
    Slice raw_block = builder_->Finish();
    BlockContents contents;
    contents.data = raw_block;
    std::unique_ptr<Block_kMetaIndex> meta_block;
    create_context.Create(&meta_block, std::move(contents));
    return meta_block;
  }

  std::unique_ptr<BlockBuilder> builder_;
};

INSTANTIATE_TEST_CASE_P(P, MetaIndexBlockKVChecksumTest,
                        ::testing::Values(0, 1, 2, 4, 8),
                        [](const testing::TestParamInfo<uint8_t> &args) {
                          std::ostringstream oss;
                          oss << "ProtBytes" << std::to_string(args.param);
                          return oss.str();
                        });

TEST_P(MetaIndexBlockKVChecksumTest, ChecksumConstructionAndVerification) {
  Options options = Options();
  BlockBasedTableOptions tbo;
  uint8_t protection_bytes_per_key = GetChecksumLen();
  BlockCreateContext create_context{&tbo,
                                    nullptr /* ioptions */,
                                    nullptr /* statistics */,
                                    false /* using_zstd */,
                                    protection_bytes_per_key,
                                    options.comparator};
  std::vector<int> num_restart_intervals = {1, 16};
  for (const auto num_restart_interval : num_restart_intervals) {
    const int kNumRecords = num_restart_interval * GetRestartInterval();
    std::vector<std::string> keys;
    std::vector<std::string> values;
    GenerateRandomKVs(&keys, &values, 0, kNumRecords + 1, 1 /* step */,
                      24 /* padding_size */);
    SyncPoint::GetInstance()->DisableProcessing();
    std::unique_ptr<Block_kMetaIndex> meta_block =
        GenerateMetaIndexBlock(keys, values, kNumRecords);
    const char *checksum_ptr = meta_block->TEST_GetKVChecksum();
    // Check that checksums of the correct length are generated.
    for (int i = 0; i < kNumRecords; i++) {
      ASSERT_TRUE(VerifyChecksum(protection_bytes_per_key,
                                 checksum_ptr + i * protection_bytes_per_key,
                                 keys[i], values[i]));
    }

    size_t verification_count = 0;
    // The SyncPoint is placed before checking checksum_len == 0 in
    // Block::VerifyChecksum(). To make the testing code below simpler and not
    // have to differentiate 0 vs non-0 checksum_len, we do an explicit assert
    // on checksum_len here.
    SyncPoint::GetInstance()->SetCallBack(
        "Block::VerifyChecksum::checksum_len",
        [&verification_count, protection_bytes_per_key](void *checksum_len) {
          ASSERT_EQ((*static_cast<uint8_t *>(checksum_len)),
                    protection_bytes_per_key);
          ++verification_count;
        });
    SyncPoint::GetInstance()->EnableProcessing();
    // Check that block iterator does checksum verification
    std::unique_ptr<MetaBlockIter> biter{
        meta_block->NewMetaIterator(true /* block_contents_pinned */)};
    TestSeekToFirst(biter, verification_count);
    TestSeekToLast(biter, verification_count);
    TestSeek(biter, verification_count, keys[kNumRecords / 2]);
    TestSeekForPrev(biter, verification_count, keys[kNumRecords / 2]);
  }
}
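
// The corruption tests below inject a single-byte flip through the
// "BlockIter::UpdateKey::value" sync point, i.e. the value handed back by the
// iterator is corrupted after the block was built, so every positioning
// operation is expected to surface a Corruption status from per-KV checksum
// verification.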
class DataBlockKVChecksumCorruptionTest : public DataBlockKVChecksumTest {
 public:
  DataBlockKVChecksumCorruptionTest() = default;

  std::unique_ptr<DataBlockIter> GenerateDataBlockIter(
      std::vector<std::string> &keys, std::vector<std::string> &values,
      int num_record) {
    // During Block construction, we may create a block iter to initialize the
    // per-kv checksums. Disable syncpoints that would be triggered by block
    // iter methods during that phase.
    SyncPoint::GetInstance()->DisableProcessing();
    block_ = GenerateDataBlock(keys, values, num_record);
    std::unique_ptr<DataBlockIter> biter{block_->NewDataIterator(
        Options().comparator, kDisableGlobalSequenceNumber)};
    SyncPoint::GetInstance()->EnableProcessing();
    return biter;
  }

 protected:
  std::unique_ptr<Block_kData> block_;
};

TEST_P(DataBlockKVChecksumCorruptionTest, CorruptEntry) {
  std::vector<int> num_restart_intervals = {1, 3};
  for (const auto num_restart_interval : num_restart_intervals) {
    const int kNumRecords =
        num_restart_interval * static_cast<int>(GetRestartInterval());
    std::vector<std::string> keys;
    std::vector<std::string> values;
    GenerateRandomKVs(&keys, &values, 0, kNumRecords + 1, 1 /* step */,
                      24 /* padding_size */);
    SyncPoint::GetInstance()->SetCallBack(
        "BlockIter::UpdateKey::value", [](void *arg) {
          char *value = static_cast<char *>(arg);
          // values generated by GenerateRandomKVs are of length 100
          ++value[10];
        });

    // Purely for reducing the number of lines of code.
    typedef std::unique_ptr<DataBlockIter> IterPtr;
    typedef void(IterAPI)(IterPtr &iter, std::string &);

    std::string seek_key = keys[kNumRecords / 2];
    auto test_seek = [&](IterAPI iter_api) {
      IterPtr biter = GenerateDataBlockIter(keys, values, kNumRecords);
      ASSERT_OK(biter->status());
      iter_api(biter, seek_key);
      ASSERT_FALSE(biter->Valid());
      ASSERT_TRUE(biter->status().IsCorruption());
    };

    test_seek([](IterPtr &iter, std::string &) { iter->SeekToFirst(); });
    test_seek([](IterPtr &iter, std::string &) { iter->SeekToLast(); });
    test_seek([](IterPtr &iter, std::string &k) { iter->Seek(k); });
    test_seek([](IterPtr &iter, std::string &k) { iter->SeekForPrev(k); });
    test_seek([](IterPtr &iter, std::string &k) { iter->SeekForGet(k); });

    typedef void (DataBlockIter::*IterStepAPI)();
    auto test_step = [&](IterStepAPI iter_api, std::string &k) {
      IterPtr biter = GenerateDataBlockIter(keys, values, kNumRecords);
      SyncPoint::GetInstance()->DisableProcessing();
      biter->Seek(k);
      ASSERT_TRUE(biter->Valid());
      ASSERT_OK(biter->status());
      SyncPoint::GetInstance()->EnableProcessing();
      std::invoke(iter_api, biter);
      ASSERT_FALSE(biter->Valid());
      ASSERT_TRUE(biter->status().IsCorruption());
    };

    if (kNumRecords > 1) {
      test_step(&DataBlockIter::Prev, seek_key);
      test_step(&DataBlockIter::Next, seek_key);
    }
  }
}
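
// Only the larger protection sizes (4 and 8 bytes per key) are instantiated
// for the corruption tests: with 0 bytes nothing would be verified, and the
// smaller sizes are presumably left out so that detection of the injected
// single-byte corruption is not probabilistic.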
INSTANTIATE_TEST_CASE_P(
    P, DataBlockKVChecksumCorruptionTest,
    ::testing::Combine(
        ::testing::Values(
            BlockBasedTableOptions::DataBlockIndexType::kDataBlockBinarySearch,
            BlockBasedTableOptions::DataBlockIndexType::
                kDataBlockBinaryAndHash),
        ::testing::Values(4, 8) /* block_protection_bytes_per_key */,
        ::testing::Values(1, 3, 8, 16) /* restart_interval */,
        ::testing::Values(false, true)),
    [](const testing::TestParamInfo<std::tuple<
           BlockBasedTableOptions::DataBlockIndexType, uint8_t, uint32_t, bool>>
           &args) {
      std::ostringstream oss;
      oss << GetDataBlockIndexTypeStr(std::get<0>(args.param)) << "ProtBytes"
          << std::to_string(std::get<1>(args.param)) << "RestartInterval"
          << std::to_string(std::get<2>(args.param)) << "DeltaEncode"
          << std::to_string(std::get<3>(args.param));
      return oss.str();
    });

class IndexBlockKVChecksumCorruptionTest : public IndexBlockKVChecksumTest {
 public:
  IndexBlockKVChecksumCorruptionTest() = default;

  std::unique_ptr<IndexBlockIter> GenerateIndexBlockIter(
      std::vector<std::string> &separators,
      std::vector<BlockHandle> &block_handles,
      std::vector<std::string> &first_keys, int num_record,
      SequenceNumber seqno) {
    SyncPoint::GetInstance()->DisableProcessing();
    block_ =
        GenerateIndexBlock(separators, block_handles, first_keys, num_record);
    std::unique_ptr<IndexBlockIter> biter{block_->NewIndexIterator(
        Options().comparator, seqno, nullptr, nullptr,
        true /* total_order_seek */, IncludeFirstKey() /* have_first_key */,
        true /* key_includes_seq */,
        !UseValueDeltaEncoding() /* value_is_full */,
        true /* block_contents_pinned */,
        true /* user_defined_timestamps_persisted */,
        nullptr /* prefix_index */)};
    SyncPoint::GetInstance()->EnableProcessing();
    return biter;
  }

 protected:
  std::unique_ptr<Block_kIndex> block_;
};

INSTANTIATE_TEST_CASE_P(
    P, IndexBlockKVChecksumCorruptionTest,
    ::testing::Combine(
        ::testing::Values(
            BlockBasedTableOptions::DataBlockIndexType::kDataBlockBinarySearch,
            BlockBasedTableOptions::DataBlockIndexType::
                kDataBlockBinaryAndHash),
        ::testing::Values(4, 8) /* block_protection_bytes_per_key */,
        ::testing::Values(1, 3, 8, 16) /* restart_interval */,
        ::testing::Values(true, false), ::testing::Values(true, false)),
    [](const testing::TestParamInfo<
        std::tuple<BlockBasedTableOptions::DataBlockIndexType, uint8_t,
                   uint32_t, bool, bool>> &args) {
      std::ostringstream oss;
      oss << GetDataBlockIndexTypeStr(std::get<0>(args.param)) << "ProtBytes"
          << std::to_string(std::get<1>(args.param)) << "RestartInterval"
          << std::to_string(std::get<2>(args.param)) << "ValueDeltaEncode"
          << std::to_string(std::get<3>(args.param)) << "IncludeFirstKey"
          << std::to_string(std::get<4>(args.param));
      return oss.str();
    });

TEST_P(IndexBlockKVChecksumCorruptionTest, CorruptEntry) {
  std::vector<int> num_restart_intervals = {1, 3};
  std::vector<SequenceNumber> seqnos{kDisableGlobalSequenceNumber, 10001};

  for (const auto num_restart_interval : num_restart_intervals) {
    const int kNumRecords =
        num_restart_interval * static_cast<int>(GetRestartInterval());
    for (const auto seqno : seqnos) {
      std::vector<std::string> separators;
      std::vector<BlockHandle> block_handles;
      std::vector<std::string> first_keys;
      GenerateRandomIndexEntries(&separators, &block_handles, &first_keys,
                                 kNumRecords, 0 /* ts_sz */,
                                 seqno != kDisableGlobalSequenceNumber);
      SyncPoint::GetInstance()->SetCallBack(
          "BlockIter::UpdateKey::value", [](void *arg) {
            char *value = static_cast<char *>(arg);
            // value can be delta-encoded with different lengths, so we corrupt
            // the first byte here to be safe
            ++value[0];
          });

      typedef std::unique_ptr<IndexBlockIter> IterPtr;
      typedef void(IterAPI)(IterPtr &iter, std::string &);
      std::string seek_key = first_keys[kNumRecords / 2];
      auto test_seek = [&](IterAPI iter_api) {
        std::unique_ptr<IndexBlockIter> biter = GenerateIndexBlockIter(
            separators, block_handles, first_keys, kNumRecords, seqno);
        ASSERT_OK(biter->status());
        iter_api(biter, seek_key);
        ASSERT_FALSE(biter->Valid());
        ASSERT_TRUE(biter->status().IsCorruption());
      };
      test_seek([](IterPtr &iter, std::string &) { iter->SeekToFirst(); });
      test_seek([](IterPtr &iter, std::string &) { iter->SeekToLast(); });
      test_seek([](IterPtr &iter, std::string &k) { iter->Seek(k); });

      typedef void (IndexBlockIter::*IterStepAPI)();
      auto test_step = [&](IterStepAPI iter_api, std::string &k) {
        std::unique_ptr<IndexBlockIter> biter = GenerateIndexBlockIter(
            separators, block_handles, first_keys, kNumRecords, seqno);
        SyncPoint::GetInstance()->DisableProcessing();
        biter->Seek(k);
        ASSERT_TRUE(biter->Valid());
        ASSERT_OK(biter->status());
        SyncPoint::GetInstance()->EnableProcessing();
        std::invoke(iter_api, biter);
        ASSERT_FALSE(biter->Valid());
        ASSERT_TRUE(biter->status().IsCorruption());
      };
      if (kNumRecords > 1) {
        test_step(&IndexBlockIter::Prev, seek_key);
        test_step(&IndexBlockIter::Next, seek_key);
      }
    }
  }
}
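
// Same single-byte corruption scheme, applied to metaindex blocks. A fresh
// iterator is generated for every seek variant so that each API is checked
// against the injected corruption independently.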
class MetaIndexBlockKVChecksumCorruptionTest
    : public MetaIndexBlockKVChecksumTest {
 public:
  MetaIndexBlockKVChecksumCorruptionTest() = default;

  std::unique_ptr<MetaBlockIter> GenerateMetaIndexBlockIter(
      std::vector<std::string> &keys, std::vector<std::string> &values,
      int num_record) {
    SyncPoint::GetInstance()->DisableProcessing();
    block_ = GenerateMetaIndexBlock(keys, values, num_record);
    std::unique_ptr<MetaBlockIter> biter{
        block_->NewMetaIterator(true /* block_contents_pinned */)};
    SyncPoint::GetInstance()->EnableProcessing();
    return biter;
  }

 protected:
  std::unique_ptr<Block_kMetaIndex> block_;
};

INSTANTIATE_TEST_CASE_P(
    P, MetaIndexBlockKVChecksumCorruptionTest,
    ::testing::Values(4, 8) /* block_protection_bytes_per_key */,
    [](const testing::TestParamInfo<uint8_t> &args) {
      std::ostringstream oss;
      oss << "ProtBytes" << std::to_string(args.param);
      return oss.str();
    });

TEST_P(MetaIndexBlockKVChecksumCorruptionTest, CorruptEntry) {
  Options options = Options();
  std::vector<int> num_restart_intervals = {1, 3};
  for (const auto num_restart_interval : num_restart_intervals) {
    const int kNumRecords =
        num_restart_interval * static_cast<int>(GetRestartInterval());
    std::vector<std::string> keys;
    std::vector<std::string> values;
    GenerateRandomKVs(&keys, &values, 0, kNumRecords + 1, 1 /* step */,
                      24 /* padding_size */);
    SyncPoint::GetInstance()->SetCallBack(
        "BlockIter::UpdateKey::value", [](void *arg) {
          char *value = static_cast<char *>(arg);
          // values generated by GenerateRandomKVs are of length 100
          ++value[10];
        });

    typedef std::unique_ptr<MetaBlockIter> IterPtr;
    typedef void(IterAPI)(IterPtr &iter, std::string &);
    typedef void (MetaBlockIter::*IterStepAPI)();
    std::string seek_key = keys[kNumRecords / 2];
    auto test_seek = [&](IterAPI iter_api) {
      IterPtr biter = GenerateMetaIndexBlockIter(keys, values, kNumRecords);
      ASSERT_OK(biter->status());
      iter_api(biter, seek_key);
      ASSERT_FALSE(biter->Valid());
      ASSERT_TRUE(biter->status().IsCorruption());
    };

    test_seek([](IterPtr &iter, std::string &) { iter->SeekToFirst(); });
    test_seek([](IterPtr &iter, std::string &) { iter->SeekToLast(); });
    test_seek([](IterPtr &iter, std::string &k) { iter->Seek(k); });
    test_seek([](IterPtr &iter, std::string &k) { iter->SeekForPrev(k); });

    auto test_step = [&](IterStepAPI iter_api, const std::string &k) {
      IterPtr biter = GenerateMetaIndexBlockIter(keys, values, kNumRecords);
      SyncPoint::GetInstance()->DisableProcessing();
      biter->Seek(k);
      ASSERT_TRUE(biter->Valid());
      ASSERT_OK(biter->status());
      SyncPoint::GetInstance()->EnableProcessing();
      std::invoke(iter_api, biter);
      ASSERT_FALSE(biter->Valid());
      ASSERT_TRUE(biter->status().IsCorruption());
    };

    if (kNumRecords > 1) {
      test_step(&MetaBlockIter::Prev, seek_key);
      test_step(&MetaBlockIter::Next, seek_key);
    }
  }
}

}  // namespace ROCKSDB_NAMESPACE

int main(int argc, char **argv) {
  ROCKSDB_NAMESPACE::port::InstallStackTraceHandler();
  ::testing::InitGoogleTest(&argc, argv);
  return RUN_ALL_TESTS();
}