// Copyright (c) 2013, Facebook, Inc. All rights reserved.
// This source code is licensed under the BSD-style license found in the
// LICENSE file in the root directory of this source tree. An additional grant
// of patent rights can be found in the PATENTS file in the same directory.
//
// Copyright (c) 2011 The LevelDB Authors. All rights reserved.
// Use of this source code is governed by a BSD-style license that can be
// found in the LICENSE file. See the AUTHORS file for names of contributors.

// Introduction of SyncPoint effectively disabled building and running this
// test in Release build, which is a pity, because it is a good test.
#if !defined(NDEBUG) || !defined(OS_WIN)

#include <algorithm>
#include <iostream>
#include <set>
#include <thread>
#include <unordered_set>
#include <utility>

#include <fcntl.h>
#ifndef OS_WIN
#include <unistd.h>
#endif

#include "db/db_impl.h"
#include "db/dbformat.h"
#include "db/filename.h"
#include "db/job_context.h"
#include "db/version_set.h"
#include "db/write_batch_internal.h"
#include "port/stack_trace.h"
#include "rocksdb/cache.h"
#include "rocksdb/compaction_filter.h"
#include "rocksdb/convenience.h"
#include "rocksdb/db.h"
#include "rocksdb/delete_scheduler.h"
#include "rocksdb/env.h"
#include "rocksdb/experimental.h"
#include "rocksdb/filter_policy.h"
#include "rocksdb/options.h"
#include "rocksdb/perf_context.h"
#include "rocksdb/slice.h"
#include "rocksdb/slice_transform.h"
#include "rocksdb/table.h"
#include "rocksdb/table_properties.h"
#include "rocksdb/thread_status.h"
#include "rocksdb/utilities/checkpoint.h"
#include "rocksdb/utilities/optimistic_transaction_db.h"
#include "rocksdb/utilities/write_batch_with_index.h"
#include "table/block_based_table_factory.h"
#include "table/mock_table.h"
#include "table/plain_table_factory.h"
#include "util/compression.h"
#include "util/db_test_util.h"
#include "util/delete_scheduler_impl.h"
#include "util/file_reader_writer.h"
#include "util/hash.h"
#include "util/hash_linklist_rep.h"
#include "util/logging.h"
#include "util/mock_env.h"
#include "util/mutexlock.h"
#include "util/rate_limiter.h"
#include "util/scoped_arena_iterator.h"
#include "util/statistics.h"
#include "util/string_util.h"
#include "util/sync_point.h"
#include "util/testharness.h"
#include "util/testutil.h"
#include "util/thread_status_util.h"
#include "util/xfunc.h"
#include "utilities/merge_operators.h"

namespace rocksdb {

static long TestGetTickerCount(const Options& options, Tickers ticker_type) {
  return options.statistics->getTickerCount(ticker_type);
}
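
// TestGetTickerCount() assumes options.statistics has been populated (e.g.
// via rocksdb::CreateDBStatistics()); it reads the named ticker from that
// Statistics object.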

// A helper function that ensures the table properties returned in
// `GetPropertiesOfAllTablesTest` are correct.
// This check assumes each table holds a different number of entries.
namespace {
void VerifyTableProperties(DB* db, uint64_t expected_entries_size) {
  TablePropertiesCollection props;
  ASSERT_OK(db->GetPropertiesOfAllTables(&props));

  ASSERT_EQ(4U, props.size());
  std::unordered_set<uint64_t> unique_entries;

  // Indirect check: the per-table entry counts must be pairwise distinct
  // and sum to the expected total.
  uint64_t sum = 0;
  for (const auto& item : props) {
    unique_entries.insert(item.second->num_entries);
    sum += item.second->num_entries;
  }

  ASSERT_EQ(props.size(), unique_entries.size());
  ASSERT_EQ(expected_entries_size, sum);
}

uint64_t GetNumberOfSstFilesForColumnFamily(DB* db,
                                            std::string column_family_name) {
  std::vector<LiveFileMetaData> metadata;
  db->GetLiveFilesMetaData(&metadata);
  uint64_t result = 0;
  for (auto& fileMetadata : metadata) {
    result += (fileMetadata.column_family_name == column_family_name);
  }
  return result;
}
}  // namespace

class DBTest : public DBTestBase {
 public:
  DBTest() : DBTestBase("/db_test") {}
};

class DBTestWithParam : public DBTest,
                        public testing::WithParamInterface<uint32_t> {
 public:
  DBTestWithParam() {
    num_subcompactions_ = GetParam();
  }

  // Required if inheriting from testing::WithParamInterface<>
  static void SetUpTestCase() {}
  static void TearDownTestCase() {}

  uint32_t num_subcompactions_;
};
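
// Not part of this excerpt: a minimal sketch of how DBTestWithParam would be
// instantiated with gtest's value-parameterized API. The parameter values
// (1 and 4 subcompactions) are illustrative assumptions only:
//
//   INSTANTIATE_TEST_CASE_P(DBTestWithParam, DBTestWithParam,
//                           ::testing::Values(1, 4));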

TEST_F(DBTest, Empty) {
  do {
    Options options;
    options.env = env_;
    options.write_buffer_size = 100000;  // Small write buffer
    options = CurrentOptions(options);
    CreateAndReopenWithCF({"pikachu"}, options);

    std::string num;
    ASSERT_TRUE(dbfull()->GetProperty(
        handles_[1], "rocksdb.num-entries-active-mem-table", &num));
    ASSERT_EQ("0", num);

    ASSERT_OK(Put(1, "foo", "v1"));
    ASSERT_EQ("v1", Get(1, "foo"));
    ASSERT_TRUE(dbfull()->GetProperty(
        handles_[1], "rocksdb.num-entries-active-mem-table", &num));
    ASSERT_EQ("1", num);

    // Block sync calls
    env_->delay_sstable_sync_.store(true, std::memory_order_release);
    Put(1, "k1", std::string(100000, 'x'));  // Fill memtable
    ASSERT_TRUE(dbfull()->GetProperty(
        handles_[1], "rocksdb.num-entries-active-mem-table", &num));
    ASSERT_EQ("2", num);

    Put(1, "k2", std::string(100000, 'y'));  // Trigger compaction
    ASSERT_TRUE(dbfull()->GetProperty(
        handles_[1], "rocksdb.num-entries-active-mem-table", &num));
    ASSERT_EQ("1", num);

    ASSERT_EQ("v1", Get(1, "foo"));
    // Release sync calls
    env_->delay_sstable_sync_.store(false, std::memory_order_release);
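
    // DisableFileDeletions() is reference-counted: each call bumps a
    // counter, and the "rocksdb.is-file-deletions-enabled" property reports
    // that counter ("0" means deletions are enabled).
    // EnableFileDeletions(false) decrements the counter once, while
    // EnableFileDeletions() (force defaults to true) resets it to zero.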
    ASSERT_OK(db_->DisableFileDeletions());
    ASSERT_TRUE(
        dbfull()->GetProperty("rocksdb.is-file-deletions-enabled", &num));
    ASSERT_EQ("1", num);

    ASSERT_OK(db_->DisableFileDeletions());
    ASSERT_TRUE(
        dbfull()->GetProperty("rocksdb.is-file-deletions-enabled", &num));
    ASSERT_EQ("2", num);

    ASSERT_OK(db_->DisableFileDeletions());
    ASSERT_TRUE(
        dbfull()->GetProperty("rocksdb.is-file-deletions-enabled", &num));
    ASSERT_EQ("3", num);

    ASSERT_OK(db_->EnableFileDeletions(false));
    ASSERT_TRUE(
        dbfull()->GetProperty("rocksdb.is-file-deletions-enabled", &num));
    ASSERT_EQ("2", num);

    ASSERT_OK(db_->EnableFileDeletions());
    ASSERT_TRUE(
        dbfull()->GetProperty("rocksdb.is-file-deletions-enabled", &num));
    ASSERT_EQ("0", num);
  } while (ChangeOptions());
}
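
// A WriteBatch containing no operations, written with sync == true, should
// still result in at least one WAL sync, and the DB must remain usable
// across a reopen afterwards; the test below checks both.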

TEST_F(DBTest, WriteEmptyBatch) {
  Options options;
  options.env = env_;
  options.write_buffer_size = 100000;
  options = CurrentOptions(options);
  CreateAndReopenWithCF({"pikachu"}, options);

  ASSERT_OK(Put(1, "foo", "bar"));
  env_->sync_counter_.store(0);
  WriteOptions wo;
  wo.sync = true;
  wo.disableWAL = false;
  WriteBatch empty_batch;
  ASSERT_OK(dbfull()->Write(wo, &empty_batch));
  ASSERT_GE(env_->sync_counter_.load(), 1);

  // make sure we can re-open it.
  ASSERT_OK(TryReopenWithColumnFamilies({"default", "pikachu"}, options));
  ASSERT_EQ("bar", Get(1, "foo"));
}

TEST_F(DBTest, ReadOnlyDB) {
  ASSERT_OK(Put("foo", "v1"));
  ASSERT_OK(Put("bar", "v2"));
  ASSERT_OK(Put("foo", "v3"));
  Close();

  auto options = CurrentOptions();
  options.env = env_;
  ASSERT_OK(ReadOnlyReopen(options));
  ASSERT_EQ("v3", Get("foo"));
  ASSERT_EQ("v2", Get("bar"));
  Iterator* iter = db_->NewIterator(ReadOptions());
  int count = 0;
  for (iter->SeekToFirst(); iter->Valid(); iter->Next()) {
    ASSERT_OK(iter->status());
    ++count;
  }
  ASSERT_EQ(count, 2);
  delete iter;
  Close();

  // Reopen and flush memtable.
  Reopen(options);
  Flush();
  Close();
  // Now check keys in read only mode.
  ASSERT_OK(ReadOnlyReopen(options));
  ASSERT_EQ("v3", Get("foo"));
  ASSERT_EQ("v2", Get("bar"));
}

TEST_F(DBTest, CompactedDB) {
  const uint64_t kFileSize = 1 << 20;
  Options options;
  options.disable_auto_compactions = true;
  options.write_buffer_size = kFileSize;
  options.target_file_size_base = kFileSize;
  options.max_bytes_for_level_base = 1 << 30;
  options.compression = kNoCompression;
  options = CurrentOptions(options);
  Reopen(options);
  // 1 L0 file, use CompactedDB if max_open_files = -1
  ASSERT_OK(Put("aaa", DummyString(kFileSize / 2, '1')));
  Flush();
  Close();
  ASSERT_OK(ReadOnlyReopen(options));
  Status s = Put("new", "value");
  ASSERT_EQ(s.ToString(),
            "Not implemented: Not supported operation in read only mode.");
  ASSERT_EQ(DummyString(kFileSize / 2, '1'), Get("aaa"));
  Close();
  options.max_open_files = -1;
  ASSERT_OK(ReadOnlyReopen(options));
  s = Put("new", "value");
  ASSERT_EQ(s.ToString(),
            "Not implemented: Not supported in compacted db mode.");
  ASSERT_EQ(DummyString(kFileSize / 2, '1'), Get("aaa"));
  Close();
  Reopen(options);
  // Add more L0 files
  ASSERT_OK(Put("bbb", DummyString(kFileSize / 2, '2')));
  Flush();
  ASSERT_OK(Put("aaa", DummyString(kFileSize / 2, 'a')));
  Flush();
  ASSERT_OK(Put("bbb", DummyString(kFileSize / 2, 'b')));
  ASSERT_OK(Put("eee", DummyString(kFileSize / 2, 'e')));
  Flush();
  Close();

  // Multiple L0 files: fall back to the plain read-only DB
  ASSERT_OK(ReadOnlyReopen(options));
  s = Put("new", "value");
  ASSERT_EQ(s.ToString(),
            "Not implemented: Not supported operation in read only mode.");
  Close();

  // Full compaction
  Reopen(options);
  // Add more keys
  ASSERT_OK(Put("fff", DummyString(kFileSize / 2, 'f')));
  ASSERT_OK(Put("hhh", DummyString(kFileSize / 2, 'h')));
  ASSERT_OK(Put("iii", DummyString(kFileSize / 2, 'i')));
  ASSERT_OK(Put("jjj", DummyString(kFileSize / 2, 'j')));
  db_->CompactRange(CompactRangeOptions(), nullptr, nullptr);
  ASSERT_EQ(3, NumTableFilesAtLevel(1));
  Close();

  // CompactedDB
  ASSERT_OK(ReadOnlyReopen(options));
  s = Put("new", "value");
  ASSERT_EQ(s.ToString(),
            "Not implemented: Not supported in compacted db mode.");
  ASSERT_EQ("NOT_FOUND", Get("abc"));
  ASSERT_EQ(DummyString(kFileSize / 2, 'a'), Get("aaa"));
  ASSERT_EQ(DummyString(kFileSize / 2, 'b'), Get("bbb"));
  ASSERT_EQ("NOT_FOUND", Get("ccc"));
  ASSERT_EQ(DummyString(kFileSize / 2, 'e'), Get("eee"));
  ASSERT_EQ(DummyString(kFileSize / 2, 'f'), Get("fff"));
  ASSERT_EQ("NOT_FOUND", Get("ggg"));
  ASSERT_EQ(DummyString(kFileSize / 2, 'h'), Get("hhh"));
  ASSERT_EQ(DummyString(kFileSize / 2, 'i'), Get("iii"));
  ASSERT_EQ(DummyString(kFileSize / 2, 'j'), Get("jjj"));
  ASSERT_EQ("NOT_FOUND", Get("kkk"));

  // MultiGet: statuses and values line up index-for-index with the keys
  std::vector<std::string> values;
  std::vector<Status> status_list = dbfull()->MultiGet(
      ReadOptions(),
      std::vector<Slice>({Slice("aaa"), Slice("ccc"), Slice("eee"),
                          Slice("ggg"), Slice("iii"), Slice("kkk")}),
      &values);
  ASSERT_EQ(status_list.size(), static_cast<uint64_t>(6));
  ASSERT_EQ(values.size(), static_cast<uint64_t>(6));
  ASSERT_OK(status_list[0]);
  ASSERT_EQ(DummyString(kFileSize / 2, 'a'), values[0]);
  ASSERT_TRUE(status_list[1].IsNotFound());
  ASSERT_OK(status_list[2]);
  ASSERT_EQ(DummyString(kFileSize / 2, 'e'), values[2]);
  ASSERT_TRUE(status_list[3].IsNotFound());
  ASSERT_OK(status_list[4]);
  ASSERT_EQ(DummyString(kFileSize / 2, 'i'), values[4]);
  ASSERT_TRUE(status_list[5].IsNotFound());
}

// Make sure that when options.block_cache is set, after a new table is
// created its index/filter blocks are added to block cache.
TEST_F(DBTest, IndexAndFilterBlocksOfNewTableAddedToCache) {
  Options options = CurrentOptions();
  options.create_if_missing = true;
  options.statistics = rocksdb::CreateDBStatistics();
  BlockBasedTableOptions table_options;
  table_options.cache_index_and_filter_blocks = true;
  table_options.filter_policy.reset(NewBloomFilterPolicy(20));
  options.table_factory.reset(new BlockBasedTableFactory(table_options));
  CreateAndReopenWithCF({"pikachu"}, options);

  ASSERT_OK(Put(1, "key", "val"));
  // Create a new table.
  ASSERT_OK(Flush(1));

  // index/filter blocks added to block cache right after table creation.
  ASSERT_EQ(1, TestGetTickerCount(options, BLOCK_CACHE_INDEX_MISS));
  ASSERT_EQ(1, TestGetTickerCount(options, BLOCK_CACHE_FILTER_MISS));
  ASSERT_EQ(2, /* only index/filter were added */
            TestGetTickerCount(options, BLOCK_CACHE_ADD));
  ASSERT_EQ(0, TestGetTickerCount(options, BLOCK_CACHE_DATA_MISS));
  uint64_t int_num;
  ASSERT_TRUE(
      dbfull()->GetIntProperty("rocksdb.estimate-table-readers-mem", &int_num));
  ASSERT_EQ(int_num, 0U);

  // Make sure filter block is in cache.
  std::string value;
  ReadOptions ropt;
  db_->KeyMayExist(ReadOptions(), handles_[1], "key", &value);

  // Miss count should remain the same.
  ASSERT_EQ(1, TestGetTickerCount(options, BLOCK_CACHE_FILTER_MISS));
  ASSERT_EQ(1, TestGetTickerCount(options, BLOCK_CACHE_FILTER_HIT));

  db_->KeyMayExist(ReadOptions(), handles_[1], "key", &value);
  ASSERT_EQ(1, TestGetTickerCount(options, BLOCK_CACHE_FILTER_MISS));
  ASSERT_EQ(2, TestGetTickerCount(options, BLOCK_CACHE_FILTER_HIT));

  // Make sure index block is in cache.
  auto index_block_hit = TestGetTickerCount(options, BLOCK_CACHE_FILTER_HIT);
  value = Get(1, "key");
  ASSERT_EQ(1, TestGetTickerCount(options, BLOCK_CACHE_FILTER_MISS));
  ASSERT_EQ(index_block_hit + 1,
            TestGetTickerCount(options, BLOCK_CACHE_FILTER_HIT));

  value = Get(1, "key");
  ASSERT_EQ(1, TestGetTickerCount(options, BLOCK_CACHE_FILTER_MISS));
  ASSERT_EQ(index_block_hit + 2,
            TestGetTickerCount(options, BLOCK_CACHE_FILTER_HIT));
}
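
// With paranoid_file_checks enabled, every newly generated SST is re-read
// and verified before it is installed, and that verification pass populates
// the block cache; hence BLOCK_CACHE_ADD grows with each new file until the
// option is disabled partway through the test below.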

TEST_F(DBTest, ParanoidFileChecks) {
  Options options = CurrentOptions();
  options.create_if_missing = true;
  options.statistics = rocksdb::CreateDBStatistics();
  options.level0_file_num_compaction_trigger = 2;
  options.paranoid_file_checks = true;
  BlockBasedTableOptions table_options;
  table_options.cache_index_and_filter_blocks = false;
  table_options.filter_policy.reset(NewBloomFilterPolicy(20));
  options.table_factory.reset(new BlockBasedTableFactory(table_options));
  CreateAndReopenWithCF({"pikachu"}, options);

  ASSERT_OK(Put(1, "1_key", "val"));
  ASSERT_OK(Put(1, "9_key", "val"));
  // Create a new table.
  ASSERT_OK(Flush(1));
  ASSERT_EQ(1, /* read and cache data block */
            TestGetTickerCount(options, BLOCK_CACHE_ADD));

  ASSERT_OK(Put(1, "1_key2", "val2"));
  ASSERT_OK(Put(1, "9_key2", "val2"));
  // Create a new SST file. This will further trigger a compaction
  // and generate another file.
  ASSERT_OK(Flush(1));
  dbfull()->TEST_WaitForCompact();
  ASSERT_EQ(3, /* Totally 3 files created up to now */
            TestGetTickerCount(options, BLOCK_CACHE_ADD));

  // After disabling options.paranoid_file_checks, no further block
  // is added after generating a new file.
  ASSERT_OK(
      dbfull()->SetOptions(handles_[1], {{"paranoid_file_checks", "false"}}));

  ASSERT_OK(Put(1, "1_key3", "val3"));
  ASSERT_OK(Put(1, "9_key3", "val3"));
  ASSERT_OK(Flush(1));
  ASSERT_OK(Put(1, "1_key4", "val4"));
  ASSERT_OK(Put(1, "9_key4", "val4"));
  ASSERT_OK(Flush(1));
  dbfull()->TEST_WaitForCompact();
  ASSERT_EQ(3, /* Totally 3 files created up to now */
            TestGetTickerCount(options, BLOCK_CACHE_ADD));
}

TEST_F(DBTest, GetPropertiesOfAllTablesTest) {
  Options options = CurrentOptions();
  options.level0_file_num_compaction_trigger = 8;
  Reopen(options);
  // Create 4 tables
  for (int table = 0; table < 4; ++table) {
    for (int i = 0; i < 10 + table; ++i) {
      db_->Put(WriteOptions(), ToString(table * 100 + i), "val");
    }
    db_->Flush(FlushOptions());
  }

  // 1. Read table properties directly from file
  Reopen(options);
  VerifyTableProperties(db_, 10 + 11 + 12 + 13);

  // 2. Put two tables to table cache and
  Reopen(options);
  // fetch key from 1st and 2nd table, which will internally place that table
  // to the table cache.
  for (int i = 0; i < 2; ++i) {
    Get(ToString(i * 100 + 0));
  }

  VerifyTableProperties(db_, 10 + 11 + 12 + 13);

  // 3. Put all tables to table cache
  Reopen(options);
  // fetch one key from each of the four tables, which will place them all in
  // the table cache.
  for (int i = 0; i < 4; ++i) {
    Get(ToString(i * 100 + 0));
  }
  VerifyTableProperties(db_, 10 + 11 + 12 + 13);
}
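
// A TablePropertiesCollector is driven by the table builder: AddUserKey()
// is invoked once for every key/value appended to an SST, and Finish() is
// invoked when the table is finalized so the collector can emit its
// properties, which are persisted in that file's user_collected_properties.
// The collector below simply counts keys and encodes the count as a varint.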
class CountingUserTblPropCollector : public TablePropertiesCollector {
 public:
  const char* Name() const override { return "CountingUserTblPropCollector"; }

  Status Finish(UserCollectedProperties* properties) override {
    std::string encoded;
    PutVarint32(&encoded, count_);
    *properties = UserCollectedProperties{
        {"CountingUserTblPropCollector", message_}, {"Count", encoded},
    };
    return Status::OK();
  }

  Status AddUserKey(const Slice& user_key, const Slice& value, EntryType type,
                    SequenceNumber seq, uint64_t file_size) override {
    ++count_;
    return Status::OK();
  }

  virtual UserCollectedProperties GetReadableProperties() const override {
    return UserCollectedProperties{};
  }

 private:
  std::string message_ = "Rocksdb";
  uint32_t count_ = 0;
};

class CountingUserTblPropCollectorFactory
    : public TablePropertiesCollectorFactory {
 public:
  virtual TablePropertiesCollector* CreateTablePropertiesCollector() override {
    return new CountingUserTblPropCollector();
  }
  const char* Name() const override {
    return "CountingUserTblPropCollectorFactory";
  }
};

TEST_F(DBTest, GetUserDefinedTableProperties) {
  Options options = CurrentOptions();
  options.level0_file_num_compaction_trigger = (1 << 30);
  options.max_background_flushes = 0;
  options.table_properties_collector_factories.resize(1);
  options.table_properties_collector_factories[0] =
      std::make_shared<CountingUserTblPropCollectorFactory>();
  Reopen(options);
  // Create 4 tables
  for (int table = 0; table < 4; ++table) {
    for (int i = 0; i < 10 + table; ++i) {
      db_->Put(WriteOptions(), ToString(table * 100 + i), "val");
    }
    db_->Flush(FlushOptions());
  }

  TablePropertiesCollection props;
  ASSERT_OK(db_->GetPropertiesOfAllTables(&props));
  ASSERT_EQ(4U, props.size());
  uint32_t sum = 0;
  for (const auto& item : props) {
    auto& user_collected = item.second->user_collected_properties;
    ASSERT_TRUE(user_collected.find("CountingUserTblPropCollector") !=
                user_collected.end());
    ASSERT_EQ(user_collected.at("CountingUserTblPropCollector"), "Rocksdb");
    ASSERT_TRUE(user_collected.find("Count") != user_collected.end());
    Slice key(user_collected.at("Count"));
    uint32_t count;
    ASSERT_TRUE(GetVarint32(&key, &count));
    sum += count;
  }
  ASSERT_EQ(10u + 11u + 12u + 13u, sum);
}
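
// Reopening an existing DB with num_levels smaller than the number of
// levels the data already occupies must fail with InvalidArgument, while
// reopening with a larger num_levels succeeds.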

TEST_F(DBTest, LevelLimitReopen) {
  Options options = CurrentOptions();
  CreateAndReopenWithCF({"pikachu"}, options);

  const std::string value(1024 * 1024, ' ');
  int i = 0;
  while (NumTableFilesAtLevel(2, 1) == 0) {
    ASSERT_OK(Put(1, Key(i++), value));
  }

  options.num_levels = 1;
  options.max_bytes_for_level_multiplier_additional.resize(1, 1);
  Status s = TryReopenWithColumnFamilies({"default", "pikachu"}, options);
  ASSERT_EQ(s.IsInvalidArgument(), true);
  ASSERT_EQ(s.ToString(),
            "Invalid argument: db has more levels than options.num_levels");

  options.num_levels = 10;
  options.max_bytes_for_level_multiplier_additional.resize(10, 1);
  ASSERT_OK(TryReopenWithColumnFamilies({"default", "pikachu"}, options));
}

TEST_F(DBTest, PutDeleteGet) {
  do {
    CreateAndReopenWithCF({"pikachu"}, CurrentOptions());
    ASSERT_OK(Put(1, "foo", "v1"));
    ASSERT_EQ("v1", Get(1, "foo"));
    ASSERT_OK(Put(1, "foo", "v2"));
    ASSERT_EQ("v2", Get(1, "foo"));
    ASSERT_OK(Delete(1, "foo"));
    ASSERT_EQ("NOT_FOUND", Get(1, "foo"));
  } while (ChangeOptions());
}

TEST_F(DBTest, GetFromImmutableLayer) {
  do {
    Options options;
    options.env = env_;
    options.write_buffer_size = 100000;  // Small write buffer
    options = CurrentOptions(options);
    CreateAndReopenWithCF({"pikachu"}, options);

    ASSERT_OK(Put(1, "foo", "v1"));
    ASSERT_EQ("v1", Get(1, "foo"));

    // Block sync calls
    env_->delay_sstable_sync_.store(true, std::memory_order_release);
    Put(1, "k1", std::string(100000, 'x'));  // Fill memtable
    Put(1, "k2", std::string(100000, 'y'));  // Trigger flush
    ASSERT_EQ("v1", Get(1, "foo"));
    ASSERT_EQ("NOT_FOUND", Get(0, "foo"));
    // Release sync calls
    env_->delay_sstable_sync_.store(false, std::memory_order_release);
  } while (ChangeOptions());
}

TEST_F(DBTest, GetFromVersions) {
  do {
    CreateAndReopenWithCF({"pikachu"}, CurrentOptions());
    ASSERT_OK(Put(1, "foo", "v1"));
    ASSERT_OK(Flush(1));
    ASSERT_EQ("v1", Get(1, "foo"));
    ASSERT_EQ("NOT_FOUND", Get(0, "foo"));
  } while (ChangeOptions());
}

TEST_F(DBTest, GetSnapshot) {
  anon::OptionsOverride options_override;
  options_override.skip_policy = kSkipNoSnapshot;
  do {
    CreateAndReopenWithCF({"pikachu"}, CurrentOptions(options_override));
    // Try with both a short key and a long key
    for (int i = 0; i < 2; i++) {
      std::string key = (i == 0) ? std::string("foo") : std::string(200, 'x');
      ASSERT_OK(Put(1, key, "v1"));
      const Snapshot* s1 = db_->GetSnapshot();
      if (option_config_ == kHashCuckoo) {
        // Unsupported case: HashCuckoo memtables do not support snapshots,
        // so GetSnapshot() returns nullptr.
        ASSERT_TRUE(s1 == nullptr);
        break;
      }
      ASSERT_OK(Put(1, key, "v2"));
      ASSERT_EQ("v2", Get(1, key));
      ASSERT_EQ("v1", Get(1, key, s1));
      ASSERT_OK(Flush(1));
      ASSERT_EQ("v2", Get(1, key));
      ASSERT_EQ("v1", Get(1, key, s1));
      db_->ReleaseSnapshot(s1);
    }
  } while (ChangeOptions());
}

TEST_F(DBTest, GetLevel0Ordering) {
  do {
    CreateAndReopenWithCF({"pikachu"}, CurrentOptions());
    // Check that we process level-0 files in correct order. The code
    // below generates two level-0 files where the earlier one comes
    // before the later one in the level-0 file list since the earlier
    // one has a smaller "smallest" key.
    ASSERT_OK(Put(1, "bar", "b"));
    ASSERT_OK(Put(1, "foo", "v1"));
    ASSERT_OK(Flush(1));
    ASSERT_OK(Put(1, "foo", "v2"));
    ASSERT_OK(Flush(1));
    ASSERT_EQ("v2", Get(1, "foo"));
  } while (ChangeOptions());
}

// An inconsistent level-0 trigger ordering (stop trigger below the slowdown
// and compaction triggers) must not prevent the DB from opening.
TEST_F(DBTest, WrongLevel0Config) {
  Options options = CurrentOptions();
  Close();
  ASSERT_OK(DestroyDB(dbname_, options));
  options.level0_stop_writes_trigger = 1;
  options.level0_slowdown_writes_trigger = 2;
  options.level0_file_num_compaction_trigger = 3;
  ASSERT_OK(DB::Open(options, dbname_, &db_));
}

TEST_F(DBTest, GetOrderedByLevels) {
  do {
    CreateAndReopenWithCF({"pikachu"}, CurrentOptions());
    ASSERT_OK(Put(1, "foo", "v1"));
    Compact(1, "a", "z");
    ASSERT_EQ("v1", Get(1, "foo"));
    ASSERT_OK(Put(1, "foo", "v2"));
    ASSERT_EQ("v2", Get(1, "foo"));
    ASSERT_OK(Flush(1));
    ASSERT_EQ("v2", Get(1, "foo"));
  } while (ChangeOptions());
}

TEST_F(DBTest, GetPicksCorrectFile) {
  do {
    CreateAndReopenWithCF({"pikachu"}, CurrentOptions());
    // Arrange to have multiple files in a non-level-0 level.
    ASSERT_OK(Put(1, "a", "va"));
    Compact(1, "a", "b");
    ASSERT_OK(Put(1, "x", "vx"));
    Compact(1, "x", "y");
    ASSERT_OK(Put(1, "f", "vf"));
    Compact(1, "f", "g");
    ASSERT_EQ("va", Get(1, "a"));
    ASSERT_EQ("vf", Get(1, "f"));
    ASSERT_EQ("vx", Get(1, "x"));
  } while (ChangeOptions());
}

TEST_F(DBTest, GetEncountersEmptyLevel) {
  do {
    Options options = CurrentOptions();
    options.disableDataSync = true;
    CreateAndReopenWithCF({"pikachu"}, options);
    // Arrange for the following to happen:
    // * sstable A in level 0
    // * nothing in level 1
    // * sstable B in level 2
    // Then do enough Get() calls to arrange for an automatic compaction
    // of sstable A. A bug would cause the compaction to be marked as
    // occurring at level 1 (instead of the correct level 0).

    // Step 1: First place sstables in levels 0 and 2
    Put(1, "a", "begin");
    Put(1, "z", "end");
    ASSERT_OK(Flush(1));
    dbfull()->TEST_CompactRange(0, nullptr, nullptr, handles_[1]);
    dbfull()->TEST_CompactRange(1, nullptr, nullptr, handles_[1]);
    Put(1, "a", "begin");
    Put(1, "z", "end");
    ASSERT_OK(Flush(1));
    ASSERT_GT(NumTableFilesAtLevel(0, 1), 0);
    ASSERT_GT(NumTableFilesAtLevel(2, 1), 0);

    // Step 2: clear level 1 if necessary.
    dbfull()->TEST_CompactRange(1, nullptr, nullptr, handles_[1]);
    ASSERT_EQ(NumTableFilesAtLevel(0, 1), 1);
    ASSERT_EQ(NumTableFilesAtLevel(1, 1), 0);
    ASSERT_EQ(NumTableFilesAtLevel(2, 1), 1);

    // Step 3: read a bunch of times
    for (int i = 0; i < 1000; i++) {
      ASSERT_EQ("NOT_FOUND", Get(1, "missing"));
    }

    // Step 4: Wait for compaction to finish
    dbfull()->TEST_WaitForCompact();

    ASSERT_EQ(NumTableFilesAtLevel(0, 1), 1);  // XXX
  } while (ChangeOptions(kSkipUniversalCompaction | kSkipFIFOCompaction));
}
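
// KeyMayExist(options, cf, key, &value, &value_found) returns false only if
// the key is definitely absent; true means "maybe present". When value_found
// is non-null and the value could be fetched cheaply (e.g. from the memtable
// or an already-cached block), *value_found is set to true and value is
// populated.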
2013-07-26 19:57:01 +00:00
|
|
|
// KeyMayExist can lead to a few false positives, but not false negatives.
|
|
|
|
// To make test deterministic, use a much larger number of bits per key-20 than
|
|
|
|
// bits in the key, so that false positives are eliminated
|
2015-03-17 21:08:00 +00:00
|
|
|
TEST_F(DBTest, KeyMayExist) {
  do {
    ReadOptions ropts;
    std::string value;
    anon::OptionsOverride options_override;
    options_override.filter_policy.reset(NewBloomFilterPolicy(20));
    Options options = CurrentOptions(options_override);
    options.statistics = rocksdb::CreateDBStatistics();
    CreateAndReopenWithCF({"pikachu"}, options);

    ASSERT_TRUE(!db_->KeyMayExist(ropts, handles_[1], "a", &value));

    ASSERT_OK(Put(1, "a", "b"));
    bool value_found = false;
    ASSERT_TRUE(
        db_->KeyMayExist(ropts, handles_[1], "a", &value, &value_found));
    ASSERT_TRUE(value_found);
    ASSERT_EQ("b", value);

    ASSERT_OK(Flush(1));
    value.clear();

    long numopen = TestGetTickerCount(options, NO_FILE_OPENS);
    long cache_added = TestGetTickerCount(options, BLOCK_CACHE_ADD);
    ASSERT_TRUE(
        db_->KeyMayExist(ropts, handles_[1], "a", &value, &value_found));
    ASSERT_TRUE(!value_found);
    // assert that no new files were opened and no new blocks were
    // read into block cache.
    ASSERT_EQ(numopen, TestGetTickerCount(options, NO_FILE_OPENS));
    ASSERT_EQ(cache_added, TestGetTickerCount(options, BLOCK_CACHE_ADD));

    ASSERT_OK(Delete(1, "a"));

    numopen = TestGetTickerCount(options, NO_FILE_OPENS);
    cache_added = TestGetTickerCount(options, BLOCK_CACHE_ADD);
    ASSERT_TRUE(!db_->KeyMayExist(ropts, handles_[1], "a", &value));
    ASSERT_EQ(numopen, TestGetTickerCount(options, NO_FILE_OPENS));
    ASSERT_EQ(cache_added, TestGetTickerCount(options, BLOCK_CACHE_ADD));

    ASSERT_OK(Flush(1));
    dbfull()->TEST_CompactRange(0, nullptr, nullptr, handles_[1],
                                true /* disallow trivial move */);

    numopen = TestGetTickerCount(options, NO_FILE_OPENS);
    cache_added = TestGetTickerCount(options, BLOCK_CACHE_ADD);
    ASSERT_TRUE(!db_->KeyMayExist(ropts, handles_[1], "a", &value));
    ASSERT_EQ(numopen, TestGetTickerCount(options, NO_FILE_OPENS));
    ASSERT_EQ(cache_added, TestGetTickerCount(options, BLOCK_CACHE_ADD));

    ASSERT_OK(Delete(1, "c"));

    numopen = TestGetTickerCount(options, NO_FILE_OPENS);
    cache_added = TestGetTickerCount(options, BLOCK_CACHE_ADD);
    ASSERT_TRUE(!db_->KeyMayExist(ropts, handles_[1], "c", &value));
    ASSERT_EQ(numopen, TestGetTickerCount(options, NO_FILE_OPENS));
    ASSERT_EQ(cache_added, TestGetTickerCount(options, BLOCK_CACHE_ADD));

    // KeyMayExist function only checks data in block caches, which is not
    // used by plain table format.
  } while (
      ChangeOptions(kSkipPlainTable | kSkipHashIndex | kSkipFIFOCompaction));
}
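
namespace {
// Editorial sketch, not part of the original test: one way a caller might use
// KeyMayExist() to avoid IO on definite misses. The helper name GetIfMayExist
// is an assumption for illustration; the DB calls themselves are real API.
bool GetIfMayExist(DB* db, ColumnFamilyHandle* cf, const Slice& key,
                   std::string* value) {
  bool value_found = false;
  if (!db->KeyMayExist(ReadOptions(), cf, key, value, &value_found)) {
    return false;  // bloom filter says the key is definitely absent: no IO
  }
  if (value_found) {
    return true;  // value was already fetched from memtable or block cache
  }
  // "may exist" is only a hint (false positives allowed), so confirm with Get
  return db->Get(ReadOptions(), cf, key, value).ok();
}
}  // namespace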

TEST_F(DBTest, NonBlockingIteration) {
  do {
    ReadOptions non_blocking_opts, regular_opts;
    Options options = CurrentOptions();
    options.statistics = rocksdb::CreateDBStatistics();
    non_blocking_opts.read_tier = kBlockCacheTier;
    CreateAndReopenWithCF({"pikachu"}, options);
    // write one kv to the database.
    ASSERT_OK(Put(1, "a", "b"));

    // scan using non-blocking iterator. We should find it because
    // it is in memtable.
    Iterator* iter = db_->NewIterator(non_blocking_opts, handles_[1]);
    int count = 0;
    for (iter->SeekToFirst(); iter->Valid(); iter->Next()) {
      ASSERT_OK(iter->status());
      count++;
    }
    ASSERT_EQ(count, 1);
    delete iter;

    // flush memtable to storage. Now, the key should not be in the
    // memtable nor in the block cache.
    ASSERT_OK(Flush(1));

    // verify that a non-blocking iterator does not find any
    // kvs. Neither does it do any IOs to storage.
    long numopen = TestGetTickerCount(options, NO_FILE_OPENS);
    long cache_added = TestGetTickerCount(options, BLOCK_CACHE_ADD);
    iter = db_->NewIterator(non_blocking_opts, handles_[1]);
    count = 0;
    for (iter->SeekToFirst(); iter->Valid(); iter->Next()) {
      count++;
    }
    ASSERT_EQ(count, 0);
    ASSERT_TRUE(iter->status().IsIncomplete());
    ASSERT_EQ(numopen, TestGetTickerCount(options, NO_FILE_OPENS));
    ASSERT_EQ(cache_added, TestGetTickerCount(options, BLOCK_CACHE_ADD));
    delete iter;

    // read in the specified block via a regular get
    ASSERT_EQ(Get(1, "a"), "b");

    // verify that we can find it via a non-blocking scan
    numopen = TestGetTickerCount(options, NO_FILE_OPENS);
    cache_added = TestGetTickerCount(options, BLOCK_CACHE_ADD);
    iter = db_->NewIterator(non_blocking_opts, handles_[1]);
    count = 0;
    for (iter->SeekToFirst(); iter->Valid(); iter->Next()) {
      ASSERT_OK(iter->status());
      count++;
    }
    ASSERT_EQ(count, 1);
    ASSERT_EQ(numopen, TestGetTickerCount(options, NO_FILE_OPENS));
    ASSERT_EQ(cache_added, TestGetTickerCount(options, BLOCK_CACHE_ADD));
    delete iter;

    // This test verifies block cache behaviors, which are not used by the
    // plain table format.
    // Exclude kHashCuckoo as it does not support iteration currently
  } while (ChangeOptions(kSkipPlainTable | kSkipNoSeekToLast | kSkipHashCuckoo |
                         kSkipMmapReads));
}
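
namespace {
// Editorial sketch, not part of the original test: counting the keys that are
// readable without any storage IO by scanning at kBlockCacheTier, the tier the
// test above exercises. The helper name CountCachedKeys is an assumption.
bool CountCachedKeys(DB* db, ColumnFamilyHandle* cf, int* count) {
  ReadOptions ro;
  ro.read_tier = kBlockCacheTier;  // fail with Status::Incomplete instead of IO
  Iterator* iter = db->NewIterator(ro, cf);
  *count = 0;
  for (iter->SeekToFirst(); iter->Valid(); iter->Next()) {
    ++(*count);
  }
  // Incomplete means some data lives only on disk and was skipped
  bool complete = !iter->status().IsIncomplete();
  delete iter;
  return complete;
}
}  // namespace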

TEST_F(DBTest, ManagedNonBlockingIteration) {
  do {
    ReadOptions non_blocking_opts, regular_opts;
    Options options = CurrentOptions();
    options.statistics = rocksdb::CreateDBStatistics();
    non_blocking_opts.read_tier = kBlockCacheTier;
    non_blocking_opts.managed = true;
    CreateAndReopenWithCF({"pikachu"}, options);
    // write one kv to the database.
    ASSERT_OK(Put(1, "a", "b"));

    // scan using non-blocking iterator. We should find it because
    // it is in memtable.
    Iterator* iter = db_->NewIterator(non_blocking_opts, handles_[1]);
    int count = 0;
    for (iter->SeekToFirst(); iter->Valid(); iter->Next()) {
      ASSERT_OK(iter->status());
      count++;
    }
    ASSERT_EQ(count, 1);
    delete iter;

    // flush memtable to storage. Now, the key should not be in the
    // memtable nor in the block cache.
    ASSERT_OK(Flush(1));

    // verify that a non-blocking iterator does not find any
    // kvs. Neither does it do any IOs to storage.
    int64_t numopen = TestGetTickerCount(options, NO_FILE_OPENS);
    int64_t cache_added = TestGetTickerCount(options, BLOCK_CACHE_ADD);
    iter = db_->NewIterator(non_blocking_opts, handles_[1]);
    count = 0;
    for (iter->SeekToFirst(); iter->Valid(); iter->Next()) {
      count++;
    }
    ASSERT_EQ(count, 0);
    ASSERT_TRUE(iter->status().IsIncomplete());
    ASSERT_EQ(numopen, TestGetTickerCount(options, NO_FILE_OPENS));
    ASSERT_EQ(cache_added, TestGetTickerCount(options, BLOCK_CACHE_ADD));
    delete iter;

    // read in the specified block via a regular get
    ASSERT_EQ(Get(1, "a"), "b");

    // verify that we can find it via a non-blocking scan
    numopen = TestGetTickerCount(options, NO_FILE_OPENS);
    cache_added = TestGetTickerCount(options, BLOCK_CACHE_ADD);
    iter = db_->NewIterator(non_blocking_opts, handles_[1]);
    count = 0;
    for (iter->SeekToFirst(); iter->Valid(); iter->Next()) {
      ASSERT_OK(iter->status());
      count++;
    }
    ASSERT_EQ(count, 1);
    ASSERT_EQ(numopen, TestGetTickerCount(options, NO_FILE_OPENS));
    ASSERT_EQ(cache_added, TestGetTickerCount(options, BLOCK_CACHE_ADD));
    delete iter;

    // This test verifies block cache behaviors, which are not used by the
    // plain table format.
    // Exclude kHashCuckoo as it does not support iteration currently
  } while (ChangeOptions(kSkipPlainTable | kSkipNoSeekToLast | kSkipHashCuckoo |
                         kSkipMmapReads));
}

// A delete is skipped for a key if KeyMayExist(key) returns false.
// Tests WriteBatch consistency and proper delete behaviour.
TEST_F(DBTest, FilterDeletes) {
  do {
    anon::OptionsOverride options_override;
    options_override.filter_policy.reset(NewBloomFilterPolicy(20));
    Options options = CurrentOptions(options_override);
    options.filter_deletes = true;
    CreateAndReopenWithCF({"pikachu"}, options);
    WriteBatch batch;

    batch.Delete(handles_[1], "a");
    dbfull()->Write(WriteOptions(), &batch);
    ASSERT_EQ(AllEntriesFor("a", 1), "[ ]");  // Delete skipped
    batch.Clear();

    batch.Put(handles_[1], "a", "b");
    batch.Delete(handles_[1], "a");
    dbfull()->Write(WriteOptions(), &batch);
    ASSERT_EQ(Get(1, "a"), "NOT_FOUND");
    ASSERT_EQ(AllEntriesFor("a", 1), "[ DEL, b ]");  // Delete issued
    batch.Clear();

    batch.Delete(handles_[1], "c");
    batch.Put(handles_[1], "c", "d");
    dbfull()->Write(WriteOptions(), &batch);
    ASSERT_EQ(Get(1, "c"), "d");
    ASSERT_EQ(AllEntriesFor("c", 1), "[ d ]");  // Delete skipped
    batch.Clear();

    ASSERT_OK(Flush(1));  // A stray Flush

    batch.Delete(handles_[1], "c");
    dbfull()->Write(WriteOptions(), &batch);
    ASSERT_EQ(AllEntriesFor("c", 1), "[ DEL, d ]");  // Delete issued
    batch.Clear();
  } while (ChangeCompactOptions());
}
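
namespace {
// Editorial sketch, not part of the original test: what options.filter_deletes
// does, spelled out by hand. A Delete can be dropped when KeyMayExist() proves
// the key absent, because no tombstone is needed for a key that was never
// written. The helper name DeleteIfMayExist is an assumption for illustration.
void DeleteIfMayExist(DB* db, ColumnFamilyHandle* cf, const Slice& key,
                      WriteBatch* batch) {
  std::string unused;
  if (db->KeyMayExist(ReadOptions(), cf, key, &unused)) {
    batch->Delete(cf, key);  // the key may be present, so issue the tombstone
  }
  // else: skip the Delete entirely, as the "[ ]" assertions above show
}
}  // namespace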

TEST_F(DBTest, GetFilterByPrefixBloom) {
  Options options = last_options_;
  options.prefix_extractor.reset(NewFixedPrefixTransform(8));
  options.statistics = rocksdb::CreateDBStatistics();
  BlockBasedTableOptions bbto;
  bbto.filter_policy.reset(NewBloomFilterPolicy(10, false));
  bbto.whole_key_filtering = false;
  options.table_factory.reset(NewBlockBasedTableFactory(bbto));
  DestroyAndReopen(options);

  WriteOptions wo;
  ReadOptions ro;
  FlushOptions fo;
  fo.wait = true;
  std::string value;

  ASSERT_OK(dbfull()->Put(wo, "barbarbar", "foo"));
  ASSERT_OK(dbfull()->Put(wo, "barbarbar2", "foo2"));
  ASSERT_OK(dbfull()->Put(wo, "foofoofoo", "bar"));

  dbfull()->Flush(fo);

  ASSERT_EQ("foo", Get("barbarbar"));
  ASSERT_EQ(TestGetTickerCount(options, BLOOM_FILTER_USEFUL), 0);
  ASSERT_EQ("foo2", Get("barbarbar2"));
  ASSERT_EQ(TestGetTickerCount(options, BLOOM_FILTER_USEFUL), 0);
  ASSERT_EQ("NOT_FOUND", Get("barbarbar3"));
  ASSERT_EQ(TestGetTickerCount(options, BLOOM_FILTER_USEFUL), 0);

  ASSERT_EQ("NOT_FOUND", Get("barfoofoo"));
  ASSERT_EQ(TestGetTickerCount(options, BLOOM_FILTER_USEFUL), 1);

  ASSERT_EQ("NOT_FOUND", Get("foobarbar"));
  ASSERT_EQ(TestGetTickerCount(options, BLOOM_FILTER_USEFUL), 2);
}
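
namespace {
// Editorial sketch, not part of the original test: the option shape the test
// above relies on. With a fixed-prefix extractor and whole_key_filtering off,
// only the first prefix_len bytes of each key feed the bloom filter, so a
// lookup whose prefix exists (e.g. "barbarbar3") cannot be filtered out.
// The helper name PrefixBloomOptions is an assumption for illustration.
Options PrefixBloomOptions(const Options& base, size_t prefix_len) {
  Options options = base;
  options.prefix_extractor.reset(NewFixedPrefixTransform(prefix_len));
  BlockBasedTableOptions bbto;
  bbto.filter_policy.reset(NewBloomFilterPolicy(10, false));
  bbto.whole_key_filtering = false;  // prefix-only filtering
  options.table_factory.reset(NewBlockBasedTableFactory(bbto));
  return options;
}
}  // namespace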

TEST_F(DBTest, WholeKeyFilterProp) {
  Options options = last_options_;
  options.prefix_extractor.reset(NewFixedPrefixTransform(3));
  options.statistics = rocksdb::CreateDBStatistics();

  BlockBasedTableOptions bbto;
  bbto.filter_policy.reset(NewBloomFilterPolicy(10, false));
  bbto.whole_key_filtering = false;
  options.table_factory.reset(NewBlockBasedTableFactory(bbto));
  DestroyAndReopen(options);

  WriteOptions wo;
  ReadOptions ro;
  FlushOptions fo;
  fo.wait = true;
  std::string value;

  ASSERT_OK(dbfull()->Put(wo, "foobar", "foo"));
  // Need to insert some keys to make sure files are not filtered out by key
  // ranges.
  ASSERT_OK(dbfull()->Put(wo, "aaa", ""));
  ASSERT_OK(dbfull()->Put(wo, "zzz", ""));
  dbfull()->Flush(fo);

  Reopen(options);
  ASSERT_EQ("NOT_FOUND", Get("foo"));
  ASSERT_EQ(TestGetTickerCount(options, BLOOM_FILTER_USEFUL), 0);
  ASSERT_EQ("NOT_FOUND", Get("bar"));
  ASSERT_EQ(TestGetTickerCount(options, BLOOM_FILTER_USEFUL), 1);
  ASSERT_EQ("foo", Get("foobar"));
  ASSERT_EQ(TestGetTickerCount(options, BLOOM_FILTER_USEFUL), 1);

  // Reopen with whole key filtering enabled and prefix extractor
  // NULL. Bloom filter should be off for both whole key and
  // prefix bloom.
  bbto.whole_key_filtering = true;
  options.table_factory.reset(NewBlockBasedTableFactory(bbto));
  options.prefix_extractor.reset();
  Reopen(options);

  ASSERT_EQ(TestGetTickerCount(options, BLOOM_FILTER_USEFUL), 1);
  ASSERT_EQ("NOT_FOUND", Get("foo"));
  ASSERT_EQ(TestGetTickerCount(options, BLOOM_FILTER_USEFUL), 1);
  ASSERT_EQ("NOT_FOUND", Get("bar"));
  ASSERT_EQ(TestGetTickerCount(options, BLOOM_FILTER_USEFUL), 1);
  ASSERT_EQ("foo", Get("foobar"));
  ASSERT_EQ(TestGetTickerCount(options, BLOOM_FILTER_USEFUL), 1);
  // Write DB with only full key filtering.
  ASSERT_OK(dbfull()->Put(wo, "foobar", "foo"));
  // Need to insert some keys to make sure files are not filtered out by key
  // ranges.
  ASSERT_OK(dbfull()->Put(wo, "aaa", ""));
  ASSERT_OK(dbfull()->Put(wo, "zzz", ""));
  db_->CompactRange(CompactRangeOptions(), nullptr, nullptr);

  // Reopen with whole key filtering off and prefix extractor enabled.
  // Still no bloom filter should be used.
  options.prefix_extractor.reset(NewFixedPrefixTransform(3));
  bbto.whole_key_filtering = false;
  options.table_factory.reset(NewBlockBasedTableFactory(bbto));
  Reopen(options);

  ASSERT_EQ(TestGetTickerCount(options, BLOOM_FILTER_USEFUL), 1);
  ASSERT_EQ("NOT_FOUND", Get("foo"));
  ASSERT_EQ(TestGetTickerCount(options, BLOOM_FILTER_USEFUL), 1);
  ASSERT_EQ("NOT_FOUND", Get("bar"));
  ASSERT_EQ(TestGetTickerCount(options, BLOOM_FILTER_USEFUL), 1);
  ASSERT_EQ("foo", Get("foobar"));
  ASSERT_EQ(TestGetTickerCount(options, BLOOM_FILTER_USEFUL), 1);

  // Try to create a DB with mixed files:
  ASSERT_OK(dbfull()->Put(wo, "foobar", "foo"));
  // Need to insert some keys to make sure files are not filtered out by key
  // ranges.
  ASSERT_OK(dbfull()->Put(wo, "aaa", ""));
  ASSERT_OK(dbfull()->Put(wo, "zzz", ""));
  db_->CompactRange(CompactRangeOptions(), nullptr, nullptr);

  options.prefix_extractor.reset();
  bbto.whole_key_filtering = true;
  options.table_factory.reset(NewBlockBasedTableFactory(bbto));
  Reopen(options);

  // Try to create a DB with mixed files.
  ASSERT_OK(dbfull()->Put(wo, "barfoo", "bar"));
  // In this case we need to insert some keys to make sure files are
  // not filtered out by key ranges.
  ASSERT_OK(dbfull()->Put(wo, "aaa", ""));
  ASSERT_OK(dbfull()->Put(wo, "zzz", ""));
  Flush();

  // Now we have two files:
  // File 1: An older file with prefix bloom.
  // File 2: A newer file with whole bloom filter.
  ASSERT_EQ(TestGetTickerCount(options, BLOOM_FILTER_USEFUL), 1);
  ASSERT_EQ("NOT_FOUND", Get("foo"));
  ASSERT_EQ(TestGetTickerCount(options, BLOOM_FILTER_USEFUL), 2);
  ASSERT_EQ("NOT_FOUND", Get("bar"));
  ASSERT_EQ(TestGetTickerCount(options, BLOOM_FILTER_USEFUL), 3);
  ASSERT_EQ("foo", Get("foobar"));
  ASSERT_EQ(TestGetTickerCount(options, BLOOM_FILTER_USEFUL), 4);
  ASSERT_EQ("bar", Get("barfoo"));
  ASSERT_EQ(TestGetTickerCount(options, BLOOM_FILTER_USEFUL), 4);

  // Reopen with the same setting: only whole key is used
  Reopen(options);
  ASSERT_EQ(TestGetTickerCount(options, BLOOM_FILTER_USEFUL), 4);
  ASSERT_EQ("NOT_FOUND", Get("foo"));
  ASSERT_EQ(TestGetTickerCount(options, BLOOM_FILTER_USEFUL), 5);
  ASSERT_EQ("NOT_FOUND", Get("bar"));
  ASSERT_EQ(TestGetTickerCount(options, BLOOM_FILTER_USEFUL), 6);
  ASSERT_EQ("foo", Get("foobar"));
  ASSERT_EQ(TestGetTickerCount(options, BLOOM_FILTER_USEFUL), 7);
  ASSERT_EQ("bar", Get("barfoo"));
  ASSERT_EQ(TestGetTickerCount(options, BLOOM_FILTER_USEFUL), 7);

  // Restart with both filters allowed
  options.prefix_extractor.reset(NewFixedPrefixTransform(3));
  bbto.whole_key_filtering = true;
  options.table_factory.reset(NewBlockBasedTableFactory(bbto));
  Reopen(options);
  ASSERT_EQ(TestGetTickerCount(options, BLOOM_FILTER_USEFUL), 7);
  // File 1 will have it filtered out.
  // File 2 will not, as prefix `foo` exists in the file.
  ASSERT_EQ("NOT_FOUND", Get("foo"));
  ASSERT_EQ(TestGetTickerCount(options, BLOOM_FILTER_USEFUL), 8);
  ASSERT_EQ("NOT_FOUND", Get("bar"));
  ASSERT_EQ(TestGetTickerCount(options, BLOOM_FILTER_USEFUL), 10);
  ASSERT_EQ("foo", Get("foobar"));
  ASSERT_EQ(TestGetTickerCount(options, BLOOM_FILTER_USEFUL), 11);
  ASSERT_EQ("bar", Get("barfoo"));
  ASSERT_EQ(TestGetTickerCount(options, BLOOM_FILTER_USEFUL), 11);

  // Restart with only prefix bloom allowed.
  options.prefix_extractor.reset(NewFixedPrefixTransform(3));
  bbto.whole_key_filtering = false;
  options.table_factory.reset(NewBlockBasedTableFactory(bbto));
  Reopen(options);
  ASSERT_EQ(TestGetTickerCount(options, BLOOM_FILTER_USEFUL), 11);
  ASSERT_EQ("NOT_FOUND", Get("foo"));
  ASSERT_EQ(TestGetTickerCount(options, BLOOM_FILTER_USEFUL), 11);
  ASSERT_EQ("NOT_FOUND", Get("bar"));
  ASSERT_EQ(TestGetTickerCount(options, BLOOM_FILTER_USEFUL), 12);
  ASSERT_EQ("foo", Get("foobar"));
  ASSERT_EQ(TestGetTickerCount(options, BLOOM_FILTER_USEFUL), 12);
  ASSERT_EQ("bar", Get("barfoo"));
  ASSERT_EQ(TestGetTickerCount(options, BLOOM_FILTER_USEFUL), 12);
}
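
namespace {
// Editorial sketch, not part of the original test: the whole-key counterpart
// used in the mixed-file phases above. Filters are baked into each SST file at
// write time, so after reopening with different settings a DB can legitimately
// contain a mix of prefix-filtered and whole-key-filtered files.
// The helper name WholeKeyBloomOptions is an assumption for illustration.
Options WholeKeyBloomOptions(const Options& base) {
  Options options = base;
  options.prefix_extractor.reset();  // no prefix extractor
  BlockBasedTableOptions bbto;
  bbto.filter_policy.reset(NewBloomFilterPolicy(10, false));
  bbto.whole_key_filtering = true;  // filter on the full user key
  options.table_factory.reset(NewBlockBasedTableFactory(bbto));
  return options;
}
}  // namespace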

TEST_F(DBTest, IterSeekBeforePrev) {
  ASSERT_OK(Put("a", "b"));
  ASSERT_OK(Put("c", "d"));
  dbfull()->Flush(FlushOptions());
  ASSERT_OK(Put("0", "f"));
  ASSERT_OK(Put("1", "h"));
  dbfull()->Flush(FlushOptions());
  ASSERT_OK(Put("2", "j"));
  auto iter = db_->NewIterator(ReadOptions());
  iter->Seek(Slice("c"));
  iter->Prev();
  iter->Seek(Slice("a"));
  iter->Prev();
  delete iter;
}

namespace {
std::string MakeLongKey(size_t length, char c) {
  return std::string(length, c);
}
}  // namespace

TEST_F(DBTest, IterLongKeys) {
  ASSERT_OK(Put(MakeLongKey(20, 0), "0"));
  ASSERT_OK(Put(MakeLongKey(32, 2), "2"));
  ASSERT_OK(Put("a", "b"));
  dbfull()->Flush(FlushOptions());
  ASSERT_OK(Put(MakeLongKey(50, 1), "1"));
  ASSERT_OK(Put(MakeLongKey(127, 3), "3"));
  ASSERT_OK(Put(MakeLongKey(64, 4), "4"));
  auto iter = db_->NewIterator(ReadOptions());

  // Create a key that needs to be skipped for Seq too new
  iter->Seek(MakeLongKey(20, 0));
  ASSERT_EQ(IterStatus(iter), MakeLongKey(20, 0) + "->0");
  iter->Next();
  ASSERT_EQ(IterStatus(iter), MakeLongKey(50, 1) + "->1");
  iter->Next();
  ASSERT_EQ(IterStatus(iter), MakeLongKey(32, 2) + "->2");
  iter->Next();
  ASSERT_EQ(IterStatus(iter), MakeLongKey(127, 3) + "->3");
  iter->Next();
  ASSERT_EQ(IterStatus(iter), MakeLongKey(64, 4) + "->4");
  delete iter;

  iter = db_->NewIterator(ReadOptions());
  iter->Seek(MakeLongKey(50, 1));
  ASSERT_EQ(IterStatus(iter), MakeLongKey(50, 1) + "->1");
  iter->Next();
  ASSERT_EQ(IterStatus(iter), MakeLongKey(32, 2) + "->2");
  iter->Next();
  ASSERT_EQ(IterStatus(iter), MakeLongKey(127, 3) + "->3");
  delete iter;
}

TEST_F(DBTest, IterNextWithNewerSeq) {
  ASSERT_OK(Put("0", "0"));
  dbfull()->Flush(FlushOptions());
  ASSERT_OK(Put("a", "b"));
  ASSERT_OK(Put("c", "d"));
  ASSERT_OK(Put("d", "e"));
  auto iter = db_->NewIterator(ReadOptions());

  // Create a key that needs to be skipped for Seq too new
  for (uint64_t i = 0; i < last_options_.max_sequential_skip_in_iterations + 1;
       i++) {
    ASSERT_OK(Put("b", "f"));
  }

  iter->Seek(Slice("a"));
  ASSERT_EQ(IterStatus(iter), "a->b");
  iter->Next();
  ASSERT_EQ(IterStatus(iter), "c->d");
  delete iter;
}

TEST_F(DBTest, IterPrevWithNewerSeq) {
  ASSERT_OK(Put("0", "0"));
  dbfull()->Flush(FlushOptions());
  ASSERT_OK(Put("a", "b"));
  ASSERT_OK(Put("c", "d"));
  ASSERT_OK(Put("d", "e"));
  auto iter = db_->NewIterator(ReadOptions());

  // Create a key that needs to be skipped for Seq too new
  for (uint64_t i = 0; i < last_options_.max_sequential_skip_in_iterations + 1;
       i++) {
    ASSERT_OK(Put("b", "f"));
  }

  iter->Seek(Slice("d"));
  ASSERT_EQ(IterStatus(iter), "d->e");
  iter->Prev();
  ASSERT_EQ(IterStatus(iter), "c->d");
  iter->Prev();
  ASSERT_EQ(IterStatus(iter), "a->b");

  iter->Prev();
  delete iter;
}

TEST_F(DBTest, IterPrevWithNewerSeq2) {
  ASSERT_OK(Put("0", "0"));
  dbfull()->Flush(FlushOptions());
  ASSERT_OK(Put("a", "b"));
  ASSERT_OK(Put("c", "d"));
  ASSERT_OK(Put("d", "e"));
  auto iter = db_->NewIterator(ReadOptions());
  iter->Seek(Slice("c"));
  ASSERT_EQ(IterStatus(iter), "c->d");

  // Create a key that needs to be skipped for Seq too new
  for (uint64_t i = 0; i < last_options_.max_sequential_skip_in_iterations + 1;
       i++) {
    ASSERT_OK(Put("b", "f"));
  }

  iter->Prev();
  ASSERT_EQ(IterStatus(iter), "a->b");

  iter->Prev();
  delete iter;
}

TEST_F(DBTest, IterEmpty) {
  do {
    CreateAndReopenWithCF({"pikachu"}, CurrentOptions());
    Iterator* iter = db_->NewIterator(ReadOptions(), handles_[1]);

    iter->SeekToFirst();
    ASSERT_EQ(IterStatus(iter), "(invalid)");

    iter->SeekToLast();
    ASSERT_EQ(IterStatus(iter), "(invalid)");

    iter->Seek("foo");
    ASSERT_EQ(IterStatus(iter), "(invalid)");

    delete iter;
  } while (ChangeCompactOptions());
}

TEST_F(DBTest, IterSingle) {
  do {
    CreateAndReopenWithCF({"pikachu"}, CurrentOptions());
    ASSERT_OK(Put(1, "a", "va"));
    Iterator* iter = db_->NewIterator(ReadOptions(), handles_[1]);

    iter->SeekToFirst();
    ASSERT_EQ(IterStatus(iter), "a->va");
    iter->Next();
    ASSERT_EQ(IterStatus(iter), "(invalid)");
    iter->SeekToFirst();
    ASSERT_EQ(IterStatus(iter), "a->va");
    iter->Prev();
    ASSERT_EQ(IterStatus(iter), "(invalid)");

    iter->SeekToLast();
    ASSERT_EQ(IterStatus(iter), "a->va");
    iter->Next();
    ASSERT_EQ(IterStatus(iter), "(invalid)");
    iter->SeekToLast();
    ASSERT_EQ(IterStatus(iter), "a->va");
    iter->Prev();
    ASSERT_EQ(IterStatus(iter), "(invalid)");

    iter->Seek("");
    ASSERT_EQ(IterStatus(iter), "a->va");
    iter->Next();
    ASSERT_EQ(IterStatus(iter), "(invalid)");

    iter->Seek("a");
    ASSERT_EQ(IterStatus(iter), "a->va");
    iter->Next();
    ASSERT_EQ(IterStatus(iter), "(invalid)");

    iter->Seek("b");
    ASSERT_EQ(IterStatus(iter), "(invalid)");

    delete iter;
  } while (ChangeCompactOptions());
}

TEST_F(DBTest, IterMulti) {
  do {
    CreateAndReopenWithCF({"pikachu"}, CurrentOptions());
    ASSERT_OK(Put(1, "a", "va"));
    ASSERT_OK(Put(1, "b", "vb"));
    ASSERT_OK(Put(1, "c", "vc"));
    Iterator* iter = db_->NewIterator(ReadOptions(), handles_[1]);

    iter->SeekToFirst();
    ASSERT_EQ(IterStatus(iter), "a->va");
    iter->Next();
    ASSERT_EQ(IterStatus(iter), "b->vb");
    iter->Next();
    ASSERT_EQ(IterStatus(iter), "c->vc");
    iter->Next();
    ASSERT_EQ(IterStatus(iter), "(invalid)");
    iter->SeekToFirst();
    ASSERT_EQ(IterStatus(iter), "a->va");
    iter->Prev();
    ASSERT_EQ(IterStatus(iter), "(invalid)");

    iter->SeekToLast();
    ASSERT_EQ(IterStatus(iter), "c->vc");
    iter->Prev();
    ASSERT_EQ(IterStatus(iter), "b->vb");
    iter->Prev();
    ASSERT_EQ(IterStatus(iter), "a->va");
    iter->Prev();
    ASSERT_EQ(IterStatus(iter), "(invalid)");
    iter->SeekToLast();
    ASSERT_EQ(IterStatus(iter), "c->vc");
    iter->Next();
    ASSERT_EQ(IterStatus(iter), "(invalid)");

    iter->Seek("");
    ASSERT_EQ(IterStatus(iter), "a->va");
    iter->Seek("a");
    ASSERT_EQ(IterStatus(iter), "a->va");
    iter->Seek("ax");
    ASSERT_EQ(IterStatus(iter), "b->vb");

    iter->Seek("b");
    ASSERT_EQ(IterStatus(iter), "b->vb");
    iter->Seek("z");
    ASSERT_EQ(IterStatus(iter), "(invalid)");

    // Switch from reverse to forward
    iter->SeekToLast();
    iter->Prev();
    iter->Prev();
    iter->Next();
    ASSERT_EQ(IterStatus(iter), "b->vb");

    // Switch from forward to reverse
    iter->SeekToFirst();
    iter->Next();
    iter->Next();
    iter->Prev();
    ASSERT_EQ(IterStatus(iter), "b->vb");

    // Make sure iter stays at snapshot
    ASSERT_OK(Put(1, "a", "va2"));
    ASSERT_OK(Put(1, "a2", "va3"));
    ASSERT_OK(Put(1, "b", "vb2"));
    ASSERT_OK(Put(1, "c", "vc2"));
    ASSERT_OK(Delete(1, "b"));
    iter->SeekToFirst();
    ASSERT_EQ(IterStatus(iter), "a->va");
    iter->Next();
    ASSERT_EQ(IterStatus(iter), "b->vb");
    iter->Next();
    ASSERT_EQ(IterStatus(iter), "c->vc");
    iter->Next();
    ASSERT_EQ(IterStatus(iter), "(invalid)");
    iter->SeekToLast();
    ASSERT_EQ(IterStatus(iter), "c->vc");
    iter->Prev();
    ASSERT_EQ(IterStatus(iter), "b->vb");
    iter->Prev();
    ASSERT_EQ(IterStatus(iter), "a->va");
    iter->Prev();
    ASSERT_EQ(IterStatus(iter), "(invalid)");

    delete iter;
  } while (ChangeCompactOptions());
}

// Check that we can skip over a run of user keys
// by using reseek rather than sequential scan
TEST_F(DBTest, IterReseek) {
  anon::OptionsOverride options_override;
  options_override.skip_policy = kSkipNoSnapshot;
  Options options = CurrentOptions(options_override);
  options.max_sequential_skip_in_iterations = 3;
  options.create_if_missing = true;
  options.statistics = rocksdb::CreateDBStatistics();
  DestroyAndReopen(options);
  CreateAndReopenWithCF({"pikachu"}, options);

  // insert two keys with same userkey and verify that
  // reseek is not invoked. For each of these test cases,
  // verify that we can find the next key "b".
  ASSERT_OK(Put(1, "a", "one"));
  ASSERT_OK(Put(1, "a", "two"));
  ASSERT_OK(Put(1, "b", "bone"));
  Iterator* iter = db_->NewIterator(ReadOptions(), handles_[1]);
  iter->SeekToFirst();
  ASSERT_EQ(TestGetTickerCount(options, NUMBER_OF_RESEEKS_IN_ITERATION), 0);
  ASSERT_EQ(IterStatus(iter), "a->two");
  iter->Next();
  ASSERT_EQ(TestGetTickerCount(options, NUMBER_OF_RESEEKS_IN_ITERATION), 0);
  ASSERT_EQ(IterStatus(iter), "b->bone");
  delete iter;

  // insert a total of three keys with same userkey and verify
  // that reseek is still not invoked.
  ASSERT_OK(Put(1, "a", "three"));
  iter = db_->NewIterator(ReadOptions(), handles_[1]);
  iter->SeekToFirst();
  ASSERT_EQ(IterStatus(iter), "a->three");
  iter->Next();
  ASSERT_EQ(TestGetTickerCount(options, NUMBER_OF_RESEEKS_IN_ITERATION), 0);
  ASSERT_EQ(IterStatus(iter), "b->bone");
  delete iter;

  // insert a total of four keys with same userkey and verify
  // that reseek is invoked.
  ASSERT_OK(Put(1, "a", "four"));
  iter = db_->NewIterator(ReadOptions(), handles_[1]);
  iter->SeekToFirst();
  ASSERT_EQ(IterStatus(iter), "a->four");
  ASSERT_EQ(TestGetTickerCount(options, NUMBER_OF_RESEEKS_IN_ITERATION), 0);
  iter->Next();
  ASSERT_EQ(TestGetTickerCount(options, NUMBER_OF_RESEEKS_IN_ITERATION), 1);
  ASSERT_EQ(IterStatus(iter), "b->bone");
  delete iter;

  // Testing reverse iterator
  // At this point, we have four versions of "a" and one version of "b".
  // The reseek statistic is already at 1.
  int num_reseeks = static_cast<int>(
      TestGetTickerCount(options, NUMBER_OF_RESEEKS_IN_ITERATION));

  // Insert another version of b and assert that reseek is not invoked
  ASSERT_OK(Put(1, "b", "btwo"));
  iter = db_->NewIterator(ReadOptions(), handles_[1]);
  iter->SeekToLast();
  ASSERT_EQ(IterStatus(iter), "b->btwo");
  ASSERT_EQ(TestGetTickerCount(options, NUMBER_OF_RESEEKS_IN_ITERATION),
            num_reseeks);
  iter->Prev();
  ASSERT_EQ(TestGetTickerCount(options, NUMBER_OF_RESEEKS_IN_ITERATION),
            num_reseeks + 1);
  ASSERT_EQ(IterStatus(iter), "a->four");
  delete iter;

  // insert two more versions of b. This makes a total of 4 versions
  // of b and 4 versions of a.
  ASSERT_OK(Put(1, "b", "bthree"));
  ASSERT_OK(Put(1, "b", "bfour"));
  iter = db_->NewIterator(ReadOptions(), handles_[1]);
  iter->SeekToLast();
  ASSERT_EQ(IterStatus(iter), "b->bfour");
  ASSERT_EQ(TestGetTickerCount(options, NUMBER_OF_RESEEKS_IN_ITERATION),
            num_reseeks + 2);
  iter->Prev();

  // the previous Prev call should have invoked reseek
  ASSERT_EQ(TestGetTickerCount(options, NUMBER_OF_RESEEKS_IN_ITERATION),
            num_reseeks + 3);
  ASSERT_EQ(IterStatus(iter), "a->four");
  delete iter;
}
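
namespace {
// Editorial sketch, not part of the original test: reading the counter the
// assertions above track. Each Next()/Prev() that has to step over more than
// max_sequential_skip_in_iterations hidden versions of one user key abandons
// the sequential scan and reseeks, bumping this ticker once.
// The helper name NumIterReseeks is an assumption for illustration.
uint64_t NumIterReseeks(const Options& options) {
  return options.statistics->getTickerCount(NUMBER_OF_RESEEKS_IN_ITERATION);
}
}  // namespace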

TEST_F(DBTest, IterSmallAndLargeMix) {
  do {
    CreateAndReopenWithCF({"pikachu"}, CurrentOptions());
    ASSERT_OK(Put(1, "a", "va"));
    ASSERT_OK(Put(1, "b", std::string(100000, 'b')));
    ASSERT_OK(Put(1, "c", "vc"));
    ASSERT_OK(Put(1, "d", std::string(100000, 'd')));
    ASSERT_OK(Put(1, "e", std::string(100000, 'e')));

    Iterator* iter = db_->NewIterator(ReadOptions(), handles_[1]);

    iter->SeekToFirst();
    ASSERT_EQ(IterStatus(iter), "a->va");
    iter->Next();
    ASSERT_EQ(IterStatus(iter), "b->" + std::string(100000, 'b'));
    iter->Next();
    ASSERT_EQ(IterStatus(iter), "c->vc");
    iter->Next();
    ASSERT_EQ(IterStatus(iter), "d->" + std::string(100000, 'd'));
    iter->Next();
    ASSERT_EQ(IterStatus(iter), "e->" + std::string(100000, 'e'));
    iter->Next();
    ASSERT_EQ(IterStatus(iter), "(invalid)");

    iter->SeekToLast();
    ASSERT_EQ(IterStatus(iter), "e->" + std::string(100000, 'e'));
    iter->Prev();
    ASSERT_EQ(IterStatus(iter), "d->" + std::string(100000, 'd'));
    iter->Prev();
    ASSERT_EQ(IterStatus(iter), "c->vc");
    iter->Prev();
    ASSERT_EQ(IterStatus(iter), "b->" + std::string(100000, 'b'));
    iter->Prev();
    ASSERT_EQ(IterStatus(iter), "a->va");
    iter->Prev();
    ASSERT_EQ(IterStatus(iter), "(invalid)");

    delete iter;
  } while (ChangeCompactOptions());
}

TEST_F(DBTest, IterMultiWithDelete) {
  do {
    CreateAndReopenWithCF({"pikachu"}, CurrentOptions());
    ASSERT_OK(Put(1, "ka", "va"));
    ASSERT_OK(Put(1, "kb", "vb"));
    ASSERT_OK(Put(1, "kc", "vc"));
    ASSERT_OK(Delete(1, "kb"));
    ASSERT_EQ("NOT_FOUND", Get(1, "kb"));

    Iterator* iter = db_->NewIterator(ReadOptions(), handles_[1]);
    iter->Seek("kc");
    ASSERT_EQ(IterStatus(iter), "kc->vc");
    if (!CurrentOptions().merge_operator) {
      // TODO: merge operator does not support backward iteration yet
      if (kPlainTableAllBytesPrefix != option_config_ &&
          kBlockBasedTableWithWholeKeyHashIndex != option_config_ &&
          kHashLinkList != option_config_) {
        iter->Prev();
        ASSERT_EQ(IterStatus(iter), "ka->va");
      }
    }
    delete iter;
  } while (ChangeOptions());
}

TEST_F(DBTest, IterPrevMaxSkip) {
  do {
    CreateAndReopenWithCF({"pikachu"}, CurrentOptions());
    for (int i = 0; i < 2; i++) {
      ASSERT_OK(Put(1, "key1", "v1"));
      ASSERT_OK(Put(1, "key2", "v2"));
      ASSERT_OK(Put(1, "key3", "v3"));
      ASSERT_OK(Put(1, "key4", "v4"));
      ASSERT_OK(Put(1, "key5", "v5"));
    }

    VerifyIterLast("key5->v5", 1);

    ASSERT_OK(Delete(1, "key5"));
    VerifyIterLast("key4->v4", 1);

    ASSERT_OK(Delete(1, "key4"));
    VerifyIterLast("key3->v3", 1);

    ASSERT_OK(Delete(1, "key3"));
    VerifyIterLast("key2->v2", 1);

    ASSERT_OK(Delete(1, "key2"));
    VerifyIterLast("key1->v1", 1);

    ASSERT_OK(Delete(1, "key1"));
    VerifyIterLast("(invalid)", 1);
  } while (ChangeOptions(kSkipMergePut | kSkipNoSeekToLast));
}

TEST_F(DBTest, IterWithSnapshot) {
  anon::OptionsOverride options_override;
  options_override.skip_policy = kSkipNoSnapshot;
  do {
    CreateAndReopenWithCF({"pikachu"}, CurrentOptions(options_override));
    ASSERT_OK(Put(1, "key1", "val1"));
    ASSERT_OK(Put(1, "key2", "val2"));
    ASSERT_OK(Put(1, "key3", "val3"));
    ASSERT_OK(Put(1, "key4", "val4"));
    ASSERT_OK(Put(1, "key5", "val5"));

    const Snapshot* snapshot = db_->GetSnapshot();
    ReadOptions options;
    options.snapshot = snapshot;
    Iterator* iter = db_->NewIterator(options, handles_[1]);

    // Put more values after the snapshot
    ASSERT_OK(Put(1, "key100", "val100"));
    ASSERT_OK(Put(1, "key101", "val101"));

    iter->Seek("key5");
    ASSERT_EQ(IterStatus(iter), "key5->val5");
    if (!CurrentOptions().merge_operator) {
      // TODO: merge operator does not support backward iteration yet
      if (kPlainTableAllBytesPrefix != option_config_ &&
          kBlockBasedTableWithWholeKeyHashIndex != option_config_ &&
          kHashLinkList != option_config_) {
        iter->Prev();
        ASSERT_EQ(IterStatus(iter), "key4->val4");
        iter->Prev();
        ASSERT_EQ(IterStatus(iter), "key3->val3");

        iter->Next();
        ASSERT_EQ(IterStatus(iter), "key4->val4");
        iter->Next();
        ASSERT_EQ(IterStatus(iter), "key5->val5");
      }
      iter->Next();
      ASSERT_TRUE(!iter->Valid());
    }
    db_->ReleaseSnapshot(snapshot);
    delete iter;
    // skip as HashCuckooRep does not support snapshot
  } while (ChangeOptions(kSkipHashCuckoo));
}
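
namespace {
// Editorial sketch, not part of the original test: the pinning behavior the
// test above depends on. An iterator created with ReadOptions::snapshot never
// observes writes made after GetSnapshot(), no matter how long it stays open.
// The helper name CountKeysAtSnapshot is an assumption for illustration.
int CountKeysAtSnapshot(DB* db, ColumnFamilyHandle* cf, const Snapshot* snap) {
  ReadOptions ro;
  ro.snapshot = snap;  // read as of this sequence number
  Iterator* iter = db->NewIterator(ro, cf);
  int count = 0;
  for (iter->SeekToFirst(); iter->Valid(); iter->Next()) {
    ++count;
  }
  delete iter;
  return count;
}
}  // namespace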

TEST_F(DBTest, Recover) {
  do {
    CreateAndReopenWithCF({"pikachu"}, CurrentOptions());
    ASSERT_OK(Put(1, "foo", "v1"));
    ASSERT_OK(Put(1, "baz", "v5"));

    ReopenWithColumnFamilies({"default", "pikachu"}, CurrentOptions());
    ASSERT_EQ("v1", Get(1, "foo"));

    ASSERT_EQ("v1", Get(1, "foo"));
    ASSERT_EQ("v5", Get(1, "baz"));
    ASSERT_OK(Put(1, "bar", "v2"));
    ASSERT_OK(Put(1, "foo", "v3"));

    ReopenWithColumnFamilies({"default", "pikachu"}, CurrentOptions());
    ASSERT_EQ("v3", Get(1, "foo"));
    ASSERT_OK(Put(1, "foo", "v4"));
    ASSERT_EQ("v4", Get(1, "foo"));
    ASSERT_EQ("v2", Get(1, "bar"));
    ASSERT_EQ("v5", Get(1, "baz"));
  } while (ChangeOptions());
}
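
namespace {
// Editorial sketch, not part of the original test: the WAL-recovery contract
// Recover exercises through the harness. A Put with default WriteOptions is
// durable across a clean close and reopen even without an explicit Flush(),
// because it is replayed from the write-ahead log. The helper name and the
// Status::Corruption message are assumptions for illustration.
Status PutCloseReopenCheck(const std::string& path) {
  Options options;
  options.create_if_missing = true;
  DB* db = nullptr;
  Status s = DB::Open(options, path, &db);
  if (!s.ok()) return s;
  s = db->Put(WriteOptions(), "foo", "v1");
  delete db;  // close: "foo" still lives only in the WAL and memtable
  if (!s.ok()) return s;
  s = DB::Open(options, path, &db);  // recovery replays the WAL here
  if (!s.ok()) return s;
  std::string value;
  s = db->Get(ReadOptions(), "foo", &value);
  delete db;
  if (s.ok() && value != "v1") {
    s = Status::Corruption("recovered value mismatch");
  }
  return s;
}
}  // namespace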

TEST_F(DBTest, RecoverWithTableHandle) {
  do {
    Options options;
    options.create_if_missing = true;
    options.write_buffer_size = 100;
    options.disable_auto_compactions = true;
    options = CurrentOptions(options);
    DestroyAndReopen(options);
    CreateAndReopenWithCF({"pikachu"}, options);

    ASSERT_OK(Put(1, "foo", "v1"));
    ASSERT_OK(Put(1, "bar", "v2"));
    ASSERT_OK(Flush(1));
    ASSERT_OK(Put(1, "foo", "v3"));
    ASSERT_OK(Put(1, "bar", "v4"));
    ASSERT_OK(Flush(1));
    ASSERT_OK(Put(1, "big", std::string(100, 'a')));
    ReopenWithColumnFamilies({"default", "pikachu"}, CurrentOptions());

    std::vector<std::vector<FileMetaData>> files;
    dbfull()->TEST_GetFilesMetaData(handles_[1], &files);
    int total_files = 0;
    for (const auto& level : files) {
      total_files += level.size();
    }
    ASSERT_EQ(total_files, 3);
    for (const auto& level : files) {
      for (const auto& file : level) {
        if (kInfiniteMaxOpenFiles == option_config_) {
          ASSERT_TRUE(file.table_reader_handle != nullptr);
        } else {
          ASSERT_TRUE(file.table_reader_handle == nullptr);
        }
      }
    }
  } while (ChangeOptions());
}

TEST_F(DBTest, IgnoreRecoveredLog) {
std::string backup_logs = dbname_ + "/backup_logs";
|
|
|
|
|
|
|
|
// delete old files in backup_logs directory
|
|
|
|
env_->CreateDirIfMissing(backup_logs);
|
|
|
|
std::vector<std::string> old_files;
|
|
|
|
env_->GetChildren(backup_logs, &old_files);
|
|
|
|
for (auto& file : old_files) {
|
|
|
|
if (file != "." && file != "..") {
|
|
|
|
env_->DeleteFile(backup_logs + "/" + file);
|
|
|
|
}
|
|
|
|
}
|
|
|
|
|
|
|
|
do {
|
|
|
|
Options options = CurrentOptions();
|
|
|
|
options.create_if_missing = true;
|
|
|
|
options.merge_operator = MergeOperators::CreateUInt64AddOperator();
|
options.wal_dir = dbname_ + "/logs";
|
|
|
|
DestroyAndReopen(options);
|
    // fill up the DB
    std::string one, two;
    PutFixed64(&one, 1);
    PutFixed64(&two, 2);
    ASSERT_OK(db_->Merge(WriteOptions(), Slice("foo"), Slice(one)));
    ASSERT_OK(db_->Merge(WriteOptions(), Slice("foo"), Slice(one)));
    ASSERT_OK(db_->Merge(WriteOptions(), Slice("bar"), Slice(one)));

    // copy the logs to backup
    std::vector<std::string> logs;
    env_->GetChildren(options.wal_dir, &logs);
    for (auto& log : logs) {
      if (log != ".." && log != ".") {
        CopyFile(options.wal_dir + "/" + log, backup_logs + "/" + log);
      }
    }

    // recover the DB
DBTest: options clean up - part 1
Summary:
DBTest has several functions (Reopen(), TryReopen(), ChangeOptions(), etc.)
that take a pointer to options; depending on whether it is nullptr, they use
different options underneath. This makes it really hard to track what
options are used in different test cases. We should just kill the default
value and make it be passed in explicitly. It is going to be very
hairy. I will start with simple ones.
Test Plan:
make db_test
stacked diffs, will run test with full stack
Reviewers: sdong, yhchiang, rven, igor
Reviewed By: igor
Subscribers: dhruba, leveldb
Differential Revision: https://reviews.facebook.net/D27687
2014-10-29 18:58:09 +00:00
    Reopen(options);
    ASSERT_EQ(two, Get("foo"));
    ASSERT_EQ(one, Get("bar"));
    Close();

    // copy the logs from backup back to wal dir
    for (auto& log : logs) {
      if (log != ".." && log != ".") {
        CopyFile(backup_logs + "/" + log, options.wal_dir + "/" + log);
      }
    }

    // this should ignore the log files, recovery should not happen again
    // if the recovery happens, the same merge operator would be called twice,
    // leading to incorrect results
    Reopen(options);
    ASSERT_EQ(two, Get("foo"));
    ASSERT_EQ(one, Get("bar"));
    Close();

    Destroy(options);
    Reopen(options);
    Close();

    // copy the logs from backup back to wal dir
    env_->CreateDirIfMissing(options.wal_dir);
    for (auto& log : logs) {
      if (log != ".." && log != ".") {
        CopyFile(backup_logs + "/" + log, options.wal_dir + "/" + log);
      }
    }

    // assert that we successfully recovered only from logs, even though we
    // destroyed the DB
    Reopen(options);
    ASSERT_EQ(two, Get("foo"));
    ASSERT_EQ(one, Get("bar"));

    // Recovery will fail if DB directory doesn't exist.
    Destroy(options);
    // copy the logs from backup back to wal dir
    env_->CreateDirIfMissing(options.wal_dir);
    for (auto& log : logs) {
      if (log != ".." && log != ".") {
        CopyFile(backup_logs + "/" + log, options.wal_dir + "/" + log);
        // we won't be needing this file anymore
        env_->DeleteFile(backup_logs + "/" + log);
      }
    }
    Status s = TryReopen(options);
    ASSERT_TRUE(!s.ok());
Add a new mem-table representation based on cuckoo hash.
Summary:
= Major Changes =
* Add a new mem-table representation, HashCuckooRep, which is based on cuckoo hash.
Cuckoo hash uses multiple hash functions. This allows each key to have multiple
possible locations in the mem-table.
- Put: When inserting a key, it will try to find whether one of its possible
locations is vacant and store the key there. If none of its possible
locations are available, then it will kick out a victim key and
store the new key at that location. The kicked-out victim key will then be
stored at a vacant space of its possible locations or kick out
another victim. In this diff, the kick-out path (known as
cuckoo-path) is found using BFS, which guarantees the shortest path.
- Get: Simply tries all possible locations of a key --- this guarantees
worst-case constant time complexity.
- Time complexity: O(1) for Get, and average O(1) for Put if the
fullness of the mem-table is below 80%.
- Default using two hash functions, the number of hash functions used
by the cuckoo-hash may dynamically increase if it fails to find a
short-enough kick-out path.
- Currently, HashCuckooRep does not support iteration and snapshots,
as our current main purpose of this is to optimize point access.
= Minor Changes =
* Add IsSnapshotSupported() to DB to indicate whether the current DB
supports snapshots. If it returns false, then DB::GetSnapshot() will
always return nullptr.
Test Plan:
Run existing tests. Will develop a test specifically for cuckoo hash in
the next diff.
Reviewers: sdong, haobo
Reviewed By: sdong
CC: leveldb, dhruba, igor
Differential Revision: https://reviews.facebook.net/D16155
2014-04-30 00:13:46 +00:00
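
As an aside to the summary above, here is a toy version of the insert path it describes, using two hash functions and a bounded greedy kick-out chain instead of the BFS search the diff implements; everything here is illustrative, not HashCuckooRep's real code:

#include <cstddef>
#include <functional>
#include <string>
#include <utility>
#include <vector>

// Toy cuckoo table: each key has two candidate slots; on collision we evict
// a victim and try to re-place it, up to a fixed number of kicks. An empty
// string marks a free slot (toy simplification).
class ToyCuckoo {
 public:
  explicit ToyCuckoo(size_t buckets) : slots_(buckets) {}

  bool Insert(std::string key) {
    for (int kicks = 0; kicks < 16; ++kicks) {  // bounded kick-out chain
      size_t h1 = Hash(key, 1), h2 = Hash(key, 2);
      if (slots_[h1].empty()) { slots_[h1] = std::move(key); return true; }
      if (slots_[h2].empty()) { slots_[h2] = std::move(key); return true; }
      std::swap(key, slots_[h1]);               // evict a victim, re-place it
    }
    return false;  // real impl would rehash or add hash functions here
  }

  // Worst-case constant time: at most two probes.
  bool Contains(const std::string& key) const {
    return slots_[Hash(key, 1)] == key || slots_[Hash(key, 2)] == key;
  }

 private:
  size_t Hash(const std::string& key, size_t seed) const {
    return (std::hash<std::string>()(key) + seed) % slots_.size();
  }
  std::vector<std::string> slots_;
};
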
  } while (ChangeOptions(kSkipHashCuckoo));
}
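
To make the "merge operator called twice" comment concrete: with the uint64-add operator, replaying the same log again re-applies every Merge record, so the stored counters silently double. A self-contained sketch of the arithmetic (hypothetical helper, illustration only):

#include <cstdint>

// Two merges of +1 yield 2 after a single recovery, but 4 if the same log
// were recovered twice -- exactly the corruption the test above guards
// against by marking recovered logs in the manifest.
uint64_t ReplayMerges(int replay_count, int merges_per_log) {
  uint64_t value = 0;
  for (int r = 0; r < replay_count; ++r) {
    for (int m = 0; m < merges_per_log; ++m) {
      value += 1;  // UInt64AddOperator: add the operand (here, 1)
    }
  }
  return value;
}
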
TEST_F(DBTest, CheckLock) {
  do {
    DB* localdb;
    Options options = CurrentOptions();
    ASSERT_OK(TryReopen(options));

    // second open should fail
    ASSERT_TRUE(!(DB::Open(options, dbname_, &localdb)).ok());
  } while (ChangeCompactOptions());
}

TEST_F(DBTest, FlushMultipleMemtable) {
  do {
    Options options = CurrentOptions();
    WriteOptions writeOpt = WriteOptions();
    writeOpt.disableWAL = true;
    options.max_write_buffer_number = 4;
    options.min_write_buffer_number_to_merge = 3;
Support saving history in memtable_list
Summary:
For transactions, we are using the memtables to validate that there are no write conflicts. But after flushing, we don't have any memtables, and transactions could fail to commit. So we want to somehow keep around some extra history to use for conflict checking. In addition, we want to provide a way to increase the size of this history if too many transactions fail to commit.
After chatting with people, it seems like everyone prefers just using Memtables to store this history (instead of a separate history structure). It seems like the best place for this is abstracted inside the memtable_list. I decided to create a separate list in MemtableListVersion, as using the same list complicated the flush/install-flush-results logic too much.
This diff adds a new parameter to control how much memtable history to keep around after flushing. However, it sounds like people aren't too fond of adding new parameters. So I am making the default size of flushed+not-flushed memtables be set to max_write_buffers. This should not change the maximum amount of memory used, but makes it more likely we're running closer to the limit. (We are now postponing deleting flushed memtables until the max_write_buffer limit is reached.) So while we might use more memory on average, we are still obeying the limit set (and you could argue it's better to go ahead and use up memory now instead of waiting for a write stall to happen to test this limit).
However, if people are opposed to this default behavior, we can easily set it to 0 and require this parameter to be set in order to use transactions.
Test Plan: Added an xfunc test to play around with setting different values of this parameter in all tests. Added testing in memtablelist_test and planning on adding more testing here.
Reviewers: sdong, rven, igor
Reviewed By: igor
Subscribers: dhruba, leveldb
Differential Revision: https://reviews.facebook.net/D37443
2015-05-28 23:34:24 +00:00
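
A hedged sketch of setting the new knob; the values are illustrative, and the exact meaning of the special values -1 and 0 (both exercised by the tests here) is defined in options.h:

#include "rocksdb/options.h"

// Keep some flushed memtables around as history for write-conflict checking;
// they are not needed for reads and are dropped once the maintained count is
// exceeded.
rocksdb::Options MakeHistoryOptions() {
  rocksdb::Options options;
  options.max_write_buffer_number = 4;
  options.max_write_buffer_number_to_maintain = 4;
  return options;
}
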
    options.max_write_buffer_number_to_maintain = -1;
    CreateAndReopenWithCF({"pikachu"}, options);

    ASSERT_OK(dbfull()->Put(writeOpt, handles_[1], "foo", "v1"));
    ASSERT_OK(Flush(1));
    ASSERT_OK(dbfull()->Put(writeOpt, handles_[1], "bar", "v1"));

    ASSERT_EQ("v1", Get(1, "foo"));
    ASSERT_EQ("v1", Get(1, "bar"));
    ASSERT_OK(Flush(1));
  } while (ChangeCompactOptions());
}

TEST_F(DBTest, NumImmutableMemTable) {
  do {
    Options options = CurrentOptions();
    WriteOptions writeOpt = WriteOptions();
    writeOpt.disableWAL = true;
    options.max_write_buffer_number = 4;
    options.min_write_buffer_number_to_merge = 3;
    options.max_write_buffer_number_to_maintain = 0;
    options.write_buffer_size = 1000000;
    CreateAndReopenWithCF({"pikachu"}, options);

    std::string big_value(1000000 * 2, 'x');
    std::string num;
    SetPerfLevel(kEnableTime);
    ASSERT_TRUE(GetPerfLevel() == kEnableTime);

    ASSERT_OK(dbfull()->Put(writeOpt, handles_[1], "k1", big_value));
    ASSERT_TRUE(dbfull()->GetProperty(handles_[1],
                                      "rocksdb.num-immutable-mem-table", &num));
    ASSERT_EQ(num, "0");
    ASSERT_TRUE(dbfull()->GetProperty(
        handles_[1], "rocksdb.num-entries-active-mem-table", &num));
    ASSERT_EQ(num, "1");
    perf_context.Reset();
    Get(1, "k1");
    ASSERT_EQ(1, (int) perf_context.get_from_memtable_count);

    ASSERT_OK(dbfull()->Put(writeOpt, handles_[1], "k2", big_value));
    ASSERT_TRUE(dbfull()->GetProperty(handles_[1],
                                      "rocksdb.num-immutable-mem-table", &num));
    ASSERT_EQ(num, "1");
    ASSERT_TRUE(dbfull()->GetProperty(
        handles_[1], "rocksdb.num-entries-active-mem-table", &num));
    ASSERT_EQ(num, "1");
    ASSERT_TRUE(dbfull()->GetProperty(
        handles_[1], "rocksdb.num-entries-imm-mem-tables", &num));
    ASSERT_EQ(num, "1");

    perf_context.Reset();
    Get(1, "k1");
    ASSERT_EQ(2, (int) perf_context.get_from_memtable_count);
    perf_context.Reset();
    Get(1, "k2");
    ASSERT_EQ(1, (int) perf_context.get_from_memtable_count);

    ASSERT_OK(dbfull()->Put(writeOpt, handles_[1], "k3", big_value));
    ASSERT_TRUE(dbfull()->GetProperty(
        handles_[1], "rocksdb.cur-size-active-mem-table", &num));
    ASSERT_TRUE(dbfull()->GetProperty(handles_[1],
                                      "rocksdb.num-immutable-mem-table", &num));
    ASSERT_EQ(num, "2");
    ASSERT_TRUE(dbfull()->GetProperty(
        handles_[1], "rocksdb.num-entries-active-mem-table", &num));
    ASSERT_EQ(num, "1");
    ASSERT_TRUE(dbfull()->GetProperty(
        handles_[1], "rocksdb.num-entries-imm-mem-tables", &num));
    ASSERT_EQ(num, "2");
    perf_context.Reset();
    Get(1, "k2");
    ASSERT_EQ(2, (int) perf_context.get_from_memtable_count);
    perf_context.Reset();
    Get(1, "k3");
    ASSERT_EQ(1, (int) perf_context.get_from_memtable_count);
    perf_context.Reset();
    Get(1, "k1");
    ASSERT_EQ(3, (int) perf_context.get_from_memtable_count);

    ASSERT_OK(Flush(1));
    ASSERT_TRUE(dbfull()->GetProperty(handles_[1],
                                      "rocksdb.num-immutable-mem-table", &num));
    ASSERT_EQ(num, "0");
    ASSERT_TRUE(dbfull()->GetProperty(
        handles_[1], "rocksdb.cur-size-active-mem-table", &num));
    // "200" is the size of the metadata of an empty skiplist; this would
    // break if we change the default skiplist implementation
    ASSERT_EQ(num, "200");

    uint64_t int_num;
    uint64_t base_total_size;
    ASSERT_TRUE(dbfull()->GetIntProperty(
        handles_[1], "rocksdb.estimate-num-keys", &base_total_size));

    ASSERT_OK(dbfull()->Delete(writeOpt, handles_[1], "k2"));
    ASSERT_OK(dbfull()->Put(writeOpt, handles_[1], "k3", ""));
    ASSERT_OK(dbfull()->Delete(writeOpt, handles_[1], "k3"));
    ASSERT_TRUE(dbfull()->GetIntProperty(
        handles_[1], "rocksdb.num-deletes-active-mem-table", &int_num));
    ASSERT_EQ(int_num, 2U);
    ASSERT_TRUE(dbfull()->GetIntProperty(
        handles_[1], "rocksdb.num-entries-active-mem-table", &int_num));
    ASSERT_EQ(int_num, 3U);

    ASSERT_OK(dbfull()->Put(writeOpt, handles_[1], "k2", big_value));
    ASSERT_OK(dbfull()->Put(writeOpt, handles_[1], "k2", big_value));
    ASSERT_TRUE(dbfull()->GetIntProperty(
        handles_[1], "rocksdb.num-entries-imm-mem-tables", &int_num));
    ASSERT_EQ(int_num, 4U);
    ASSERT_TRUE(dbfull()->GetIntProperty(
        handles_[1], "rocksdb.num-deletes-imm-mem-tables", &int_num));
    ASSERT_EQ(int_num, 2U);

    ASSERT_TRUE(dbfull()->GetIntProperty(
        handles_[1], "rocksdb.estimate-num-keys", &int_num));
    ASSERT_EQ(int_num, base_total_size + 1);

    SetPerfLevel(kDisable);
    ASSERT_TRUE(GetPerfLevel() == kDisable);
  } while (ChangeCompactOptions());
}
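
The test above repeats one measurement pattern throughout; pulled out as a self-contained sketch (CountMemtableGets is a hypothetical helper, but the perf-context calls are the ones used above):

#include <cstdint>
#include <string>

#include "rocksdb/db.h"
#include "rocksdb/perf_context.h"
#include "rocksdb/perf_level.h"

// Enable perf timing, reset the thread-local counters, run exactly one
// operation, then read the counter of interest before disabling again.
uint64_t CountMemtableGets(rocksdb::DB* db, const std::string& key) {
  rocksdb::SetPerfLevel(rocksdb::kEnableTime);
  rocksdb::perf_context.Reset();
  std::string value;
  db->Get(rocksdb::ReadOptions(), key, &value);  // status ignored in sketch
  uint64_t hits = rocksdb::perf_context.get_from_memtable_count;
  rocksdb::SetPerfLevel(rocksdb::kDisable);
  return hits;
}
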

class SleepingBackgroundTask {
 public:
  SleepingBackgroundTask()
      : bg_cv_(&mutex_), should_sleep_(true), done_with_sleep_(false) {}
  void DoSleep() {
    MutexLock l(&mutex_);
    while (should_sleep_) {
      bg_cv_.Wait();
    }
    done_with_sleep_ = true;
    bg_cv_.SignalAll();
  }
  void WakeUp() {
    MutexLock l(&mutex_);
    should_sleep_ = false;
    bg_cv_.SignalAll();
  }
  void WaitUntilDone() {
    MutexLock l(&mutex_);
    while (!done_with_sleep_) {
      bg_cv_.Wait();
    }
  }
Deprecate WriteOptions::timeout_hint_us
Summary:
In one of our recent meetings, we discussed deprecating features that are not being actively used. One of those features, at least within Facebook, is timeout_hint. The feature is really nicely implemented, but if nobody needs it, we should remove it from our code-base (until we get a valid use-case). Some arguments:
* Less code == better icache hit rate, smaller builds, simpler code
* The motivation for adding timeout_hint_us was to work around RocksDB's stall issue. However, we're currently addressing the stall issue itself (see @sdong's recent work on stall write_rate), so we should never see sharp lock-ups in the future.
* Nobody is using the feature within Facebook's code-base. Googling for `timeout_hint_us` also doesn't yield any users.
Test Plan: make check
Reviewers: anthony, kradhakrishnan, sdong, yhchiang
Reviewed By: yhchiang
Subscribers: sdong, dhruba, leveldb
Differential Revision: https://reviews.facebook.net/D41937
2015-07-14 07:35:48 +00:00
  bool WokenUp() {
    MutexLock l(&mutex_);
    return should_sleep_ == false;
  }

  void Reset() {
    MutexLock l(&mutex_);
    should_sleep_ = true;
    done_with_sleep_ = false;
  }

  static void DoSleepTask(void* arg) {
    reinterpret_cast<SleepingBackgroundTask*>(arg)->DoSleep();
  }

 private:
  port::Mutex mutex_;
  port::CondVar bg_cv_;  // Signalled when background work finishes
  bool should_sleep_;
  bool done_with_sleep_;
};
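
A sketch of how the tests below drive this helper; RunWithBlockedLowPool is a hypothetical wrapper, but the Env calls are the ones the tests use:

#include <functional>

#include "rocksdb/env.h"

// Park a SleepingBackgroundTask on the (single-threaded) LOW pool so that
// background compactions queue behind it, run the test body, then release
// the pool and join deterministically.
void RunWithBlockedLowPool(rocksdb::Env* env, SleepingBackgroundTask* task,
                           const std::function<void()>& body) {
  env->SetBackgroundThreads(1, rocksdb::Env::LOW);
  env->Schedule(&SleepingBackgroundTask::DoSleepTask, task,
                rocksdb::Env::Priority::LOW);
  body();                 // LOW-priority background work is blocked here
  task->WakeUp();         // lets DoSleep() fall out of its wait loop
  task->WaitUntilDone();  // wait until the background task has finished
}
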

TEST_F(DBTest, FlushEmptyColumnFamily) {
  // Block flush thread and disable compaction thread
  env_->SetBackgroundThreads(1, Env::HIGH);
  env_->SetBackgroundThreads(1, Env::LOW);
  SleepingBackgroundTask sleeping_task_low;
  env_->Schedule(&SleepingBackgroundTask::DoSleepTask, &sleeping_task_low,
                 Env::Priority::LOW);
  SleepingBackgroundTask sleeping_task_high;
  env_->Schedule(&SleepingBackgroundTask::DoSleepTask, &sleeping_task_high,
                 Env::Priority::HIGH);

  Options options = CurrentOptions();
  // disable compaction
  options.disable_auto_compactions = true;
  WriteOptions writeOpt = WriteOptions();
  writeOpt.disableWAL = true;
  options.max_write_buffer_number = 2;
  options.min_write_buffer_number_to_merge = 1;
  options.max_write_buffer_number_to_maintain = 1;
  CreateAndReopenWithCF({"pikachu"}, options);

  // Compaction can still go through even if no thread can flush the
  // mem table.
  ASSERT_OK(Flush(0));
  ASSERT_OK(Flush(1));

  // Insert can go through
  ASSERT_OK(dbfull()->Put(writeOpt, handles_[0], "foo", "v1"));
  ASSERT_OK(dbfull()->Put(writeOpt, handles_[1], "bar", "v1"));

  ASSERT_EQ("v1", Get(0, "foo"));
  ASSERT_EQ("v1", Get(1, "bar"));

  sleeping_task_high.WakeUp();
  sleeping_task_high.WaitUntilDone();

  // Flush can still go through.
  ASSERT_OK(Flush(0));
  ASSERT_OK(Flush(1));

  sleeping_task_low.WakeUp();
  sleeping_task_low.WaitUntilDone();
}

TEST_F(DBTest, GetProperty) {
  // Set sizes of both background thread pools to 1 and block them.
  env_->SetBackgroundThreads(1, Env::HIGH);
  env_->SetBackgroundThreads(1, Env::LOW);
  SleepingBackgroundTask sleeping_task_low;
  env_->Schedule(&SleepingBackgroundTask::DoSleepTask, &sleeping_task_low,
                 Env::Priority::LOW);
  SleepingBackgroundTask sleeping_task_high;
  env_->Schedule(&SleepingBackgroundTask::DoSleepTask, &sleeping_task_high,
                 Env::Priority::HIGH);

  Options options = CurrentOptions();
  WriteOptions writeOpt = WriteOptions();
  writeOpt.disableWAL = true;
  options.compaction_style = kCompactionStyleUniversal;
  options.level0_file_num_compaction_trigger = 1;
  options.compaction_options_universal.size_ratio = 50;
  options.max_background_compactions = 1;
  options.max_background_flushes = 1;
  options.max_write_buffer_number = 10;
  options.min_write_buffer_number_to_merge = 1;
  options.max_write_buffer_number_to_maintain = 0;
  options.write_buffer_size = 1000000;
  Reopen(options);

  std::string big_value(1000000 * 2, 'x');
  std::string num;
  uint64_t int_num;
  SetPerfLevel(kEnableTime);

  ASSERT_TRUE(
      dbfull()->GetIntProperty("rocksdb.estimate-table-readers-mem", &int_num));
  ASSERT_EQ(int_num, 0U);
  ASSERT_TRUE(
      dbfull()->GetIntProperty("rocksdb.estimate-live-data-size", &int_num));
  ASSERT_EQ(int_num, 0U);

  ASSERT_OK(dbfull()->Put(writeOpt, "k1", big_value));
  ASSERT_TRUE(dbfull()->GetProperty("rocksdb.num-immutable-mem-table", &num));
  ASSERT_EQ(num, "0");
  ASSERT_TRUE(dbfull()->GetProperty("rocksdb.mem-table-flush-pending", &num));
  ASSERT_EQ(num, "0");
  ASSERT_TRUE(dbfull()->GetProperty("rocksdb.compaction-pending", &num));
  ASSERT_EQ(num, "0");
  ASSERT_TRUE(dbfull()->GetProperty("rocksdb.estimate-num-keys", &num));
  ASSERT_EQ(num, "1");
  perf_context.Reset();

  ASSERT_OK(dbfull()->Put(writeOpt, "k2", big_value));
  ASSERT_TRUE(dbfull()->GetProperty("rocksdb.num-immutable-mem-table", &num));
  ASSERT_EQ(num, "1");
  ASSERT_OK(dbfull()->Delete(writeOpt, "k-non-existing"));
  ASSERT_OK(dbfull()->Put(writeOpt, "k3", big_value));
  ASSERT_TRUE(dbfull()->GetProperty("rocksdb.num-immutable-mem-table", &num));
  ASSERT_EQ(num, "2");
  ASSERT_TRUE(dbfull()->GetProperty("rocksdb.mem-table-flush-pending", &num));
  ASSERT_EQ(num, "1");
  ASSERT_TRUE(dbfull()->GetProperty("rocksdb.compaction-pending", &num));
  ASSERT_EQ(num, "0");
  ASSERT_TRUE(dbfull()->GetProperty("rocksdb.estimate-num-keys", &num));
  ASSERT_EQ(num, "2");
  // Verify the same set of properties through GetIntProperty
  ASSERT_TRUE(
      dbfull()->GetIntProperty("rocksdb.num-immutable-mem-table", &int_num));
  ASSERT_EQ(int_num, 2U);
  ASSERT_TRUE(
      dbfull()->GetIntProperty("rocksdb.mem-table-flush-pending", &int_num));
  ASSERT_EQ(int_num, 1U);
  ASSERT_TRUE(dbfull()->GetIntProperty("rocksdb.compaction-pending", &int_num));
  ASSERT_EQ(int_num, 0U);
  ASSERT_TRUE(dbfull()->GetIntProperty("rocksdb.estimate-num-keys", &int_num));
  ASSERT_EQ(int_num, 2U);

  ASSERT_TRUE(
      dbfull()->GetIntProperty("rocksdb.estimate-table-readers-mem", &int_num));
  ASSERT_EQ(int_num, 0U);

  sleeping_task_high.WakeUp();
  sleeping_task_high.WaitUntilDone();
  dbfull()->TEST_WaitForFlushMemTable();

  ASSERT_OK(dbfull()->Put(writeOpt, "k4", big_value));
  ASSERT_OK(dbfull()->Put(writeOpt, "k5", big_value));
  dbfull()->TEST_WaitForFlushMemTable();
  ASSERT_TRUE(dbfull()->GetProperty("rocksdb.mem-table-flush-pending", &num));
  ASSERT_EQ(num, "0");
  ASSERT_TRUE(dbfull()->GetProperty("rocksdb.compaction-pending", &num));
  ASSERT_EQ(num, "1");
  ASSERT_TRUE(dbfull()->GetProperty("rocksdb.estimate-num-keys", &num));
  ASSERT_EQ(num, "4");

  ASSERT_TRUE(
      dbfull()->GetIntProperty("rocksdb.estimate-table-readers-mem", &int_num));
  ASSERT_GT(int_num, 0U);

  sleeping_task_low.WakeUp();
  sleeping_task_low.WaitUntilDone();

  dbfull()->TEST_WaitForFlushMemTable();
  options.max_open_files = 10;
  Reopen(options);
  // After reopening, no table reader is loaded, so no memory for table readers
  ASSERT_TRUE(
      dbfull()->GetIntProperty("rocksdb.estimate-table-readers-mem", &int_num));
  ASSERT_EQ(int_num, 0U);
  ASSERT_TRUE(dbfull()->GetIntProperty("rocksdb.estimate-num-keys", &int_num));
  ASSERT_GT(int_num, 0U);

  // After reading a key, at least one table reader is loaded.
  Get("k5");
  ASSERT_TRUE(
      dbfull()->GetIntProperty("rocksdb.estimate-table-readers-mem", &int_num));
  ASSERT_GT(int_num, 0U);

  // Test rocksdb.num-live-versions
  {
    options.level0_file_num_compaction_trigger = 20;
    Reopen(options);
    ASSERT_TRUE(
        dbfull()->GetIntProperty("rocksdb.num-live-versions", &int_num));
    ASSERT_EQ(int_num, 1U);

    // Use an iterator to hold current version
    std::unique_ptr<Iterator> iter1(dbfull()->NewIterator(ReadOptions()));

    ASSERT_OK(dbfull()->Put(writeOpt, "k6", big_value));
    Flush();
    ASSERT_TRUE(
        dbfull()->GetIntProperty("rocksdb.num-live-versions", &int_num));
    ASSERT_EQ(int_num, 2U);

    // Use an iterator to hold current version
    std::unique_ptr<Iterator> iter2(dbfull()->NewIterator(ReadOptions()));

    ASSERT_OK(dbfull()->Put(writeOpt, "k7", big_value));
    Flush();
    ASSERT_TRUE(
        dbfull()->GetIntProperty("rocksdb.num-live-versions", &int_num));
    ASSERT_EQ(int_num, 3U);

    iter2.reset();
    ASSERT_TRUE(
        dbfull()->GetIntProperty("rocksdb.num-live-versions", &int_num));
    ASSERT_EQ(int_num, 2U);

    iter1.reset();
    ASSERT_TRUE(
        dbfull()->GetIntProperty("rocksdb.num-live-versions", &int_num));
    ASSERT_EQ(int_num, 1U);
  }
}

TEST_F(DBTest, FLUSH) {
  do {
    CreateAndReopenWithCF({"pikachu"}, CurrentOptions());
    WriteOptions writeOpt = WriteOptions();
    writeOpt.disableWAL = true;
    SetPerfLevel(kEnableTime);
    ASSERT_OK(dbfull()->Put(writeOpt, handles_[1], "foo", "v1"));
    // this will now also flush the last 2 writes
    ASSERT_OK(Flush(1));
    ASSERT_OK(dbfull()->Put(writeOpt, handles_[1], "bar", "v1"));

    perf_context.Reset();
    Get(1, "foo");
    ASSERT_TRUE((int) perf_context.get_from_output_files_time > 0);

    ReopenWithColumnFamilies({"default", "pikachu"}, CurrentOptions());
    ASSERT_EQ("v1", Get(1, "foo"));
    ASSERT_EQ("v1", Get(1, "bar"));

    writeOpt.disableWAL = true;
    ASSERT_OK(dbfull()->Put(writeOpt, handles_[1], "bar", "v2"));
    ASSERT_OK(dbfull()->Put(writeOpt, handles_[1], "foo", "v2"));
    ASSERT_OK(Flush(1));

    ReopenWithColumnFamilies({"default", "pikachu"}, CurrentOptions());
    ASSERT_EQ("v2", Get(1, "bar"));
    perf_context.Reset();
    ASSERT_EQ("v2", Get(1, "foo"));
    ASSERT_TRUE((int) perf_context.get_from_output_files_time > 0);

    writeOpt.disableWAL = false;
    ASSERT_OK(dbfull()->Put(writeOpt, handles_[1], "bar", "v3"));
    ASSERT_OK(dbfull()->Put(writeOpt, handles_[1], "foo", "v3"));
    ASSERT_OK(Flush(1));

    ReopenWithColumnFamilies({"default", "pikachu"}, CurrentOptions());
    // 'foo' should be there because its put
    // has WAL enabled.
    ASSERT_EQ("v3", Get(1, "foo"));
    ASSERT_EQ("v3", Get(1, "bar"));

    SetPerfLevel(kDisable);
  } while (ChangeCompactOptions());
}

TEST_F(DBTest, RecoveryWithEmptyLog) {
  do {
    CreateAndReopenWithCF({"pikachu"}, CurrentOptions());
    ASSERT_OK(Put(1, "foo", "v1"));
    ASSERT_OK(Put(1, "foo", "v2"));
    ReopenWithColumnFamilies({"default", "pikachu"}, CurrentOptions());
    ReopenWithColumnFamilies({"default", "pikachu"}, CurrentOptions());
    ASSERT_OK(Put(1, "foo", "v3"));
    ReopenWithColumnFamilies({"default", "pikachu"}, CurrentOptions());
    ASSERT_EQ("v3", Get(1, "foo"));
  } while (ChangeOptions());
}

TEST_F(DBTest, FlushSchedule) {
  Options options = CurrentOptions();
  options.disable_auto_compactions = true;
  options.level0_stop_writes_trigger = 1 << 10;
  options.level0_slowdown_writes_trigger = 1 << 10;
  options.min_write_buffer_number_to_merge = 1;
  options.max_write_buffer_number_to_maintain = 1;
  options.max_write_buffer_number = 2;
  options.write_buffer_size = 100 * 1000;
  CreateAndReopenWithCF({"pikachu"}, options);
  std::vector<std::thread> threads;

  std::atomic<int> thread_num(0);
  // each column family will have 5 threads, each thread generating 2
  // memtables. each column family should end up with 10 table files
  std::function<void()> fill_memtable_func = [&]() {
    int a = thread_num.fetch_add(1);
    Random rnd(a);
    WriteOptions wo;
    // this should fill up 2 memtables
    for (int k = 0; k < 5000; ++k) {
      ASSERT_OK(db_->Put(wo, handles_[a & 1], RandomString(&rnd, 13), ""));
    }
  };

  for (int i = 0; i < 10; ++i) {
    threads.emplace_back(fill_memtable_func);
  }

  for (auto& t : threads) {
    t.join();
  }

  auto default_tables = GetNumberOfSstFilesForColumnFamily(db_, "default");
  auto pikachu_tables = GetNumberOfSstFilesForColumnFamily(db_, "pikachu");
  ASSERT_LE(default_tables, static_cast<uint64_t>(10));
  ASSERT_GT(default_tables, static_cast<uint64_t>(0));
  ASSERT_LE(pikachu_tables, static_cast<uint64_t>(10));
  ASSERT_GT(pikachu_tables, static_cast<uint64_t>(0));
}

TEST_F(DBTest, ManifestRollOver) {
  do {
    Options options;
    options.max_manifest_file_size = 10;  // 10 bytes
    options = CurrentOptions(options);
    CreateAndReopenWithCF({"pikachu"}, options);
    {
      ASSERT_OK(Put(1, "manifest_key1", std::string(1000, '1')));
      ASSERT_OK(Put(1, "manifest_key2", std::string(1000, '2')));
      ASSERT_OK(Put(1, "manifest_key3", std::string(1000, '3')));
      uint64_t manifest_before_flush = dbfull()->TEST_Current_Manifest_FileNo();
      ASSERT_OK(Flush(1));  // This should trigger LogAndApply.
      uint64_t manifest_after_flush = dbfull()->TEST_Current_Manifest_FileNo();
      ASSERT_GT(manifest_after_flush, manifest_before_flush);
      ReopenWithColumnFamilies({"default", "pikachu"}, options);
      ASSERT_GT(dbfull()->TEST_Current_Manifest_FileNo(), manifest_after_flush);
      // Verify that the data survived the manifest roll-over.
      ASSERT_EQ(std::string(1000, '1'), Get(1, "manifest_key1"));
      ASSERT_EQ(std::string(1000, '2'), Get(1, "manifest_key2"));
      ASSERT_EQ(std::string(1000, '3'), Get(1, "manifest_key3"));
    }
  } while (ChangeCompactOptions());
}
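
// A minimal sketch (not part of the test) of how an application would cap
// MANIFEST growth in practice; kDBPath is a hypothetical path and the 1MB cap
// is an arbitrary example value:
//
//   rocksdb::Options opts;
//   opts.create_if_missing = true;
//   opts.max_manifest_file_size = 1024 * 1024;  // roll the MANIFEST past ~1MB
//   rocksdb::DB* db = nullptr;
//   rocksdb::Status s = rocksdb::DB::Open(opts, kDBPath, &db);
//   assert(s.ok());
//   delete db;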

TEST_F(DBTest, IdentityAcrossRestarts) {
  do {
    std::string id1;
    ASSERT_OK(db_->GetDbIdentity(id1));

    Options options = CurrentOptions();
    Reopen(options);
    std::string id2;
    ASSERT_OK(db_->GetDbIdentity(id2));
    // id1 should match id2 because identity was not regenerated
    ASSERT_EQ(id1.compare(id2), 0);

    std::string idfilename = IdentityFileName(dbname_);
    ASSERT_OK(env_->DeleteFile(idfilename));
    Reopen(options);
    std::string id3;
    ASSERT_OK(db_->GetDbIdentity(id3));
    // id1 should NOT match id3 because identity was regenerated
    ASSERT_NE(id1.compare(id3), 0);
  } while (ChangeCompactOptions());
}
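
// GetDbIdentity is the public API exercised above; a hedged usage sketch,
// assuming an open `db` handle:
//
//   std::string identity;
//   rocksdb::Status s = db->GetDbIdentity(identity);
//   if (s.ok()) {
//     // `identity` uniquely names this DB instance; it is persisted in the
//     // IDENTITY file and regenerated only if that file is lost.
//   }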

TEST_F(DBTest, RecoverWithLargeLog) {
  do {
    {
      Options options = CurrentOptions();
      CreateAndReopenWithCF({"pikachu"}, options);
      ASSERT_OK(Put(1, "big1", std::string(200000, '1')));
      ASSERT_OK(Put(1, "big2", std::string(200000, '2')));
      ASSERT_OK(Put(1, "small3", std::string(10, '3')));
      ASSERT_OK(Put(1, "small4", std::string(10, '4')));
      ASSERT_EQ(NumTableFilesAtLevel(0, 1), 0);
    }

    // Make sure that if we re-open with a small write buffer size,
    // we flush table files in the middle of a large log file.
    Options options;
    options.write_buffer_size = 100000;
    options = CurrentOptions(options);
    ReopenWithColumnFamilies({"default", "pikachu"}, options);
    ASSERT_EQ(NumTableFilesAtLevel(0, 1), 3);
    ASSERT_EQ(std::string(200000, '1'), Get(1, "big1"));
    ASSERT_EQ(std::string(200000, '2'), Get(1, "big2"));
    ASSERT_EQ(std::string(10, '3'), Get(1, "small3"));
    ASSERT_EQ(std::string(10, '4'), Get(1, "small4"));
    ASSERT_GT(NumTableFilesAtLevel(0, 1), 1);
  } while (ChangeCompactOptions());
}
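
// Why this works, as a hedged note: recovery replays the WAL through the
// memtable, so the same write_buffer_size cap applies during DB::Open
// (the value below is an example):
//
//   rocksdb::Options opts;
//   opts.write_buffer_size = 100000;  // ~100KB memtable before flush
//   // Replaying a WAL much larger than this spills multiple L0 files,
//   // which is exactly what the test asserts.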

namespace {
class KeepFilter : public CompactionFilter {
 public:
  virtual bool Filter(int level, const Slice& key, const Slice& value,
                      std::string* new_value,
                      bool* value_changed) const override {
    return false;
  }

  virtual const char* Name() const override { return "KeepFilter"; }
};

class KeepFilterFactory : public CompactionFilterFactory {
 public:
  explicit KeepFilterFactory(bool check_context = false)
      : check_context_(check_context) {}

  virtual std::unique_ptr<CompactionFilter> CreateCompactionFilter(
      const CompactionFilter::Context& context) override {
    if (check_context_) {
      EXPECT_EQ(expect_full_compaction_.load(), context.is_full_compaction);
      EXPECT_EQ(expect_manual_compaction_.load(), context.is_manual_compaction);
    }
    return std::unique_ptr<CompactionFilter>(new KeepFilter());
  }

  virtual const char* Name() const override { return "KeepFilterFactory"; }
  bool check_context_;
  std::atomic_bool expect_full_compaction_;
  std::atomic_bool expect_manual_compaction_;
};

class DelayFilter : public CompactionFilter {
 public:
  explicit DelayFilter(DBTestBase* d) : db_test(d) {}
  virtual bool Filter(int level, const Slice& key, const Slice& value,
                      std::string* new_value,
                      bool* value_changed) const override {
    db_test->env_->addon_time_.fetch_add(1000);
    return true;
  }

  virtual const char* Name() const override { return "DelayFilter"; }

 private:
  DBTestBase* db_test;
};

class DelayFilterFactory : public CompactionFilterFactory {
 public:
  explicit DelayFilterFactory(DBTestBase* d) : db_test(d) {}
  virtual std::unique_ptr<CompactionFilter> CreateCompactionFilter(
      const CompactionFilter::Context& context) override {
    return std::unique_ptr<CompactionFilter>(new DelayFilter(db_test));
  }

  virtual const char* Name() const override { return "DelayFilterFactory"; }

 private:
  DBTestBase* db_test;
};
}  // namespace
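
// Wiring sketch (hedged; not used directly at this point in the file): a
// CompactionFilterFactory is installed through the options, and RocksDB calls
// CreateCompactionFilter() once per compaction run:
//
//   rocksdb::Options opts;
//   opts.compaction_filter_factory = std::make_shared<KeepFilterFactory>();
//   // Every key/value visited by a compaction is then passed to
//   // KeepFilter::Filter; returning false keeps the entry, true drops it.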

TEST_F(DBTest, CompressedCache) {
  if (!Snappy_Supported()) {
    return;
  }
  int num_iter = 80;

  // Run this test four iterations.
  // Iteration 1: only an uncompressed block cache
  // Iteration 2: only a compressed block cache
  // Iteration 3: both block cache and compressed cache
  // Iteration 4: both block cache and compressed cache, but DB is not
  // compressed
  for (int iter = 0; iter < 4; iter++) {
    Options options;
    options.write_buffer_size = 64 * 1024;  // small write buffer
    options.statistics = rocksdb::CreateDBStatistics();
    options = CurrentOptions(options);

    BlockBasedTableOptions table_options;
    switch (iter) {
      case 0:
        // only uncompressed block cache
        table_options.block_cache = NewLRUCache(8 * 1024);
        table_options.block_cache_compressed = nullptr;
        options.table_factory.reset(NewBlockBasedTableFactory(table_options));
        break;
      case 1:
        // no block cache, only compressed cache
        table_options.no_block_cache = true;
        table_options.block_cache = nullptr;
        table_options.block_cache_compressed = NewLRUCache(8 * 1024);
        options.table_factory.reset(NewBlockBasedTableFactory(table_options));
        break;
      case 2:
        // both compressed and uncompressed block cache
        table_options.block_cache = NewLRUCache(1024);
        table_options.block_cache_compressed = NewLRUCache(8 * 1024);
        options.table_factory.reset(NewBlockBasedTableFactory(table_options));
        break;
      case 3:
        // both block cache and compressed cache, but DB is not compressed
        // also, make block cache sizes bigger, to trigger block cache hits
        table_options.block_cache = NewLRUCache(1024 * 1024);
        table_options.block_cache_compressed = NewLRUCache(8 * 1024 * 1024);
        options.table_factory.reset(NewBlockBasedTableFactory(table_options));
        options.compression = kNoCompression;
        break;
      default:
        ASSERT_TRUE(false);
    }
    CreateAndReopenWithCF({"pikachu"}, options);
    // default column family doesn't have block cache
    Options no_block_cache_opts;
    no_block_cache_opts.statistics = options.statistics;
    no_block_cache_opts = CurrentOptions(no_block_cache_opts);
    BlockBasedTableOptions table_options_no_bc;
    table_options_no_bc.no_block_cache = true;
    no_block_cache_opts.table_factory.reset(
        NewBlockBasedTableFactory(table_options_no_bc));
    ReopenWithColumnFamilies(
        {"default", "pikachu"},
        std::vector<Options>({no_block_cache_opts, options}));

    Random rnd(301);

    // Write 8MB (80 values, each 100K)
    ASSERT_EQ(NumTableFilesAtLevel(0, 1), 0);
    std::vector<std::string> values;
    std::string str;
    for (int i = 0; i < num_iter; i++) {
      if (i % 4 == 0) {  // high compression ratio
        str = RandomString(&rnd, 1000);
      }
      values.push_back(str);
      ASSERT_OK(Put(1, Key(i), values[i]));
    }

    // flush all data from memtable so that reads are from block cache
    ASSERT_OK(Flush(1));

    for (int i = 0; i < num_iter; i++) {
      ASSERT_EQ(Get(1, Key(i)), values[i]);
    }

    // check that we triggered the appropriate code paths in the cache
    switch (iter) {
      case 0:
        // only uncompressed block cache
        ASSERT_GT(TestGetTickerCount(options, BLOCK_CACHE_MISS), 0);
        ASSERT_EQ(TestGetTickerCount(options, BLOCK_CACHE_COMPRESSED_MISS), 0);
        break;
      case 1:
        // no block cache, only compressed cache
        ASSERT_EQ(TestGetTickerCount(options, BLOCK_CACHE_MISS), 0);
        ASSERT_GT(TestGetTickerCount(options, BLOCK_CACHE_COMPRESSED_MISS), 0);
        break;
      case 2:
        // both compressed and uncompressed block cache
        ASSERT_GT(TestGetTickerCount(options, BLOCK_CACHE_MISS), 0);
        ASSERT_GT(TestGetTickerCount(options, BLOCK_CACHE_COMPRESSED_MISS), 0);
        break;
      case 3:
        // both compressed and uncompressed block cache
        ASSERT_GT(TestGetTickerCount(options, BLOCK_CACHE_MISS), 0);
        ASSERT_GT(TestGetTickerCount(options, BLOCK_CACHE_HIT), 0);
        ASSERT_GT(TestGetTickerCount(options, BLOCK_CACHE_COMPRESSED_MISS), 0);
        // compressed doesn't have any hits since blocks are not compressed on
        // storage
        ASSERT_EQ(TestGetTickerCount(options, BLOCK_CACHE_COMPRESSED_HIT), 0);
        break;
      default:
        ASSERT_TRUE(false);
    }

    options.create_if_missing = true;
    DestroyAndReopen(options);
  }
}
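
// A hedged configuration sketch of the two cache tiers this test exercises
// (sizes below are example values):
//
//   rocksdb::BlockBasedTableOptions t;
//   t.block_cache = rocksdb::NewLRUCache(64 << 20);              // uncompressed
//   t.block_cache_compressed = rocksdb::NewLRUCache(256 << 20);  // compressed
//   rocksdb::Options opts;
//   opts.table_factory.reset(rocksdb::NewBlockBasedTableFactory(t));
//   // Reads consult the uncompressed cache first, then the compressed cache,
//   // and only then the file system.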

static std::string CompressibleString(Random* rnd, int len) {
  std::string r;
  test::CompressibleString(rnd, 0.8, len, &r);
  return r;
}

TEST_F(DBTest, FailMoreDbPaths) {
  Options options = CurrentOptions();
  options.db_paths.emplace_back(dbname_, 10000000);
  options.db_paths.emplace_back(dbname_ + "_2", 1000000);
  options.db_paths.emplace_back(dbname_ + "_3", 1000000);
  options.db_paths.emplace_back(dbname_ + "_4", 1000000);
  options.db_paths.emplace_back(dbname_ + "_5", 1000000);
  ASSERT_TRUE(TryReopen(options).IsNotSupported());
}
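
// db_paths background, hedged: each entry pairs a directory with a target
// size, and at the time of this test more than four paths were rejected with
// Status::NotSupported, which is why five entries must fail above. Example
// paths below are hypothetical:
//
//   rocksdb::Options opts;
//   opts.db_paths.emplace_back("/fast_ssd", 10ull << 30);   // hot data
//   opts.db_paths.emplace_back("/big_hdd", 100ull << 30);   // colder data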

void CheckColumnFamilyMeta(const ColumnFamilyMetaData& cf_meta) {
  uint64_t cf_size = 0;
  size_t file_count = 0;
  for (auto level_meta : cf_meta.levels) {
    uint64_t level_size = 0;
    file_count += level_meta.files.size();
    for (auto file_meta : level_meta.files) {
      level_size += file_meta.size;
    }
    ASSERT_EQ(level_meta.size, level_size);
    cf_size += level_size;
  }
  ASSERT_EQ(cf_meta.file_count, file_count);
  ASSERT_EQ(cf_meta.size, cf_size);
}

TEST_F(DBTest, ColumnFamilyMetaDataTest) {
  Options options = CurrentOptions();
  options.create_if_missing = true;
  DestroyAndReopen(options);

  Random rnd(301);
  int key_index = 0;
  ColumnFamilyMetaData cf_meta;
  for (int i = 0; i < 100; ++i) {
    GenerateNewFile(&rnd, &key_index);
    db_->GetColumnFamilyMetaData(&cf_meta);
    CheckColumnFamilyMeta(cf_meta);
  }
}
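
// Hedged usage sketch of the metadata API validated above, assuming an open
// `db` handle:
//
//   rocksdb::ColumnFamilyMetaData meta;
//   db->GetColumnFamilyMetaData(&meta);  // default column family
//   for (const auto& level : meta.levels) {
//     // level.level, level.size, and level.files describe one LSM level.
//   }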

namespace {
void MinLevelHelper(DBTest* self, Options& options) {
  Random rnd(301);

  for (int num = 0; num < options.level0_file_num_compaction_trigger - 1;
       num++) {
    std::vector<std::string> values;
    // Write 120KB (12 values, each 10K)
    for (int i = 0; i < 12; i++) {
      values.push_back(DBTestBase::RandomString(&rnd, 10000));
      ASSERT_OK(self->Put(DBTestBase::Key(i), values[i]));
    }
    self->dbfull()->TEST_WaitForFlushMemTable();
    ASSERT_EQ(self->NumTableFilesAtLevel(0), num + 1);
  }

  // generate one more file in level-0, which should trigger level-0 compaction
  std::vector<std::string> values;
  for (int i = 0; i < 12; i++) {
    values.push_back(DBTestBase::RandomString(&rnd, 10000));
    ASSERT_OK(self->Put(DBTestBase::Key(i), values[i]));
  }
  self->dbfull()->TEST_WaitForCompact();

  ASSERT_EQ(self->NumTableFilesAtLevel(0), 0);
  ASSERT_EQ(self->NumTableFilesAtLevel(1), 1);
}

// returns false if the calling test should be skipped
bool MinLevelToCompress(CompressionType& type, Options& options, int wbits,
                        int lev, int strategy) {
  fprintf(stderr,
          "Test with compression options: window_bits = %d, level = %d, "
          "strategy = %d\n",
          wbits, lev, strategy);
  options.write_buffer_size = 100 << 10;  // 100KB
  options.num_levels = 3;
  options.level0_file_num_compaction_trigger = 3;
  options.create_if_missing = true;

  if (Snappy_Supported()) {
    type = kSnappyCompression;
    fprintf(stderr, "using snappy\n");
  } else if (Zlib_Supported()) {
    type = kZlibCompression;
    fprintf(stderr, "using zlib\n");
  } else if (BZip2_Supported()) {
    type = kBZip2Compression;
    fprintf(stderr, "using bzip2\n");
  } else if (LZ4_Supported()) {
    type = kLZ4Compression;
    fprintf(stderr, "using lz4\n");
  } else {
    fprintf(stderr, "skipping test, compression disabled\n");
    return false;
  }
  options.compression_per_level.resize(options.num_levels);

  // do not compress L0
  for (int i = 0; i < 1; i++) {
    options.compression_per_level[i] = kNoCompression;
  }
  for (int i = 1; i < options.num_levels; i++) {
    options.compression_per_level[i] = type;
  }
  return true;
}
}  // namespace
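
// compression_per_level in one hedged sketch: index i configures level i, and
// leaving shallow levels uncompressed trades disk space for lower CPU on the
// hottest data (the choices below are examples, not recommendations):
//
//   rocksdb::Options opts;
//   opts.num_levels = 3;
//   opts.compression_per_level.resize(opts.num_levels);
//   opts.compression_per_level[0] = rocksdb::kNoCompression;
//   opts.compression_per_level[1] = rocksdb::kNoCompression;
//   opts.compression_per_level[2] = rocksdb::kSnappyCompression;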

TEST_F(DBTest, MinLevelToCompress1) {
  Options options = CurrentOptions();
  CompressionType type = kSnappyCompression;
  if (!MinLevelToCompress(type, options, -14, -1, 0)) {
    return;
  }
  Reopen(options);
  MinLevelHelper(this, options);

  // do not compress L0 and L1
  for (int i = 0; i < 2; i++) {
    options.compression_per_level[i] = kNoCompression;
  }
  for (int i = 2; i < options.num_levels; i++) {
    options.compression_per_level[i] = type;
  }
  DestroyAndReopen(options);
  MinLevelHelper(this, options);
}

TEST_F(DBTest, MinLevelToCompress2) {
  Options options = CurrentOptions();
  CompressionType type = kSnappyCompression;
  if (!MinLevelToCompress(type, options, 15, -1, 0)) {
    return;
  }
  Reopen(options);
  MinLevelHelper(this, options);

  // do not compress L0 and L1
  for (int i = 0; i < 2; i++) {
    options.compression_per_level[i] = kNoCompression;
  }
  for (int i = 2; i < options.num_levels; i++) {
    options.compression_per_level[i] = type;
  }
  DestroyAndReopen(options);
  MinLevelHelper(this, options);
}

TEST_F(DBTest, RepeatedWritesToSameKey) {
  do {
    Options options;
    options.env = env_;
    options.write_buffer_size = 100000;  // Small write buffer
    options = CurrentOptions(options);
    CreateAndReopenWithCF({"pikachu"}, options);

    // We must have at most one file per level except for level-0,
    // which may have up to kL0_StopWritesTrigger files.
    const int kMaxFiles =
        options.num_levels + options.level0_stop_writes_trigger;

    Random rnd(301);
    std::string value =
        RandomString(&rnd, static_cast<int>(2 * options.write_buffer_size));
    for (int i = 0; i < 5 * kMaxFiles; i++) {
      ASSERT_OK(Put(1, "key", value));
      ASSERT_LE(TotalTableFiles(1), kMaxFiles);
    }
  } while (ChangeCompactOptions());
}

TEST_F(DBTest, SparseMerge) {
  do {
    Options options = CurrentOptions();
    options.compression = kNoCompression;
    CreateAndReopenWithCF({"pikachu"}, options);

    FillLevels("A", "Z", 1);

    // Suppose there is:
    //    small amount of data with prefix A
    //    large amount of data with prefix B
    //    small amount of data with prefix C
    // and that recent updates have made small changes to all three prefixes.
    // Check that we do not do a compaction that merges all of B in one shot.
    const std::string value(1000, 'x');
    Put(1, "A", "va");
    // Write approximately 100MB of "B" values
    for (int i = 0; i < 100000; i++) {
      char key[100];
      snprintf(key, sizeof(key), "B%010d", i);
      Put(1, key, value);
    }
    Put(1, "C", "vc");
    ASSERT_OK(Flush(1));
    dbfull()->TEST_CompactRange(0, nullptr, nullptr, handles_[1]);

    // Make sparse update
    Put(1, "A", "va2");
    Put(1, "B100", "bvalue2");
    Put(1, "C", "vc2");
    ASSERT_OK(Flush(1));

    // Compactions should not cause us to create a situation where
    // a file overlaps too much data at the next level.
    ASSERT_LE(dbfull()->TEST_MaxNextLevelOverlappingBytes(handles_[1]),
              20 * 1048576);
    dbfull()->TEST_CompactRange(0, nullptr, nullptr);
    ASSERT_LE(dbfull()->TEST_MaxNextLevelOverlappingBytes(handles_[1]),
              20 * 1048576);
    dbfull()->TEST_CompactRange(1, nullptr, nullptr);
    ASSERT_LE(dbfull()->TEST_MaxNextLevelOverlappingBytes(handles_[1]),
              20 * 1048576);
  } while (ChangeCompactOptions());
}

static bool Between(uint64_t val, uint64_t low, uint64_t high) {
  bool result = (val >= low) && (val <= high);
  if (!result) {
    fprintf(stderr, "Value %llu is not in range [%llu, %llu]\n",
            (unsigned long long)(val),
            (unsigned long long)(low),
            (unsigned long long)(high));
  }
  return result;
}

TEST_F(DBTest, ApproximateSizesMemTable) {
  Options options;
  options.write_buffer_size = 100000000;  // Large write buffer
  options.compression = kNoCompression;
  options.create_if_missing = true;
  options = CurrentOptions(options);
  DestroyAndReopen(options);

  const int N = 128;
  Random rnd(301);
  for (int i = 0; i < N; i++) {
    ASSERT_OK(Put(Key(i), RandomString(&rnd, 1024)));
  }

  uint64_t size;
  std::string start = Key(50);
  std::string end = Key(60);
  Range r(start, end);
  db_->GetApproximateSizes(&r, 1, &size, true);
  ASSERT_GT(size, 6000);
  ASSERT_LT(size, 204800);
  // Zero if not including mem table
  db_->GetApproximateSizes(&r, 1, &size, false);
  ASSERT_EQ(size, 0);

  start = Key(500);
  end = Key(600);
  r = Range(start, end);
  db_->GetApproximateSizes(&r, 1, &size, true);
  ASSERT_EQ(size, 0);

  for (int i = 0; i < N; i++) {
    ASSERT_OK(Put(Key(1000 + i), RandomString(&rnd, 1024)));
  }

  start = Key(500);
  end = Key(600);
  r = Range(start, end);
  db_->GetApproximateSizes(&r, 1, &size, true);
  ASSERT_EQ(size, 0);

  start = Key(100);
  end = Key(1020);
  r = Range(start, end);
  db_->GetApproximateSizes(&r, 1, &size, true);
  ASSERT_GT(size, 6000);

  options.max_write_buffer_number = 8;
  options.min_write_buffer_number_to_merge = 5;
  options.write_buffer_size = 1024 * N;  // Not very large
  DestroyAndReopen(options);

  int keys[N * 3];
  for (int i = 0; i < N; i++) {
    keys[i * 3] = i * 5;
    keys[i * 3 + 1] = i * 5 + 1;
    keys[i * 3 + 2] = i * 5 + 2;
  }
  std::random_shuffle(std::begin(keys), std::end(keys));

  for (int i = 0; i < N * 3; i++) {
    ASSERT_OK(Put(Key(keys[i] + 1000), RandomString(&rnd, 1024)));
  }

  start = Key(100);
  end = Key(300);
  r = Range(start, end);
  db_->GetApproximateSizes(&r, 1, &size, true);
  ASSERT_EQ(size, 0);

  start = Key(1050);
  end = Key(1080);
  r = Range(start, end);
  db_->GetApproximateSizes(&r, 1, &size, true);
  ASSERT_GT(size, 6000);

  start = Key(2100);
  end = Key(2300);
  r = Range(start, end);
  db_->GetApproximateSizes(&r, 1, &size, true);
  ASSERT_EQ(size, 0);

  start = Key(1050);
  end = Key(1080);
  r = Range(start, end);
  uint64_t size_with_mt, size_without_mt;
  db_->GetApproximateSizes(&r, 1, &size_with_mt, true);
  ASSERT_GT(size_with_mt, 6000);
  db_->GetApproximateSizes(&r, 1, &size_without_mt, false);
  ASSERT_EQ(size_without_mt, 0);

  Flush();

  for (int i = 0; i < N; i++) {
    ASSERT_OK(Put(Key(i + 1000), RandomString(&rnd, 1024)));
  }

  start = Key(1050);
  end = Key(1080);
  r = Range(start, end);
  db_->GetApproximateSizes(&r, 1, &size_with_mt, true);
  db_->GetApproximateSizes(&r, 1, &size_without_mt, false);
  ASSERT_GT(size_with_mt, size_without_mt);
  ASSERT_GT(size_without_mt, 6000);
}
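
// GetApproximateSizes in a hedged nutshell (assumes an open `db`): the final
// bool selects whether memtable contents count toward the estimate; the key
// bounds below are example values:
//
//   std::string a = "key000100", b = "key000200";
//   rocksdb::Range range(a, b);
//   uint64_t sz = 0;
//   db->GetApproximateSizes(&range, 1, &sz, /*include_memtable=*/true);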

TEST_F(DBTest, ApproximateSizes) {
  do {
    Options options;
    options.write_buffer_size = 100000000;  // Large write buffer
    options.compression = kNoCompression;
    options.create_if_missing = true;
    options = CurrentOptions(options);
    DestroyAndReopen(options);
    CreateAndReopenWithCF({"pikachu"}, options);

    ASSERT_TRUE(Between(Size("", "xyz", 1), 0, 0));
    ReopenWithColumnFamilies({"default", "pikachu"}, options);
    ASSERT_TRUE(Between(Size("", "xyz", 1), 0, 0));

    // Write 8MB (80 values, each 100K)
    ASSERT_EQ(NumTableFilesAtLevel(0, 1), 0);
    const int N = 80;
    static const int S1 = 100000;
    static const int S2 = 105000;  // Allow some expansion from metadata
    Random rnd(301);
    for (int i = 0; i < N; i++) {
      ASSERT_OK(Put(1, Key(i), RandomString(&rnd, S1)));
    }

    // 0 because GetApproximateSizes() does not account for memtable space
    ASSERT_TRUE(Between(Size("", Key(50), 1), 0, 0));

    // Check sizes across recovery by reopening a few times
    for (int run = 0; run < 3; run++) {
      ReopenWithColumnFamilies({"default", "pikachu"}, options);

      for (int compact_start = 0; compact_start < N; compact_start += 10) {
        for (int i = 0; i < N; i += 10) {
          ASSERT_TRUE(Between(Size("", Key(i), 1), S1 * i, S2 * i));
          ASSERT_TRUE(Between(Size("", Key(i) + ".suffix", 1), S1 * (i + 1),
                              S2 * (i + 1)));
          ASSERT_TRUE(Between(Size(Key(i), Key(i + 10), 1), S1 * 10, S2 * 10));
        }
        ASSERT_TRUE(Between(Size("", Key(50), 1), S1 * 50, S2 * 50));
        ASSERT_TRUE(
            Between(Size("", Key(50) + ".suffix", 1), S1 * 50, S2 * 50));

        std::string cstart_str = Key(compact_start);
        std::string cend_str = Key(compact_start + 9);
        Slice cstart = cstart_str;
        Slice cend = cend_str;
        dbfull()->TEST_CompactRange(0, &cstart, &cend, handles_[1]);
      }

      ASSERT_EQ(NumTableFilesAtLevel(0, 1), 0);
      ASSERT_GT(NumTableFilesAtLevel(1, 1), 0);
    }
    // ApproximateOffsetOf() is not yet implemented in plain table format.
  } while (ChangeOptions(kSkipUniversalCompaction | kSkipFIFOCompaction |
                         kSkipPlainTable | kSkipHashIndex));
}

TEST_F(DBTest, ApproximateSizes_MixOfSmallAndLarge) {
  do {
    Options options = CurrentOptions();
    options.compression = kNoCompression;
    CreateAndReopenWithCF({"pikachu"}, options);

    Random rnd(301);
    std::string big1 = RandomString(&rnd, 100000);
    ASSERT_OK(Put(1, Key(0), RandomString(&rnd, 10000)));
    ASSERT_OK(Put(1, Key(1), RandomString(&rnd, 10000)));
    ASSERT_OK(Put(1, Key(2), big1));
    ASSERT_OK(Put(1, Key(3), RandomString(&rnd, 10000)));
    ASSERT_OK(Put(1, Key(4), big1));
    ASSERT_OK(Put(1, Key(5), RandomString(&rnd, 10000)));
    ASSERT_OK(Put(1, Key(6), RandomString(&rnd, 300000)));
    ASSERT_OK(Put(1, Key(7), RandomString(&rnd, 10000)));

    // Check sizes across recovery by reopening a few times
    for (int run = 0; run < 3; run++) {
      ReopenWithColumnFamilies({"default", "pikachu"}, options);

      ASSERT_TRUE(Between(Size("", Key(0), 1), 0, 0));
      ASSERT_TRUE(Between(Size("", Key(1), 1), 10000, 11000));
      ASSERT_TRUE(Between(Size("", Key(2), 1), 20000, 21000));
      ASSERT_TRUE(Between(Size("", Key(3), 1), 120000, 121000));
      ASSERT_TRUE(Between(Size("", Key(4), 1), 130000, 131000));
      ASSERT_TRUE(Between(Size("", Key(5), 1), 230000, 231000));
      ASSERT_TRUE(Between(Size("", Key(6), 1), 240000, 241000));
      ASSERT_TRUE(Between(Size("", Key(7), 1), 540000, 541000));
      ASSERT_TRUE(Between(Size("", Key(8), 1), 550000, 560000));

      ASSERT_TRUE(Between(Size(Key(3), Key(5), 1), 110000, 111000));

      dbfull()->TEST_CompactRange(0, nullptr, nullptr, handles_[1]);
    }
    // ApproximateOffsetOf() is not yet implemented in plain table format.
  } while (ChangeOptions(kSkipPlainTable));
}

TEST_F(DBTest, IteratorPinsRef) {
  do {
    CreateAndReopenWithCF({"pikachu"}, CurrentOptions());
    Put(1, "foo", "hello");

    // Get iterator that will yield the current contents of the DB.
    Iterator* iter = db_->NewIterator(ReadOptions(), handles_[1]);

    // Write to force compactions
    Put(1, "foo", "newvalue1");
    for (int i = 0; i < 100; i++) {
      // 100K values
      ASSERT_OK(Put(1, Key(i), Key(i) + std::string(100000, 'v')));
    }
    Put(1, "foo", "newvalue2");

    iter->SeekToFirst();
    ASSERT_TRUE(iter->Valid());
    ASSERT_EQ("foo", iter->key().ToString());
    ASSERT_EQ("hello", iter->value().ToString());
    iter->Next();
    ASSERT_TRUE(!iter->Valid());
    delete iter;
  } while (ChangeCompactOptions());
}
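
// The behavior pinned down above, as a hedged rule of thumb: an Iterator
// observes a fixed version of the DB from the moment NewIterator() returns,
// so later writes and compactions cannot change what it yields, and it must
// be deleted to release the files and memtables it pins:
//
//   rocksdb::Iterator* it = db->NewIterator(rocksdb::ReadOptions());
//   // ... concurrent Put()s are invisible to `it` ...
//   delete it;  // releases pinned resources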
|
|
|
|
|
2015-03-17 21:08:00 +00:00
|
|
|
TEST_F(DBTest, Snapshot) {
  anon::OptionsOverride options_override;
  options_override.skip_policy = kSkipNoSnapshot;
  do {
    CreateAndReopenWithCF({"pikachu"}, CurrentOptions(options_override));
    Put(0, "foo", "0v1");
    Put(1, "foo", "1v1");

    const Snapshot* s1 = db_->GetSnapshot();
    ASSERT_EQ(1U, GetNumSnapshots());
    uint64_t time_snap1 = GetTimeOldestSnapshots();
    ASSERT_GT(time_snap1, 0U);
    Put(0, "foo", "0v2");
    Put(1, "foo", "1v2");

    env_->addon_time_.fetch_add(1);

    const Snapshot* s2 = db_->GetSnapshot();
    ASSERT_EQ(2U, GetNumSnapshots());
    ASSERT_EQ(time_snap1, GetTimeOldestSnapshots());
    Put(0, "foo", "0v3");
    Put(1, "foo", "1v3");

    const Snapshot* s3 = db_->GetSnapshot();
    ASSERT_EQ(3U, GetNumSnapshots());
    ASSERT_EQ(time_snap1, GetTimeOldestSnapshots());

    Put(0, "foo", "0v4");
    Put(1, "foo", "1v4");
    ASSERT_EQ("0v1", Get(0, "foo", s1));
    ASSERT_EQ("1v1", Get(1, "foo", s1));
    ASSERT_EQ("0v2", Get(0, "foo", s2));
    ASSERT_EQ("1v2", Get(1, "foo", s2));
    ASSERT_EQ("0v3", Get(0, "foo", s3));
    ASSERT_EQ("1v3", Get(1, "foo", s3));
    ASSERT_EQ("0v4", Get(0, "foo"));
    ASSERT_EQ("1v4", Get(1, "foo"));

    db_->ReleaseSnapshot(s3);
    ASSERT_EQ(2U, GetNumSnapshots());
    ASSERT_EQ(time_snap1, GetTimeOldestSnapshots());
    ASSERT_EQ("0v1", Get(0, "foo", s1));
    ASSERT_EQ("1v1", Get(1, "foo", s1));
    ASSERT_EQ("0v2", Get(0, "foo", s2));
    ASSERT_EQ("1v2", Get(1, "foo", s2));
    ASSERT_EQ("0v4", Get(0, "foo"));
    ASSERT_EQ("1v4", Get(1, "foo"));

    db_->ReleaseSnapshot(s1);
    ASSERT_EQ("0v2", Get(0, "foo", s2));
    ASSERT_EQ("1v2", Get(1, "foo", s2));
    ASSERT_EQ("0v4", Get(0, "foo"));
    ASSERT_EQ("1v4", Get(1, "foo"));
    ASSERT_EQ(1U, GetNumSnapshots());
    ASSERT_LT(time_snap1, GetTimeOldestSnapshots());

    db_->ReleaseSnapshot(s2);
    ASSERT_EQ(0U, GetNumSnapshots());
    ASSERT_EQ("0v4", Get(0, "foo"));
    ASSERT_EQ("1v4", Get(1, "foo"));
  } while (ChangeOptions(kSkipHashCuckoo));
}
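
// For reference, a minimal sketch (kept as a comment, not compiled into the
// suite) of how a client reads at a snapshot through the public API; the
// tests above use the harness helper Get(cf, key, snapshot) instead. It
// assumes an already-opened DB* db:
//
//   const Snapshot* snap = db->GetSnapshot();
//   ReadOptions read_options;
//   read_options.snapshot = snap;
//   std::string value;
//   Status s = db->Get(read_options, "foo", &value);  // sees data as of snap
//   db->ReleaseSnapshot(snap);  // every snapshot must be released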
TEST_F(DBTest, HiddenValuesAreRemoved) {
  anon::OptionsOverride options_override;
  options_override.skip_policy = kSkipNoSnapshot;
  do {
    Options options = CurrentOptions(options_override);
    CreateAndReopenWithCF({"pikachu"}, options);
    Random rnd(301);
    FillLevels("a", "z", 1);

    std::string big = RandomString(&rnd, 50000);
    Put(1, "foo", big);
    Put(1, "pastfoo", "v");
    const Snapshot* snapshot = db_->GetSnapshot();
    Put(1, "foo", "tiny");
    Put(1, "pastfoo2", "v2");  // Advance sequence number one more

    ASSERT_OK(Flush(1));
    ASSERT_GT(NumTableFilesAtLevel(0, 1), 0);

    ASSERT_EQ(big, Get(1, "foo", snapshot));
    ASSERT_TRUE(Between(Size("", "pastfoo", 1), 50000, 60000));
    db_->ReleaseSnapshot(snapshot);
    ASSERT_EQ(AllEntriesFor("foo", 1), "[ tiny, " + big + " ]");
    Slice x("x");
    dbfull()->TEST_CompactRange(0, nullptr, &x, handles_[1]);
    ASSERT_EQ(AllEntriesFor("foo", 1), "[ tiny ]");
    ASSERT_EQ(NumTableFilesAtLevel(0, 1), 0);
    ASSERT_GE(NumTableFilesAtLevel(1, 1), 1);
    dbfull()->TEST_CompactRange(1, nullptr, &x, handles_[1]);
    ASSERT_EQ(AllEntriesFor("foo", 1), "[ tiny ]");

    ASSERT_TRUE(Between(Size("", "pastfoo", 1), 0, 1000));
    // ApproximateOffsetOf() is not yet implemented in plain table format,
    // which is used by Size().
    // skip HashCuckooRep as it does not support snapshot
  } while (ChangeOptions(kSkipUniversalCompaction | kSkipFIFOCompaction |
                         kSkipPlainTable | kSkipHashCuckoo));
}
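
// The assertion flow above depends on the rule that a compaction may drop
// an older version of a key only once no live snapshot can still observe
// it; releasing the snapshot is what makes the 50000-byte value eligible
// for removal.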
TEST_F(DBTest, CompactBetweenSnapshots) {
  anon::OptionsOverride options_override;
  options_override.skip_policy = kSkipNoSnapshot;
  do {
    Options options = CurrentOptions(options_override);
    options.disable_auto_compactions = true;
    CreateAndReopenWithCF({"pikachu"}, options);
    Random rnd(301);
    FillLevels("a", "z", 1);

    Put(1, "foo", "first");
    const Snapshot* snapshot1 = db_->GetSnapshot();
    Put(1, "foo", "second");
    Put(1, "foo", "third");
    Put(1, "foo", "fourth");
    const Snapshot* snapshot2 = db_->GetSnapshot();
    Put(1, "foo", "fifth");
    Put(1, "foo", "sixth");

    // All entries (including duplicates) exist
    // before any compaction is triggered.
    ASSERT_OK(Flush(1));
    ASSERT_EQ("sixth", Get(1, "foo"));
    ASSERT_EQ("fourth", Get(1, "foo", snapshot2));
    ASSERT_EQ("first", Get(1, "foo", snapshot1));
    ASSERT_EQ(AllEntriesFor("foo", 1),
              "[ sixth, fifth, fourth, third, second, first ]");

    // After a compaction, "second", "third" and "fifth" should
    // be removed
    FillLevels("a", "z", 1);
    dbfull()->CompactRange(CompactRangeOptions(), handles_[1], nullptr,
                           nullptr);
    ASSERT_EQ("sixth", Get(1, "foo"));
    ASSERT_EQ("fourth", Get(1, "foo", snapshot2));
    ASSERT_EQ("first", Get(1, "foo", snapshot1));
    ASSERT_EQ(AllEntriesFor("foo", 1), "[ sixth, fourth, first ]");

    // After we release snapshot1, only two values are left.
    db_->ReleaseSnapshot(snapshot1);
    FillLevels("a", "z", 1);
    dbfull()->CompactRange(CompactRangeOptions(), handles_[1], nullptr,
                           nullptr);

    // We have only one valid snapshot, snapshot2. Since snapshot1 is
    // not valid anymore, "first" should be removed by a compaction.
    ASSERT_EQ("sixth", Get(1, "foo"));
    ASSERT_EQ("fourth", Get(1, "foo", snapshot2));
    ASSERT_EQ(AllEntriesFor("foo", 1), "[ sixth, fourth ]");

    // After we release snapshot2, only one value should be left.
    db_->ReleaseSnapshot(snapshot2);
    FillLevels("a", "z", 1);
    dbfull()->CompactRange(CompactRangeOptions(), handles_[1], nullptr,
                           nullptr);
    ASSERT_EQ("sixth", Get(1, "foo"));
    ASSERT_EQ(AllEntriesFor("foo", 1), "[ sixth ]");
    // skip HashCuckooRep as it does not support snapshot
  } while (ChangeOptions(kSkipHashCuckoo | kSkipFIFOCompaction));
}
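
// In other words: for each key, compaction keeps the newest version overall
// plus the newest version visible to each live snapshot; every version in
// between is garbage. That is why exactly "sixth", "fourth" and "first"
// survive while both snapshots are held.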
TEST_F(DBTest, DeletionMarkers1) {
  Options options = CurrentOptions();
  options.max_background_flushes = 0;
  CreateAndReopenWithCF({"pikachu"}, options);
  Put(1, "foo", "v1");
  ASSERT_OK(Flush(1));
  const int last = 2;
  MoveFilesToLevel(last, 1);
  // foo => v1 is now in last level
  ASSERT_EQ(NumTableFilesAtLevel(last, 1), 1);

  // Place a table at level last-1 to prevent merging with preceding mutation
  Put(1, "a", "begin");
  Put(1, "z", "end");
  Flush(1);
  MoveFilesToLevel(last - 1, 1);
  ASSERT_EQ(NumTableFilesAtLevel(last, 1), 1);
  ASSERT_EQ(NumTableFilesAtLevel(last - 1, 1), 1);

  Delete(1, "foo");
  Put(1, "foo", "v2");
  ASSERT_EQ(AllEntriesFor("foo", 1), "[ v2, DEL, v1 ]");
  ASSERT_OK(Flush(1));  // Moves to level last-2
  ASSERT_EQ(AllEntriesFor("foo", 1), "[ v2, v1 ]");
  Slice z("z");
  dbfull()->TEST_CompactRange(last - 2, nullptr, &z, handles_[1]);
  // DEL eliminated, but v1 remains because we aren't compacting that level
  // (DEL can be eliminated because v2 hides v1).
  ASSERT_EQ(AllEntriesFor("foo", 1), "[ v2, v1 ]");
  dbfull()->TEST_CompactRange(last - 1, nullptr, nullptr, handles_[1]);
  // Merging last-1 with last makes us the base level for "foo", so the
  // DEL is removed (as is v1).
  ASSERT_EQ(AllEntriesFor("foo", 1), "[ v2 ]");
}
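
// Rule encoded by the two DeletionMarkers tests: a deletion marker may only
// be dropped when the compaction output is the bottommost level that can
// contain the key (otherwise an older version in a deeper level would
// resurface), or when a newer Put within the same compaction already
// shadows it.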
TEST_F(DBTest, DeletionMarkers2) {
  Options options = CurrentOptions();
  CreateAndReopenWithCF({"pikachu"}, options);
  Put(1, "foo", "v1");
  ASSERT_OK(Flush(1));
  const int last = 2;
  MoveFilesToLevel(last, 1);
  // foo => v1 is now in last level
  ASSERT_EQ(NumTableFilesAtLevel(last, 1), 1);

  // Place a table at level last-1 to prevent merging with preceding mutation
  Put(1, "a", "begin");
  Put(1, "z", "end");
  Flush(1);
  MoveFilesToLevel(last - 1, 1);
  ASSERT_EQ(NumTableFilesAtLevel(last, 1), 1);
  ASSERT_EQ(NumTableFilesAtLevel(last - 1, 1), 1);

  Delete(1, "foo");
  ASSERT_EQ(AllEntriesFor("foo", 1), "[ DEL, v1 ]");
  ASSERT_OK(Flush(1));  // Moves to level last-2
  ASSERT_EQ(AllEntriesFor("foo", 1), "[ DEL, v1 ]");
  dbfull()->TEST_CompactRange(last - 2, nullptr, nullptr, handles_[1]);
  // DEL kept: the "last" file overlaps
  ASSERT_EQ(AllEntriesFor("foo", 1), "[ DEL, v1 ]");
  dbfull()->TEST_CompactRange(last - 1, nullptr, nullptr, handles_[1]);
  // Merging last-1 with last makes us the base level for "foo", so the
  // DEL is removed (as is v1).
  ASSERT_EQ(AllEntriesFor("foo", 1), "[ ]");
}
TEST_F(DBTest, OverlapInLevel0) {
  do {
    Options options = CurrentOptions();
    CreateAndReopenWithCF({"pikachu"}, options);

    // Fill levels 1 and 2 to disable the pushing of new memtables to
    // levels > 0.
    ASSERT_OK(Put(1, "100", "v100"));
    ASSERT_OK(Put(1, "999", "v999"));
    Flush(1);
    MoveFilesToLevel(2, 1);
    ASSERT_OK(Delete(1, "100"));
    ASSERT_OK(Delete(1, "999"));
    Flush(1);
    MoveFilesToLevel(1, 1);
    ASSERT_EQ("0,1,1", FilesPerLevel(1));

    // Make files spanning the following ranges in level-0:
    //  files[0]  200 .. 900
    //  files[1]  300 .. 500
    // Note that files are sorted by smallest key.
    ASSERT_OK(Put(1, "300", "v300"));
    ASSERT_OK(Put(1, "500", "v500"));
    Flush(1);
    ASSERT_OK(Put(1, "200", "v200"));
    ASSERT_OK(Put(1, "600", "v600"));
    ASSERT_OK(Put(1, "900", "v900"));
    Flush(1);
    ASSERT_EQ("2,1,1", FilesPerLevel(1));

    // Compact away the placeholder files we created initially
    dbfull()->TEST_CompactRange(1, nullptr, nullptr, handles_[1]);
    dbfull()->TEST_CompactRange(2, nullptr, nullptr, handles_[1]);
    ASSERT_EQ("2", FilesPerLevel(1));

    // Do a memtable compaction. Before the bug fix, the compaction would
    // not detect the overlap with level-0 files and would incorrectly place
    // the deletion in a deeper level.
    ASSERT_OK(Delete(1, "600"));
    Flush(1);
    ASSERT_EQ("3", FilesPerLevel(1));
    ASSERT_EQ("NOT_FOUND", Get(1, "600"));
  } while (ChangeOptions(kSkipUniversalCompaction | kSkipFIFOCompaction));
}
TEST_F(DBTest, ComparatorCheck) {
  class NewComparator : public Comparator {
   public:
    virtual const char* Name() const override {
      return "rocksdb.NewComparator";
    }
    virtual int Compare(const Slice& a, const Slice& b) const override {
      return BytewiseComparator()->Compare(a, b);
    }
    virtual void FindShortestSeparator(std::string* s,
                                       const Slice& l) const override {
      BytewiseComparator()->FindShortestSeparator(s, l);
    }
    virtual void FindShortSuccessor(std::string* key) const override {
      BytewiseComparator()->FindShortSuccessor(key);
    }
  };
  Options new_options, options;
  NewComparator cmp;
  do {
    options = CurrentOptions();
    CreateAndReopenWithCF({"pikachu"}, options);
    new_options = CurrentOptions();
    new_options.comparator = &cmp;
    // only the non-default column family has a non-matching comparator
    Status s = TryReopenWithColumnFamilies({"default", "pikachu"},
        std::vector<Options>({options, new_options}));
    ASSERT_TRUE(!s.ok());
    ASSERT_TRUE(s.ToString().find("comparator") != std::string::npos)
        << s.ToString();
  } while (ChangeCompactOptions());
}
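
// Background for the check above: the comparator's Name() is persisted with
// the database and verified on every subsequent open, so opening a column
// family with a differently-named comparator fails with a status message
// that mentions "comparator".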
TEST_F(DBTest, CustomComparator) {
  class NumberComparator : public Comparator {
   public:
    virtual const char* Name() const override {
      return "test.NumberComparator";
    }
    virtual int Compare(const Slice& a, const Slice& b) const override {
      return ToNumber(a) - ToNumber(b);
    }
    virtual void FindShortestSeparator(std::string* s,
                                       const Slice& l) const override {
      ToNumber(*s);  // Check format
      ToNumber(l);   // Check format
    }
    virtual void FindShortSuccessor(std::string* key) const override {
      ToNumber(*key);  // Check format
    }

   private:
    static int ToNumber(const Slice& x) {
      // Check that there are no extra characters.
      EXPECT_TRUE(x.size() >= 2 && x[0] == '[' && x[x.size() - 1] == ']')
          << EscapeString(x);
      int val;
      char ignored;
      EXPECT_TRUE(sscanf(x.ToString().c_str(), "[%i]%c", &val, &ignored) == 1)
          << EscapeString(x);
      return val;
    }
  };
  Options new_options;
  NumberComparator cmp;
  do {
    new_options = CurrentOptions();
    new_options.create_if_missing = true;
    new_options.comparator = &cmp;
    new_options.write_buffer_size = 1000;  // Compact more often
    new_options = CurrentOptions(new_options);
    DestroyAndReopen(new_options);
    CreateAndReopenWithCF({"pikachu"}, new_options);
    ASSERT_OK(Put(1, "[10]", "ten"));
    ASSERT_OK(Put(1, "[0x14]", "twenty"));
    for (int i = 0; i < 2; i++) {
      ASSERT_EQ("ten", Get(1, "[10]"));
      ASSERT_EQ("ten", Get(1, "[0xa]"));
      ASSERT_EQ("twenty", Get(1, "[20]"));
      ASSERT_EQ("twenty", Get(1, "[0x14]"));
      ASSERT_EQ("NOT_FOUND", Get(1, "[15]"));
      ASSERT_EQ("NOT_FOUND", Get(1, "[0xf]"));
      Compact(1, "[0]", "[9999]");
    }

    for (int run = 0; run < 2; run++) {
      for (int i = 0; i < 1000; i++) {
        char buf[100];
        snprintf(buf, sizeof(buf), "[%d]", i * 10);
        ASSERT_OK(Put(1, buf, buf));
      }
      Compact(1, "[0]", "[1000000]");
    }
  } while (ChangeCompactOptions());
}
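
// A custom comparator must define a total order over every key the DB will
// ever store and must behave identically across reopens, since both the
// memtable and the SST files are sorted by it. FindShortestSeparator() and
// FindShortSuccessor() may be left as no-ops, as above; they only shorten
// index keys and do not affect correctness.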
TEST_F(DBTest, DBOpen_Options) {
  Options options = CurrentOptions();
  std::string dbname = test::TmpDir(env_) + "/db_options_test";
  ASSERT_OK(DestroyDB(dbname, options));

  // Does not exist, and create_if_missing == false: error
  DB* db = nullptr;
  options.create_if_missing = false;
  Status s = DB::Open(options, dbname, &db);
  ASSERT_TRUE(strstr(s.ToString().c_str(), "does not exist") != nullptr);
  ASSERT_TRUE(db == nullptr);

  // Does not exist, and create_if_missing == true: OK
  options.create_if_missing = true;
  s = DB::Open(options, dbname, &db);
  ASSERT_OK(s);
  ASSERT_TRUE(db != nullptr);

  delete db;
  db = nullptr;

  // Does exist, and error_if_exists == true: error
  options.create_if_missing = false;
  options.error_if_exists = true;
  s = DB::Open(options, dbname, &db);
  ASSERT_TRUE(strstr(s.ToString().c_str(), "exists") != nullptr);
  ASSERT_TRUE(db == nullptr);

  // Does exist, and error_if_exists == false: OK
  options.create_if_missing = true;
  options.error_if_exists = false;
  s = DB::Open(options, dbname, &db);
  ASSERT_OK(s);
  ASSERT_TRUE(db != nullptr);

  delete db;
  db = nullptr;
}
TEST_F(DBTest, DBOpen_Change_NumLevels) {
  Options options = CurrentOptions();
  options.create_if_missing = true;
  DestroyAndReopen(options);
  ASSERT_TRUE(db_ != nullptr);
  CreateAndReopenWithCF({"pikachu"}, options);

  ASSERT_OK(Put(1, "a", "123"));
  ASSERT_OK(Put(1, "b", "234"));
  Flush(1);
  MoveFilesToLevel(3, 1);
  Close();

  options.create_if_missing = false;
  options.num_levels = 2;
  Status s = TryReopenWithColumnFamilies({"default", "pikachu"}, options);
  ASSERT_TRUE(strstr(s.ToString().c_str(), "Invalid argument") != nullptr);
  ASSERT_TRUE(db_ == nullptr);
}
TEST_F(DBTest, DestroyDBMetaDatabase) {
  std::string dbname = test::TmpDir(env_) + "/db_meta";
  ASSERT_OK(env_->CreateDirIfMissing(dbname));
  std::string metadbname = MetaDatabaseName(dbname, 0);
  ASSERT_OK(env_->CreateDirIfMissing(metadbname));
  std::string metametadbname = MetaDatabaseName(metadbname, 0);
  ASSERT_OK(env_->CreateDirIfMissing(metametadbname));

  // Destroy previous versions if they exist. Using the long way.
  Options options = CurrentOptions();
  ASSERT_OK(DestroyDB(metametadbname, options));
  ASSERT_OK(DestroyDB(metadbname, options));
  ASSERT_OK(DestroyDB(dbname, options));

  // Setup databases
  DB* db = nullptr;
  ASSERT_OK(DB::Open(options, dbname, &db));
  delete db;
  db = nullptr;
  ASSERT_OK(DB::Open(options, metadbname, &db));
  delete db;
  db = nullptr;
  ASSERT_OK(DB::Open(options, metametadbname, &db));
  delete db;
  db = nullptr;

  // Delete databases
  ASSERT_OK(DestroyDB(dbname, options));

  // Check if deletion worked.
  options.create_if_missing = false;
  ASSERT_TRUE(!(DB::Open(options, dbname, &db)).ok());
  ASSERT_TRUE(!(DB::Open(options, metadbname, &db)).ok());
  ASSERT_TRUE(!(DB::Open(options, metametadbname, &db)).ok());
}

// Check that number of files does not grow when writes are dropped
TEST_F(DBTest, DropWrites) {
  do {
    Options options = CurrentOptions();
    options.env = env_;
    options.paranoid_checks = false;
    Reopen(options);

    ASSERT_OK(Put("foo", "v1"));
    ASSERT_EQ("v1", Get("foo"));
    Compact("a", "z");
    const size_t num_files = CountFiles();
    // Force out-of-space errors
    env_->drop_writes_.store(true, std::memory_order_release);
    env_->sleep_counter_.Reset();
    env_->no_sleep_ = true;
    for (int i = 0; i < 5; i++) {
      if (option_config_ != kUniversalCompactionMultiLevel) {
        for (int level = 0; level < dbfull()->NumberLevels(); level++) {
          if (level > 0 && level == dbfull()->NumberLevels() - 1) {
            break;
          }
          dbfull()->TEST_CompactRange(level, nullptr, nullptr, nullptr,
                                      true /* disallow trivial move */);
        }
      } else {
        dbfull()->CompactRange(CompactRangeOptions(), nullptr, nullptr);
      }
    }

    std::string property_value;
    ASSERT_TRUE(db_->GetProperty("rocksdb.background-errors", &property_value));
    ASSERT_EQ("5", property_value);

    env_->drop_writes_.store(false, std::memory_order_release);
    ASSERT_LT(CountFiles(), num_files + 3);

    // Check that compaction attempts slept after errors
    // TODO @krad: Figure out why ASSERT_EQ 5 keeps failing in certain
    // compiler versions
    ASSERT_GE(env_->sleep_counter_.Read(), 4);
  } while (ChangeCompactOptions());
}
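
// "rocksdb.background-errors", queried above, is a DB property exposed via
// GetProperty(); it counts errors accumulated by background flushes and
// compactions. A minimal sketch of the call pattern, assuming an open
// DB* db:
//
//   std::string value;
//   if (db->GetProperty("rocksdb.background-errors", &value)) {
//     fprintf(stderr, "background errors so far: %s\n", value.c_str());
//   }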

// Check background error counter bumped on flush failures.
TEST_F(DBTest, DropWritesFlush) {
  do {
    Options options = CurrentOptions();
    options.env = env_;
    options.max_background_flushes = 1;
    Reopen(options);

    ASSERT_OK(Put("foo", "v1"));
    // Force out-of-space errors
    env_->drop_writes_.store(true, std::memory_order_release);

    std::string property_value;
    // Background error count is 0 now.
    ASSERT_TRUE(db_->GetProperty("rocksdb.background-errors", &property_value));
    ASSERT_EQ("0", property_value);

    dbfull()->TEST_FlushMemTable(true);

    ASSERT_TRUE(db_->GetProperty("rocksdb.background-errors", &property_value));
    ASSERT_EQ("1", property_value);

    env_->drop_writes_.store(false, std::memory_order_release);
  } while (ChangeCompactOptions());
}

// Check that CompactRange() returns failure if there is not enough space left
// on device
TEST_F(DBTest, NoSpaceCompactRange) {
  do {
    Options options = CurrentOptions();
    options.env = env_;
    options.disable_auto_compactions = true;
    Reopen(options);

    // generate 5 tables
    for (int i = 0; i < 5; ++i) {
      ASSERT_OK(Put(Key(i), Key(i) + "v"));
      ASSERT_OK(Flush());
    }

    // Force out-of-space errors
    env_->no_space_.store(true, std::memory_order_release);

    Status s = dbfull()->TEST_CompactRange(0, nullptr, nullptr, nullptr,
                                           true /* disallow trivial move */);
    ASSERT_TRUE(s.IsIOError());

    env_->no_space_.store(false, std::memory_order_release);
  } while (ChangeCompactOptions());
}
TEST_F(DBTest, NonWritableFileSystem) {
  do {
    Options options = CurrentOptions();
    options.write_buffer_size = 1000;
    options.env = env_;
    Reopen(options);
    ASSERT_OK(Put("foo", "v1"));
    env_->non_writeable_rate_.store(100);
    std::string big(100000, 'x');
    int errors = 0;
    for (int i = 0; i < 20; i++) {
      if (!Put("foo", big).ok()) {
        errors++;
        env_->SleepForMicroseconds(100000);
      }
    }
    ASSERT_GT(errors, 0);
    env_->non_writeable_rate_.store(0);
  } while (ChangeCompactOptions());
}
TEST_F(DBTest, ManifestWriteError) {
  // Test for the following problem:
  // (a) Compaction produces file F
  // (b) Log record containing F is written to MANIFEST file, but Sync() fails
  // (c) GC deletes F
  // (d) After reopening DB, reads fail since deleted F is named in log record

  // We iterate twice. In the second iteration, everything is the
  // same except the log record never makes it to the MANIFEST file.
  for (int iter = 0; iter < 2; iter++) {
    std::atomic<bool>* error_type = (iter == 0)
                                        ? &env_->manifest_sync_error_
                                        : &env_->manifest_write_error_;

    // Insert foo=>bar mapping
    Options options = CurrentOptions();
    options.env = env_;
    options.create_if_missing = true;
    options.error_if_exists = false;
    DestroyAndReopen(options);
    ASSERT_OK(Put("foo", "bar"));
    ASSERT_EQ("bar", Get("foo"));

    // Memtable compaction (will succeed)
    Flush();
    ASSERT_EQ("bar", Get("foo"));
    const int last = 2;
    MoveFilesToLevel(2);
    ASSERT_EQ(NumTableFilesAtLevel(last), 1);  // foo=>bar is now in last level

    // Merging compaction (will fail)
    error_type->store(true, std::memory_order_release);
    dbfull()->TEST_CompactRange(last, nullptr, nullptr);  // Should fail
    ASSERT_EQ("bar", Get("foo"));

    // Recovery: should not lose data
    error_type->store(false, std::memory_order_release);
    Reopen(options);
    ASSERT_EQ("bar", Get("foo"));
  }
}
TEST_F(DBTest, PutFailsParanoid) {
  // Test the following:
  // (a) A random put fails in paranoid mode (simulated by a sync failure)
  // (b) All other puts have to fail, even if writes would succeed
  // (c) All of that should happen ONLY if paranoid_checks = true

  Options options = CurrentOptions();
  options.env = env_;
  options.create_if_missing = true;
  options.error_if_exists = false;
  options.paranoid_checks = true;
  DestroyAndReopen(options);
  CreateAndReopenWithCF({"pikachu"}, options);
  Status s;

  ASSERT_OK(Put(1, "foo", "bar"));
  ASSERT_OK(Put(1, "foo1", "bar1"));
  // simulate error
  env_->log_write_error_.store(true, std::memory_order_release);
  s = Put(1, "foo2", "bar2");
  ASSERT_TRUE(!s.ok());
  env_->log_write_error_.store(false, std::memory_order_release);
  s = Put(1, "foo3", "bar3");
  // the next put should fail, too
  ASSERT_TRUE(!s.ok());
  // but we're still able to read
  ASSERT_EQ("bar", Get(1, "foo"));

  // do the same thing with paranoid checks off
  options.paranoid_checks = false;
  DestroyAndReopen(options);
  CreateAndReopenWithCF({"pikachu"}, options);

  ASSERT_OK(Put(1, "foo", "bar"));
  ASSERT_OK(Put(1, "foo1", "bar1"));
  // simulate error
  env_->log_write_error_.store(true, std::memory_order_release);
  s = Put(1, "foo2", "bar2");
  ASSERT_TRUE(!s.ok());
  env_->log_write_error_.store(false, std::memory_order_release);
  s = Put(1, "foo3", "bar3");
  // the next put should NOT fail
  ASSERT_TRUE(s.ok());
}
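
// What this exercises (to the best of our reading): with paranoid_checks
// on, the first WAL write error latches the DB into an error state, so
// every subsequent write returns the saved failure status while reads keep
// working; with paranoid_checks off, a failed write does not poison later
// writes.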
TEST_F(DBTest, BloomFilter) {
  do {
    Options options = CurrentOptions();
    env_->count_random_reads_ = true;
    options.env = env_;
    // ChangeCompactOptions() only changes compaction style, which does not
    // trigger reset of table_factory
    BlockBasedTableOptions table_options;
    table_options.no_block_cache = true;
    table_options.filter_policy.reset(NewBloomFilterPolicy(10));
    options.table_factory.reset(NewBlockBasedTableFactory(table_options));

    CreateAndReopenWithCF({"pikachu"}, options);

    // Populate multiple layers
    const int N = 10000;
    for (int i = 0; i < N; i++) {
      ASSERT_OK(Put(1, Key(i), Key(i)));
    }
    Compact(1, "a", "z");
    for (int i = 0; i < N; i += 100) {
      ASSERT_OK(Put(1, Key(i), Key(i)));
    }
    Flush(1);

    // Prevent auto compactions triggered by seeks
    env_->delay_sstable_sync_.store(true, std::memory_order_release);

    // Lookup present keys. Should rarely read from small sstable.
    env_->random_read_counter_.Reset();
    for (int i = 0; i < N; i++) {
      ASSERT_EQ(Key(i), Get(1, Key(i)));
    }
    int reads = env_->random_read_counter_.Read();
    fprintf(stderr, "%d present => %d reads\n", N, reads);
    ASSERT_GE(reads, N);
    ASSERT_LE(reads, N + 2 * N / 100);

    // Lookup missing keys. Should rarely read from either sstable.
    env_->random_read_counter_.Reset();
    for (int i = 0; i < N; i++) {
      ASSERT_EQ("NOT_FOUND", Get(1, Key(i) + ".missing"));
    }
    reads = env_->random_read_counter_.Read();
    fprintf(stderr, "%d missing => %d reads\n", N, reads);
    ASSERT_LE(reads, 3 * N / 100);

    env_->delay_sstable_sync_.store(false, std::memory_order_release);
    Close();
  } while (ChangeCompactOptions());
}
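
// The bounds above follow from the filter's expected false-positive rate:
// NewBloomFilterPolicy(10) uses roughly 10 bits per key, which gives about
// a 1% false-positive rate, so the 10000 missing lookups should touch an
// sstable only a few hundred times (the 3% bound leaves slack).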
TEST_F(DBTest, BloomFilterRate) {
  while (ChangeFilterOptions()) {
    Options options = CurrentOptions();
    options.statistics = rocksdb::CreateDBStatistics();
    CreateAndReopenWithCF({"pikachu"}, options);

    const int maxKey = 10000;
    for (int i = 0; i < maxKey; i++) {
      ASSERT_OK(Put(1, Key(i), Key(i)));
    }
    // Add a large key to make the file contain a wide key range
    ASSERT_OK(Put(1, Key(maxKey + 55555), Key(maxKey + 55555)));
    Flush(1);

    // Check if they can be found
    for (int i = 0; i < maxKey; i++) {
      ASSERT_EQ(Key(i), Get(1, Key(i)));
    }
    ASSERT_EQ(TestGetTickerCount(options, BLOOM_FILTER_USEFUL), 0);

    // Check if filter is useful
    for (int i = 0; i < maxKey; i++) {
      ASSERT_EQ("NOT_FOUND", Get(1, Key(i + 33333)));
    }
    ASSERT_GE(TestGetTickerCount(options, BLOOM_FILTER_USEFUL), maxKey * 0.98);
  }
}
TEST_F(DBTest, BloomFilterCompatibility) {
  Options options = CurrentOptions();
  options.statistics = rocksdb::CreateDBStatistics();
  BlockBasedTableOptions table_options;
  table_options.filter_policy.reset(NewBloomFilterPolicy(10, true));
  options.table_factory.reset(NewBlockBasedTableFactory(table_options));

  // Create with block based filter
  CreateAndReopenWithCF({"pikachu"}, options);

  const int maxKey = 10000;
  for (int i = 0; i < maxKey; i++) {
    ASSERT_OK(Put(1, Key(i), Key(i)));
  }
  ASSERT_OK(Put(1, Key(maxKey + 55555), Key(maxKey + 55555)));
  Flush(1);

  // Check db with full filter
  table_options.filter_policy.reset(NewBloomFilterPolicy(10, false));
  options.table_factory.reset(NewBlockBasedTableFactory(table_options));
  ReopenWithColumnFamilies({"default", "pikachu"}, options);

  // Check if they can be found
  for (int i = 0; i < maxKey; i++) {
    ASSERT_EQ(Key(i), Get(1, Key(i)));
  }
  ASSERT_EQ(TestGetTickerCount(options, BLOOM_FILTER_USEFUL), 0);
}
TEST_F(DBTest, BloomFilterReverseCompatibility) {
  Options options = CurrentOptions();
  options.statistics = rocksdb::CreateDBStatistics();
  BlockBasedTableOptions table_options;
  table_options.filter_policy.reset(NewBloomFilterPolicy(10, false));
  options.table_factory.reset(NewBlockBasedTableFactory(table_options));

  // Create with full filter
  CreateAndReopenWithCF({"pikachu"}, options);

  const int maxKey = 10000;
  for (int i = 0; i < maxKey; i++) {
    ASSERT_OK(Put(1, Key(i), Key(i)));
  }
  ASSERT_OK(Put(1, Key(maxKey + 55555), Key(maxKey + 55555)));
  Flush(1);

  // Check db with block_based filter
  table_options.filter_policy.reset(NewBloomFilterPolicy(10, true));
  options.table_factory.reset(NewBlockBasedTableFactory(table_options));
  ReopenWithColumnFamilies({"default", "pikachu"}, options);

  // Check if they can be found
  for (int i = 0; i < maxKey; i++) {
    ASSERT_EQ(Key(i), Get(1, Key(i)));
  }
  ASSERT_EQ(TestGetTickerCount(options, BLOOM_FILTER_USEFUL), 0);
}
namespace {
// A wrapped bloom over default FilterPolicy
class WrappedBloom : public FilterPolicy {
 public:
  explicit WrappedBloom(int bits_per_key)
      : filter_(NewBloomFilterPolicy(bits_per_key)), counter_(0) {}

  ~WrappedBloom() { delete filter_; }

  const char* Name() const override { return "WrappedRocksDbFilterPolicy"; }

  void CreateFilter(const rocksdb::Slice* keys, int n, std::string* dst)
      const override {
    std::unique_ptr<rocksdb::Slice[]> user_keys(new rocksdb::Slice[n]);
    for (int i = 0; i < n; ++i) {
      user_keys[i] = convertKey(keys[i]);
    }
    return filter_->CreateFilter(user_keys.get(), n, dst);
  }

  bool KeyMayMatch(const rocksdb::Slice& key, const rocksdb::Slice& filter)
      const override {
    counter_++;
    return filter_->KeyMayMatch(convertKey(key), filter);
  }

  uint32_t GetCounter() { return counter_; }

 private:
  const FilterPolicy* filter_;
  mutable uint32_t counter_;

  rocksdb::Slice convertKey(const rocksdb::Slice& key) const {
    return key;
  }
};
} // namespace
|
|
|
|
|
2015-03-17 21:08:00 +00:00
|
|
|
TEST_F(DBTest, BloomFilterWrapper) {
|
2014-10-31 22:08:10 +00:00
|
|
|
Options options = CurrentOptions();
|
2014-09-11 23:33:46 +00:00
|
|
|
options.statistics = rocksdb::CreateDBStatistics();
|
|
|
|
|
|
|
|
BlockBasedTableOptions table_options;
|
|
|
|
WrappedBloom* policy = new WrappedBloom(10);
|
|
|
|
table_options.filter_policy.reset(policy);
|
|
|
|
options.table_factory.reset(NewBlockBasedTableFactory(table_options));
|
|
|
|
|
2014-10-29 19:00:01 +00:00
|
|
|
CreateAndReopenWithCF({"pikachu"}, options);
|
2014-09-11 23:33:46 +00:00
|
|
|
|
|
|
|
const int maxKey = 10000;
|
|
|
|
for (int i = 0; i < maxKey; i++) {
|
|
|
|
ASSERT_OK(Put(1, Key(i), Key(i)));
|
|
|
|
}
|
|
|
|
// Add a large key to make the file cover a wide key range
|
|
|
|
ASSERT_OK(Put(1, Key(maxKey + 55555), Key(maxKey + 55555)));
|
2014-09-12 21:05:07 +00:00
|
|
|
ASSERT_EQ(0U, policy->GetCounter());
|
2014-09-11 23:33:46 +00:00
|
|
|
Flush(1);
|
|
|
|
|
|
|
|
// Check if they can be found
|
|
|
|
for (int i = 0; i < maxKey; i++) {
|
|
|
|
ASSERT_EQ(Key(i), Get(1, Key(i)));
|
|
|
|
}
|
|
|
|
ASSERT_EQ(TestGetTickerCount(options, BLOOM_FILTER_USEFUL), 0);
|
2014-09-12 21:05:07 +00:00
|
|
|
ASSERT_EQ(1U * maxKey, policy->GetCounter());
|
2014-09-11 23:33:46 +00:00
|
|
|
|
|
|
|
// Check if filter is useful
|
|
|
|
for (int i = 0; i < maxKey; i++) {
|
|
|
|
ASSERT_EQ("NOT_FOUND", Get(1, Key(i+33333)));
|
|
|
|
}
|
|
|
|
ASSERT_GE(TestGetTickerCount(options, BLOOM_FILTER_USEFUL), maxKey*0.98);
|
2014-09-12 21:05:07 +00:00
|
|
|
ASSERT_EQ(2U * maxKey, policy->GetCounter());
|
2014-09-11 23:33:46 +00:00
|
|
|
}
|
|
|
|
|
2015-03-17 21:08:00 +00:00
|
|
|
TEST_F(DBTest, SnapshotFiles) {
|
2013-08-07 22:20:41 +00:00
|
|
|
do {
|
|
|
|
Options options = CurrentOptions();
|
|
|
|
options.write_buffer_size = 100000000; // Large write buffer
|
2014-10-29 19:00:01 +00:00
|
|
|
CreateAndReopenWithCF({"pikachu"}, options);
|
2012-09-15 00:11:35 +00:00
|
|
|
|
2013-08-07 22:20:41 +00:00
|
|
|
Random rnd(301);
|
2012-09-24 21:01:01 +00:00
|
|
|
|
2013-08-07 22:20:41 +00:00
|
|
|
// Write 8MB (80 values, each 100K)
|
2014-02-07 22:47:16 +00:00
|
|
|
ASSERT_EQ(NumTableFilesAtLevel(0, 1), 0);
|
2013-08-07 22:20:41 +00:00
|
|
|
std::vector<std::string> values;
|
|
|
|
for (int i = 0; i < 80; i++) {
|
|
|
|
values.push_back(RandomString(&rnd, 100000));
|
2014-02-07 22:47:16 +00:00
|
|
|
ASSERT_OK(Put((i < 40), Key(i), values[i]));
|
2013-08-07 22:20:41 +00:00
|
|
|
}
|
2012-09-24 21:01:01 +00:00
|
|
|
|
2013-08-07 22:20:41 +00:00
|
|
|
// assert that nothing makes it to disk yet.
|
2014-02-07 22:47:16 +00:00
|
|
|
ASSERT_EQ(NumTableFilesAtLevel(0, 1), 0);
|
2012-09-24 21:01:01 +00:00
|
|
|
|
2013-08-07 22:20:41 +00:00
|
|
|
// get a file snapshot
|
|
|
|
uint64_t manifest_number = 0;
|
|
|
|
uint64_t manifest_size = 0;
|
|
|
|
std::vector<std::string> files;
|
|
|
|
dbfull()->DisableFileDeletions();
|
|
|
|
dbfull()->GetLiveFiles(files, &manifest_size);
|
|
|
|
|
2014-02-07 22:47:16 +00:00
|
|
|
// CURRENT, MANIFEST, *.sst files (one for each CF)
|
|
|
|
ASSERT_EQ(files.size(), 4U);
|
2013-08-07 22:20:41 +00:00
|
|
|
|
|
|
|
uint64_t number = 0;
|
|
|
|
FileType type;
|
|
|
|
|
|
|
|
// copy these files to a new snapshot directory
|
|
|
|
std::string snapdir = dbname_ + ".snapdir/";
|
2014-10-31 22:08:10 +00:00
|
|
|
ASSERT_OK(env_->CreateDirIfMissing(snapdir));
|
2013-08-07 22:20:41 +00:00
|
|
|
|
|
|
|
for (unsigned int i = 0; i < files.size(); i++) {
|
2013-11-08 23:23:46 +00:00
|
|
|
// our clients require that GetLiveFiles returns
|
|
|
|
// files with "/" as first character!
|
|
|
|
ASSERT_EQ(files[i][0], '/');
|
|
|
|
std::string src = dbname_ + files[i];
|
|
|
|
std::string dest = snapdir + files[i];
|
2013-08-07 22:20:41 +00:00
|
|
|
|
|
|
|
uint64_t size;
|
|
|
|
ASSERT_OK(env_->GetFileSize(src, &size));
|
|
|
|
|
|
|
|
// record the number and the size of the
|
|
|
|
// latest manifest file
|
|
|
|
if (ParseFileName(files[i].substr(1), &number, &type)) {
|
|
|
|
if (type == kDescriptorFile) {
|
|
|
|
if (number > manifest_number) {
|
|
|
|
manifest_number = number;
|
|
|
|
ASSERT_GE(size, manifest_size);
|
|
|
|
size = manifest_size; // copy only valid MANIFEST data
|
|
|
|
}
|
2012-09-24 21:01:01 +00:00
|
|
|
}
|
|
|
|
}
|
2014-01-22 18:45:26 +00:00
|
|
|
CopyFile(src, dest, size);
|
2012-09-24 21:01:01 +00:00
|
|
|
}
|
|
|
|
|
2013-08-07 22:20:41 +00:00
|
|
|
// release file snapshot
|
|
|
|
dbfull()->EnableFileDeletions();
|
|
|
|
// overwrite one key, this key should not appear in the snapshot
|
|
|
|
std::vector<std::string> extras;
|
|
|
|
for (unsigned int i = 0; i < 1; i++) {
|
|
|
|
extras.push_back(RandomString(&rnd, 100000));
|
2014-02-07 22:47:16 +00:00
|
|
|
ASSERT_OK(Put(0, Key(i), extras[i]));
|
2013-08-07 22:20:41 +00:00
|
|
|
}
|
2012-11-01 17:50:08 +00:00
|
|
|
|
2013-08-07 22:20:41 +00:00
|
|
|
// verify that data in the snapshot are correct
|
2014-02-07 22:47:16 +00:00
|
|
|
std::vector<ColumnFamilyDescriptor> column_families;
|
|
|
|
column_families.emplace_back("default", ColumnFamilyOptions());
|
|
|
|
column_families.emplace_back("pikachu", ColumnFamilyOptions());
|
|
|
|
std::vector<ColumnFamilyHandle*> cf_handles;
|
2013-08-07 22:20:41 +00:00
|
|
|
DB* snapdb;
|
2014-02-07 22:47:16 +00:00
|
|
|
DBOptions opts;
|
2014-10-31 22:08:10 +00:00
|
|
|
opts.env = env_;
|
2013-08-07 22:20:41 +00:00
|
|
|
opts.create_if_missing = false;
|
2014-02-07 22:47:16 +00:00
|
|
|
Status stat =
|
|
|
|
DB::Open(opts, snapdir, column_families, &cf_handles, &snapdb);
|
2013-10-04 17:21:03 +00:00
|
|
|
ASSERT_OK(stat);
|
2013-08-07 22:20:41 +00:00
|
|
|
|
|
|
|
ReadOptions roptions;
|
|
|
|
std::string val;
|
|
|
|
for (unsigned int i = 0; i < 80; i++) {
|
2014-02-07 22:47:16 +00:00
|
|
|
stat = snapdb->Get(roptions, cf_handles[i < 40], Key(i), &val);
|
2013-08-07 22:20:41 +00:00
|
|
|
ASSERT_EQ(values[i].compare(val), 0);
|
|
|
|
}
|
2014-02-07 22:47:16 +00:00
|
|
|
for (auto cfh : cf_handles) {
|
|
|
|
delete cfh;
|
|
|
|
}
|
2013-08-07 22:20:41 +00:00
|
|
|
delete snapdb;
|
|
|
|
|
|
|
|
// look at the new live files after we added an 'extra' key
|
|
|
|
// and after we took the first snapshot.
|
|
|
|
uint64_t new_manifest_number = 0;
|
|
|
|
uint64_t new_manifest_size = 0;
|
|
|
|
std::vector<std::string> newfiles;
|
|
|
|
dbfull()->DisableFileDeletions();
|
|
|
|
dbfull()->GetLiveFiles(newfiles, &new_manifest_size);
|
|
|
|
|
|
|
|
// find the new manifest file. assert that this manifest file is
|
|
|
|
// the same one as in the previous snapshot. But its size should be
|
|
|
|
// larger because we added an extra key after taking the
|
|
|
|
// previous snapshot.
|
|
|
|
for (unsigned int i = 0; i < newfiles.size(); i++) {
|
|
|
|
std::string src = dbname_ + "/" + newfiles[i];
|
|
|
|
// record the log number and the size of the
|
|
|
|
// latest manifest file
|
|
|
|
if (ParseFileName(newfiles[i].substr(1), &number, &type)) {
|
|
|
|
if (type == kDescriptorFile) {
|
|
|
|
if (number > new_manifest_number) {
|
|
|
|
uint64_t size;
|
|
|
|
new_manifest_number = number;
|
|
|
|
ASSERT_OK(env_->GetFileSize(src, &size));
|
|
|
|
ASSERT_GE(size, new_manifest_size);
|
|
|
|
}
|
2012-09-24 21:01:01 +00:00
|
|
|
}
|
|
|
|
}
|
|
|
|
}
|
2013-08-07 22:20:41 +00:00
|
|
|
ASSERT_EQ(manifest_number, new_manifest_number);
|
|
|
|
ASSERT_GT(new_manifest_size, manifest_size);
|
2012-11-01 17:50:08 +00:00
|
|
|
|
2013-08-07 22:20:41 +00:00
|
|
|
// release file snapshot
|
|
|
|
dbfull()->EnableFileDeletions();
|
|
|
|
} while (ChangeCompactOptions());
|
2012-09-15 00:11:35 +00:00
|
|
|
}
|
|
|
|
|
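// A hedged sketch (the helper name is illustrative, not part of the test) of
// the backup protocol SnapshotFiles walks through: pin the current live
// files, copy them out, then release the pin. GetLiveFiles() returns names
// relative to the DB directory, each starting with '/', and manifest_size
// bounds how many valid bytes of the current MANIFEST should be copied.
namespace {
inline rocksdb::Status BackupLiveFilesSketch(rocksdb::DB* db) {
  std::vector<std::string> live_files;
  uint64_t manifest_size = 0;
  rocksdb::Status s = db->DisableFileDeletions();  // pin the current files
  if (!s.ok()) {
    return s;
  }
  s = db->GetLiveFiles(live_files, &manifest_size, /*flush_memtable=*/true);
  // ... copy each file here; truncate the MANIFEST copy to manifest_size ...
  db->EnableFileDeletions(/*force=*/false);  // release the pin
  return s;
}
}  // namespace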
2015-03-17 21:08:00 +00:00
|
|
|
TEST_F(DBTest, CompactOnFlush) {
|
2015-02-02 22:49:22 +00:00
|
|
|
anon::OptionsOverride options_override;
|
|
|
|
options_override.skip_policy = kSkipNoSnapshot;
|
2013-08-07 22:20:41 +00:00
|
|
|
do {
|
2015-02-02 22:49:22 +00:00
|
|
|
Options options = CurrentOptions(options_override);
|
2013-08-07 22:20:41 +00:00
|
|
|
options.disable_auto_compactions = true;
|
2014-10-29 19:00:01 +00:00
|
|
|
CreateAndReopenWithCF({"pikachu"}, options);
|
2013-02-28 22:09:30 +00:00
|
|
|
|
2014-02-07 22:47:16 +00:00
|
|
|
Put(1, "foo", "v1");
|
|
|
|
ASSERT_OK(Flush(1));
|
|
|
|
ASSERT_EQ(AllEntriesFor("foo", 1), "[ v1 ]");
|
2013-02-28 22:09:30 +00:00
|
|
|
|
2013-08-07 22:20:41 +00:00
|
|
|
// Write two new keys
|
2014-02-07 22:47:16 +00:00
|
|
|
Put(1, "a", "begin");
|
|
|
|
Put(1, "z", "end");
|
|
|
|
Flush(1);
|
2013-02-28 22:09:30 +00:00
|
|
|
|
2013-08-07 22:20:41 +00:00
|
|
|
// Case1: Delete followed by a put
|
2014-02-07 22:47:16 +00:00
|
|
|
Delete(1, "foo");
|
|
|
|
Put(1, "foo", "v2");
|
|
|
|
ASSERT_EQ(AllEntriesFor("foo", 1), "[ v2, DEL, v1 ]");
|
2013-02-28 22:09:30 +00:00
|
|
|
|
2013-08-07 22:20:41 +00:00
|
|
|
// After the current memtable is flushed, the DEL should
|
|
|
|
// have been removed
|
2014-02-07 22:47:16 +00:00
|
|
|
ASSERT_OK(Flush(1));
|
|
|
|
ASSERT_EQ(AllEntriesFor("foo", 1), "[ v2, v1 ]");
|
2013-02-28 22:09:30 +00:00
|
|
|
|
2015-06-17 21:36:14 +00:00
|
|
|
dbfull()->CompactRange(CompactRangeOptions(), handles_[1], nullptr,
|
|
|
|
nullptr);
|
2014-02-07 22:47:16 +00:00
|
|
|
ASSERT_EQ(AllEntriesFor("foo", 1), "[ v2 ]");
|
2013-02-28 22:09:30 +00:00
|
|
|
|
2013-08-07 22:20:41 +00:00
|
|
|
// Case 2: Delete followed by another delete
|
2014-02-07 22:47:16 +00:00
|
|
|
Delete(1, "foo");
|
|
|
|
Delete(1, "foo");
|
|
|
|
ASSERT_EQ(AllEntriesFor("foo", 1), "[ DEL, DEL, v2 ]");
|
|
|
|
ASSERT_OK(Flush(1));
|
|
|
|
ASSERT_EQ(AllEntriesFor("foo", 1), "[ DEL, v2 ]");
|
2015-06-17 21:36:14 +00:00
|
|
|
dbfull()->CompactRange(CompactRangeOptions(), handles_[1], nullptr,
|
|
|
|
nullptr);
|
2014-02-07 22:47:16 +00:00
|
|
|
ASSERT_EQ(AllEntriesFor("foo", 1), "[ ]");
|
2013-02-28 22:09:30 +00:00
|
|
|
|
2013-08-07 22:20:41 +00:00
|
|
|
// Case 3: Put followed by a delete
|
2014-02-07 22:47:16 +00:00
|
|
|
Put(1, "foo", "v3");
|
|
|
|
Delete(1, "foo");
|
|
|
|
ASSERT_EQ(AllEntriesFor("foo", 1), "[ DEL, v3 ]");
|
|
|
|
ASSERT_OK(Flush(1));
|
|
|
|
ASSERT_EQ(AllEntriesFor("foo", 1), "[ DEL ]");
|
2015-06-17 21:36:14 +00:00
|
|
|
dbfull()->CompactRange(CompactRangeOptions(), handles_[1], nullptr,
|
|
|
|
nullptr);
|
2014-02-07 22:47:16 +00:00
|
|
|
ASSERT_EQ(AllEntriesFor("foo", 1), "[ ]");
|
2013-02-28 22:09:30 +00:00
|
|
|
|
2013-08-07 22:20:41 +00:00
|
|
|
// Case 4: Put followed by another Put
|
2014-02-07 22:47:16 +00:00
|
|
|
Put(1, "foo", "v4");
|
|
|
|
Put(1, "foo", "v5");
|
|
|
|
ASSERT_EQ(AllEntriesFor("foo", 1), "[ v5, v4 ]");
|
|
|
|
ASSERT_OK(Flush(1));
|
|
|
|
ASSERT_EQ(AllEntriesFor("foo", 1), "[ v5 ]");
|
2015-06-17 21:36:14 +00:00
|
|
|
dbfull()->CompactRange(CompactRangeOptions(), handles_[1], nullptr,
|
|
|
|
nullptr);
|
2014-02-07 22:47:16 +00:00
|
|
|
ASSERT_EQ(AllEntriesFor("foo", 1), "[ v5 ]");
|
2013-02-28 22:09:30 +00:00
|
|
|
|
2013-08-07 22:20:41 +00:00
|
|
|
// clear database
|
2014-02-07 22:47:16 +00:00
|
|
|
Delete(1, "foo");
|
2015-06-17 21:36:14 +00:00
|
|
|
dbfull()->CompactRange(CompactRangeOptions(), handles_[1], nullptr,
|
|
|
|
nullptr);
|
2014-02-07 22:47:16 +00:00
|
|
|
ASSERT_EQ(AllEntriesFor("foo", 1), "[ ]");
|
2013-02-28 22:09:30 +00:00
|
|
|
|
2013-08-07 22:20:41 +00:00
|
|
|
// Case 5: Put followed by snapshot followed by another Put
|
|
|
|
// Both puts should remain.
|
2014-02-07 22:47:16 +00:00
|
|
|
Put(1, "foo", "v6");
|
2013-08-07 22:20:41 +00:00
|
|
|
const Snapshot* snapshot = db_->GetSnapshot();
|
2014-02-07 22:47:16 +00:00
|
|
|
Put(1, "foo", "v7");
|
|
|
|
ASSERT_OK(Flush(1));
|
|
|
|
ASSERT_EQ(AllEntriesFor("foo", 1), "[ v7, v6 ]");
|
2013-08-07 22:20:41 +00:00
|
|
|
db_->ReleaseSnapshot(snapshot);
|
2013-02-28 22:09:30 +00:00
|
|
|
|
2013-08-07 22:20:41 +00:00
|
|
|
// clear database
|
2014-02-07 22:47:16 +00:00
|
|
|
Delete(1, "foo");
|
2015-06-17 21:36:14 +00:00
|
|
|
dbfull()->CompactRange(CompactRangeOptions(), handles_[1], nullptr,
|
|
|
|
nullptr);
|
2014-02-07 22:47:16 +00:00
|
|
|
ASSERT_EQ(AllEntriesFor("foo", 1), "[ ]");
|
2013-02-28 22:09:30 +00:00
|
|
|
|
2013-08-07 22:20:41 +00:00
|
|
|
// Case 6: snapshot followed by a Put followed by another Put
|
|
|
|
// Only the last put should remain.
|
|
|
|
const Snapshot* snapshot1 = db_->GetSnapshot();
|
2014-02-07 22:47:16 +00:00
|
|
|
Put(1, "foo", "v8");
|
|
|
|
Put(1, "foo", "v9");
|
|
|
|
ASSERT_OK(Flush(1));
|
|
|
|
ASSERT_EQ(AllEntriesFor("foo", 1), "[ v9 ]");
|
2013-08-07 22:20:41 +00:00
|
|
|
db_->ReleaseSnapshot(snapshot1);
|
|
|
|
} while (ChangeCompactOptions());
|
2013-02-28 22:09:30 +00:00
|
|
|
}
|
|
|
|
|
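// Minimal sketch of the manual compaction CompactOnFlush invokes repeatedly:
// nullptr boundaries compact the column family's entire key range, which is
// what allows obsolete deletions and overwritten values to be dropped
// (unless a live snapshot still needs them). The helper name is illustrative.
namespace {
inline rocksdb::Status FullCompactSketch(rocksdb::DB* db,
                                         rocksdb::ColumnFamilyHandle* cfh) {
  rocksdb::CompactRangeOptions cro;  // defaults suffice for a full sweep
  return db->CompactRange(cro, cfh, nullptr, nullptr);
}
}  // namespace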
2014-04-10 04:17:14 +00:00
|
|
|
namespace {
|
2014-08-12 05:10:32 +00:00
|
|
|
std::vector<std::uint64_t> ListSpecificFiles(
|
|
|
|
Env* env, const std::string& path, const FileType expected_file_type) {
|
2012-11-26 21:56:45 +00:00
|
|
|
std::vector<std::string> files;
|
2014-09-09 18:18:50 +00:00
|
|
|
std::vector<uint64_t> file_numbers;
|
2012-11-26 21:56:45 +00:00
|
|
|
env->GetChildren(path, &files);
|
|
|
|
uint64_t number;
|
|
|
|
FileType type;
|
|
|
|
for (size_t i = 0; i < files.size(); ++i) {
|
|
|
|
if (ParseFileName(files[i], &number, &type)) {
|
2014-08-12 05:10:32 +00:00
|
|
|
if (type == expected_file_type) {
|
2014-09-09 18:18:50 +00:00
|
|
|
file_numbers.push_back(number);
|
2012-11-26 21:56:45 +00:00
|
|
|
}
|
|
|
|
}
|
|
|
|
}
|
2014-09-09 18:18:50 +00:00
|
|
|
return file_numbers;
|
2012-11-26 21:56:45 +00:00
|
|
|
}
|
2014-08-12 05:10:32 +00:00
|
|
|
|
|
|
|
std::vector<std::uint64_t> ListTableFiles(Env* env, const std::string& path) {
|
|
|
|
return ListSpecificFiles(env, path, kTableFile);
|
|
|
|
}
|
2014-04-10 04:17:14 +00:00
|
|
|
} // namespace
|
2012-11-26 21:56:45 +00:00
|
|
|
|
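// Hedged sketch of the file-classification idiom ListSpecificFiles relies
// on: ParseFileName() decodes a DB-directory entry into a file number and a
// FileType (kTableFile, kDescriptorFile, kLogFile, ...). The helper below is
// illustrative and mirrors the kTableFile filter used above.
namespace {
inline bool IsSstFileNameSketch(const std::string& fname) {
  uint64_t number = 0;
  rocksdb::FileType type;
  return rocksdb::ParseFileName(fname, &number, &type) &&
         type == rocksdb::kTableFile;
}
}  // namespace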
2015-03-17 21:08:00 +00:00
|
|
|
TEST_F(DBTest, FlushOneColumnFamily) {
|
2014-10-31 22:08:10 +00:00
|
|
|
Options options = CurrentOptions();
|
2014-08-12 05:10:32 +00:00
|
|
|
CreateAndReopenWithCF({"pikachu", "ilya", "muromec", "dobrynia", "nikitich",
|
|
|
|
"alyosha", "popovich"},
|
2014-10-29 19:00:01 +00:00
|
|
|
options);
|
2014-08-12 05:10:32 +00:00
|
|
|
|
|
|
|
ASSERT_OK(Put(0, "Default", "Default"));
|
|
|
|
ASSERT_OK(Put(1, "pikachu", "pikachu"));
|
|
|
|
ASSERT_OK(Put(2, "ilya", "ilya"));
|
|
|
|
ASSERT_OK(Put(3, "muromec", "muromec"));
|
|
|
|
ASSERT_OK(Put(4, "dobrynia", "dobrynia"));
|
|
|
|
ASSERT_OK(Put(5, "nikitich", "nikitich"));
|
|
|
|
ASSERT_OK(Put(6, "alyosha", "alyosha"));
|
|
|
|
ASSERT_OK(Put(7, "popovich", "popovich"));
|
|
|
|
|
2014-11-11 21:47:22 +00:00
|
|
|
for (int i = 0; i < 8; ++i) {
|
2014-08-12 05:10:32 +00:00
|
|
|
Flush(i);
|
|
|
|
auto tables = ListTableFiles(env_, dbname_);
|
2014-08-13 00:26:47 +00:00
|
|
|
ASSERT_EQ(tables.size(), i + 1U);
|
2014-08-12 05:10:32 +00:00
|
|
|
}
|
|
|
|
}
|
|
|
|
|
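// Hedged sketch of the per-CF flush the loop above drives through the test
// harness: DB::Flush() takes a column family handle, so each family's
// memtable can be persisted independently. The helper name is illustrative.
namespace {
inline rocksdb::Status FlushOneCFSketch(rocksdb::DB* db,
                                        rocksdb::ColumnFamilyHandle* cfh) {
  rocksdb::FlushOptions fo;
  fo.wait = true;  // block until the memtable has been written out
  return db->Flush(fo, cfh);
}
}  // namespace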
2014-09-09 18:18:50 +00:00
|
|
|
// In https://reviews.facebook.net/D20661 we change
|
|
|
|
// recovery behavior: previously for each log file each column family
|
|
|
|
// memtable was flushed, even if it was empty. Now it's changed:
|
|
|
|
// we try to create the smallest number of table files by merging
|
|
|
|
// updates from multiple logs
|
2015-03-17 21:08:00 +00:00
|
|
|
TEST_F(DBTest, RecoverCheckFileAmountWithSmallWriteBuffer) {
|
2014-10-31 22:08:10 +00:00
|
|
|
Options options = CurrentOptions();
|
2014-09-09 18:18:50 +00:00
|
|
|
options.write_buffer_size = 5000000;
|
2014-10-29 19:00:01 +00:00
|
|
|
CreateAndReopenWithCF({"pikachu", "dobrynia", "nikitich"}, options);
|
2014-09-09 18:18:50 +00:00
|
|
|
|
|
|
|
// Since we will reopen DB with smaller write_buffer_size,
|
|
|
|
// each key will go to a new SST file
|
|
|
|
ASSERT_OK(Put(1, Key(10), DummyString(1000000)));
|
|
|
|
ASSERT_OK(Put(1, Key(10), DummyString(1000000)));
|
|
|
|
ASSERT_OK(Put(1, Key(10), DummyString(1000000)));
|
|
|
|
ASSERT_OK(Put(1, Key(10), DummyString(1000000)));
|
|
|
|
|
|
|
|
ASSERT_OK(Put(3, Key(10), DummyString(1)));
|
|
|
|
// Make 'dobrynia' flush and a new WAL file be created
|
|
|
|
ASSERT_OK(Put(2, Key(10), DummyString(7500000)));
|
|
|
|
ASSERT_OK(Put(2, Key(1), DummyString(1)));
|
|
|
|
dbfull()->TEST_WaitForFlushMemTable(handles_[2]);
|
|
|
|
{
|
|
|
|
auto tables = ListTableFiles(env_, dbname_);
|
2014-09-10 01:42:35 +00:00
|
|
|
ASSERT_EQ(tables.size(), static_cast<size_t>(1));
|
2014-09-09 18:18:50 +00:00
|
|
|
// Make sure 'dobrynia' was flushed: check the number of SST files
|
2014-09-10 01:42:35 +00:00
|
|
|
ASSERT_EQ(GetNumberOfSstFilesForColumnFamily(db_, "dobrynia"),
|
|
|
|
static_cast<uint64_t>(1));
|
2014-09-09 18:18:50 +00:00
|
|
|
}
|
|
|
|
// New WAL file
|
|
|
|
ASSERT_OK(Put(1, Key(1), DummyString(1)));
|
|
|
|
ASSERT_OK(Put(1, Key(1), DummyString(1)));
|
|
|
|
ASSERT_OK(Put(3, Key(10), DummyString(1)));
|
|
|
|
ASSERT_OK(Put(3, Key(10), DummyString(1)));
|
|
|
|
ASSERT_OK(Put(3, Key(10), DummyString(1)));
|
|
|
|
|
|
|
|
options.write_buffer_size = 10;
|
|
|
|
ReopenWithColumnFamilies({"default", "pikachu", "dobrynia", "nikitich"},
|
2014-10-29 19:00:42 +00:00
|
|
|
options);
|
2014-09-09 18:18:50 +00:00
|
|
|
{
|
|
|
|
// No inserts => default is empty
|
2014-09-10 01:42:35 +00:00
|
|
|
ASSERT_EQ(GetNumberOfSstFilesForColumnFamily(db_, "default"),
|
|
|
|
static_cast<uint64_t>(0));
|
2014-09-09 18:18:50 +00:00
|
|
|
// The first 4 keys go to separate SSTs + 1 more SST for 2 smaller keys
|
2014-09-10 01:42:35 +00:00
|
|
|
ASSERT_EQ(GetNumberOfSstFilesForColumnFamily(db_, "pikachu"),
|
|
|
|
static_cast<uint64_t>(5));
|
2014-09-09 18:18:50 +00:00
|
|
|
// 1 SST for big key + 1 SST for small one
|
2014-09-10 01:42:35 +00:00
|
|
|
ASSERT_EQ(GetNumberOfSstFilesForColumnFamily(db_, "dobrynia"),
|
|
|
|
static_cast<uint64_t>(2));
|
2014-09-09 18:18:50 +00:00
|
|
|
// 1 SST for all keys
|
2014-09-10 01:42:35 +00:00
|
|
|
ASSERT_EQ(GetNumberOfSstFilesForColumnFamily(db_, "nikitich"),
|
|
|
|
static_cast<uint64_t>(1));
|
2014-09-09 18:18:50 +00:00
|
|
|
}
|
|
|
|
}
|
|
|
|
|
|
|
|
// In https://reviews.facebook.net/D20661 we change
|
|
|
|
// recovery behavior: previously for each log file each column family
|
|
|
|
// memtable was flushed, even if it was empty. Now it's changed:
|
|
|
|
// we try to create the smallest number of table files by merging
|
|
|
|
// updates from multiple logs
|
2015-03-17 21:08:00 +00:00
|
|
|
TEST_F(DBTest, RecoverCheckFileAmount) {
|
2014-10-31 22:08:10 +00:00
|
|
|
Options options = CurrentOptions();
|
2014-09-09 18:18:50 +00:00
|
|
|
options.write_buffer_size = 100000;
|
2014-10-29 19:00:01 +00:00
|
|
|
CreateAndReopenWithCF({"pikachu", "dobrynia", "nikitich"}, options);
|
2014-09-09 18:18:50 +00:00
|
|
|
|
|
|
|
ASSERT_OK(Put(0, Key(1), DummyString(1)));
|
|
|
|
ASSERT_OK(Put(1, Key(1), DummyString(1)));
|
|
|
|
ASSERT_OK(Put(2, Key(1), DummyString(1)));
|
|
|
|
|
|
|
|
// Make the 'nikitich' memtable flush
|
|
|
|
ASSERT_OK(Put(3, Key(10), DummyString(1002400)));
|
|
|
|
ASSERT_OK(Put(3, Key(1), DummyString(1)));
|
|
|
|
dbfull()->TEST_WaitForFlushMemTable(handles_[3]);
|
|
|
|
// 4 memtables are not flushed, 1 SST file
|
|
|
|
{
|
|
|
|
auto tables = ListTableFiles(env_, dbname_);
|
2014-09-10 01:42:35 +00:00
|
|
|
ASSERT_EQ(tables.size(), static_cast<size_t>(1));
|
|
|
|
ASSERT_EQ(GetNumberOfSstFilesForColumnFamily(db_, "nikitich"),
|
|
|
|
static_cast<uint64_t>(1));
|
2014-09-09 18:18:50 +00:00
|
|
|
}
|
|
|
|
// Memtable for 'nikitich' has been flushed, a new WAL file has been opened
|
|
|
|
// 4 memtables are still not flushed
|
|
|
|
|
|
|
|
// Write to new WAL file
|
|
|
|
ASSERT_OK(Put(0, Key(1), DummyString(1)));
|
|
|
|
ASSERT_OK(Put(1, Key(1), DummyString(1)));
|
|
|
|
ASSERT_OK(Put(2, Key(1), DummyString(1)));
|
|
|
|
|
|
|
|
// Fill up 'nikitich' one more time
|
|
|
|
ASSERT_OK(Put(3, Key(10), DummyString(1002400)));
|
|
|
|
// make it flush
|
|
|
|
ASSERT_OK(Put(3, Key(1), DummyString(1)));
|
|
|
|
dbfull()->TEST_WaitForFlushMemTable(handles_[3]);
|
|
|
|
// There are still 4 memtables not flushed, and 2 SST files
|
|
|
|
ASSERT_OK(Put(0, Key(1), DummyString(1)));
|
|
|
|
ASSERT_OK(Put(1, Key(1), DummyString(1)));
|
|
|
|
ASSERT_OK(Put(2, Key(1), DummyString(1)));
|
|
|
|
|
|
|
|
{
|
|
|
|
auto tables = ListTableFiles(env_, dbname_);
|
2014-09-10 01:42:35 +00:00
|
|
|
ASSERT_EQ(tables.size(), static_cast<size_t>(2));
|
|
|
|
ASSERT_EQ(GetNumberOfSstFilesForColumnFamily(db_, "nikitich"),
|
|
|
|
static_cast<uint64_t>(2));
|
2014-09-09 18:18:50 +00:00
|
|
|
}
|
|
|
|
|
|
|
|
ReopenWithColumnFamilies({"default", "pikachu", "dobrynia", "nikitich"},
|
2014-10-29 19:00:42 +00:00
|
|
|
options);
|
2014-09-09 18:18:50 +00:00
|
|
|
{
|
|
|
|
std::vector<uint64_t> table_files = ListTableFiles(env_, dbname_);
|
|
|
|
// Check that records for 'default', 'dobrynia' and 'pikachu' from
|
|
|
|
// first, second and third WALs went to the same SST.
|
|
|
|
// So, there are 6 SSTs: three for 'nikitich', one for 'default', one for
|
|
|
|
// 'dobrynia', one for 'pikachu'
|
2014-09-10 01:42:35 +00:00
|
|
|
ASSERT_EQ(GetNumberOfSstFilesForColumnFamily(db_, "default"),
|
|
|
|
static_cast<uint64_t>(1));
|
|
|
|
ASSERT_EQ(GetNumberOfSstFilesForColumnFamily(db_, "nikitich"),
|
|
|
|
static_cast<uint64_t>(3));
|
|
|
|
ASSERT_EQ(GetNumberOfSstFilesForColumnFamily(db_, "dobrynia"),
|
|
|
|
static_cast<uint64_t>(1));
|
|
|
|
ASSERT_EQ(GetNumberOfSstFilesForColumnFamily(db_, "pikachu"),
|
|
|
|
static_cast<uint64_t>(1));
|
2014-09-09 18:18:50 +00:00
|
|
|
}
|
|
|
|
}
|
|
|
|
|
2015-03-17 21:08:00 +00:00
|
|
|
TEST_F(DBTest, SharedWriteBuffer) {
|
2015-01-28 23:30:02 +00:00
|
|
|
Options options = CurrentOptions();
|
2014-12-02 20:09:20 +00:00
|
|
|
options.db_write_buffer_size = 100000; // this is the real limit
|
|
|
|
options.write_buffer_size = 500000; // this is never hit
|
|
|
|
CreateAndReopenWithCF({"pikachu", "dobrynia", "nikitich"}, options);
|
|
|
|
|
|
|
|
// Trigger a flush on every CF
|
|
|
|
ASSERT_OK(Put(0, Key(1), DummyString(1)));
|
|
|
|
ASSERT_OK(Put(1, Key(1), DummyString(1)));
|
|
|
|
ASSERT_OK(Put(3, Key(1), DummyString(90000)));
|
|
|
|
ASSERT_OK(Put(2, Key(2), DummyString(20000)));
|
|
|
|
ASSERT_OK(Put(2, Key(1), DummyString(1)));
|
|
|
|
dbfull()->TEST_WaitForFlushMemTable(handles_[0]);
|
|
|
|
dbfull()->TEST_WaitForFlushMemTable(handles_[1]);
|
|
|
|
dbfull()->TEST_WaitForFlushMemTable(handles_[2]);
|
|
|
|
dbfull()->TEST_WaitForFlushMemTable(handles_[3]);
|
|
|
|
{
|
|
|
|
ASSERT_EQ(GetNumberOfSstFilesForColumnFamily(db_, "default"),
|
|
|
|
static_cast<uint64_t>(1));
|
|
|
|
ASSERT_EQ(GetNumberOfSstFilesForColumnFamily(db_, "pikachu"),
|
|
|
|
static_cast<uint64_t>(1));
|
|
|
|
ASSERT_EQ(GetNumberOfSstFilesForColumnFamily(db_, "dobrynia"),
|
|
|
|
static_cast<uint64_t>(1));
|
|
|
|
ASSERT_EQ(GetNumberOfSstFilesForColumnFamily(db_, "nikitich"),
|
|
|
|
static_cast<uint64_t>(1));
|
|
|
|
}
|
|
|
|
|
|
|
|
// Flush 'dobrynia' and 'nikitich'
|
|
|
|
ASSERT_OK(Put(2, Key(2), DummyString(50000)));
|
|
|
|
ASSERT_OK(Put(3, Key(2), DummyString(40000)));
|
|
|
|
ASSERT_OK(Put(2, Key(3), DummyString(20000)));
|
|
|
|
ASSERT_OK(Put(3, Key(2), DummyString(40000)));
|
|
|
|
dbfull()->TEST_WaitForFlushMemTable(handles_[1]);
|
|
|
|
dbfull()->TEST_WaitForFlushMemTable(handles_[2]);
|
|
|
|
dbfull()->TEST_WaitForFlushMemTable(handles_[3]);
|
|
|
|
{
|
|
|
|
ASSERT_EQ(GetNumberOfSstFilesForColumnFamily(db_, "default"),
|
|
|
|
static_cast<uint64_t>(1));
|
|
|
|
ASSERT_EQ(GetNumberOfSstFilesForColumnFamily(db_, "pikachu"),
|
|
|
|
static_cast<uint64_t>(1));
|
|
|
|
ASSERT_EQ(GetNumberOfSstFilesForColumnFamily(db_, "dobrynia"),
|
|
|
|
static_cast<uint64_t>(2));
|
|
|
|
ASSERT_EQ(GetNumberOfSstFilesForColumnFamily(db_, "nikitich"),
|
|
|
|
static_cast<uint64_t>(2));
|
|
|
|
}
|
|
|
|
|
|
|
|
// Make 'dobrynia' and 'nikitich' both take up 40% of space
|
|
|
|
// When 'pikachu' puts us over 100%, all 3 flush.
|
|
|
|
ASSERT_OK(Put(2, Key(2), DummyString(40000)));
|
|
|
|
ASSERT_OK(Put(1, Key(2), DummyString(20000)));
|
|
|
|
ASSERT_OK(Put(0, Key(1), DummyString(1)));
|
|
|
|
dbfull()->TEST_WaitForFlushMemTable(handles_[2]);
|
|
|
|
dbfull()->TEST_WaitForFlushMemTable(handles_[3]);
|
|
|
|
{
|
|
|
|
ASSERT_EQ(GetNumberOfSstFilesForColumnFamily(db_, "default"),
|
|
|
|
static_cast<uint64_t>(1));
|
|
|
|
ASSERT_EQ(GetNumberOfSstFilesForColumnFamily(db_, "pikachu"),
|
|
|
|
static_cast<uint64_t>(2));
|
|
|
|
ASSERT_EQ(GetNumberOfSstFilesForColumnFamily(db_, "dobrynia"),
|
|
|
|
static_cast<uint64_t>(3));
|
|
|
|
ASSERT_EQ(GetNumberOfSstFilesForColumnFamily(db_, "nikitich"),
|
|
|
|
static_cast<uint64_t>(3));
|
|
|
|
}
|
|
|
|
|
|
|
|
// Some remaining writes so 'default' and 'nikitich' flush on closure.
|
|
|
|
ASSERT_OK(Put(3, Key(1), DummyString(1)));
|
|
|
|
ReopenWithColumnFamilies({"default", "pikachu", "dobrynia", "nikitich"},
|
|
|
|
options);
|
|
|
|
{
|
|
|
|
ASSERT_EQ(GetNumberOfSstFilesForColumnFamily(db_, "default"),
|
|
|
|
static_cast<uint64_t>(2));
|
|
|
|
ASSERT_EQ(GetNumberOfSstFilesForColumnFamily(db_, "pikachu"),
|
|
|
|
static_cast<uint64_t>(2));
|
|
|
|
ASSERT_EQ(GetNumberOfSstFilesForColumnFamily(db_, "dobrynia"),
|
|
|
|
static_cast<uint64_t>(3));
|
|
|
|
ASSERT_EQ(GetNumberOfSstFilesForColumnFamily(db_, "nikitich"),
|
|
|
|
static_cast<uint64_t>(4));
|
|
|
|
}
|
|
|
|
}
|
|
|
|
|
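// Hedged restatement of the configuration SharedWriteBuffer drives: the
// per-memtable write_buffer_size is deliberately larger than the DB-wide
// db_write_buffer_size, so flushes are triggered by total memtable memory
// across all column families rather than by any single memtable filling up.
namespace {
inline rocksdb::Options SharedWriteBufferOptionsSketch() {
  rocksdb::Options options;
  options.db_write_buffer_size = 100000;  // DB-wide cap: the real limit
  options.write_buffer_size = 500000;     // per-memtable cap: never reached
  return options;
}
}  // namespace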
2015-03-17 21:08:00 +00:00
|
|
|
TEST_F(DBTest, PurgeInfoLogs) {
|
2014-08-14 17:05:16 +00:00
|
|
|
Options options = CurrentOptions();
|
|
|
|
options.keep_log_file_num = 5;
|
|
|
|
options.create_if_missing = true;
|
|
|
|
for (int mode = 0; mode <= 1; mode++) {
|
|
|
|
if (mode == 1) {
|
|
|
|
options.db_log_dir = dbname_ + "_logs";
|
|
|
|
env_->CreateDirIfMissing(options.db_log_dir);
|
|
|
|
} else {
|
|
|
|
options.db_log_dir = "";
|
|
|
|
}
|
|
|
|
for (int i = 0; i < 8; i++) {
|
2014-10-29 18:58:09 +00:00
|
|
|
Reopen(options);
|
2014-08-14 17:05:16 +00:00
|
|
|
}
|
|
|
|
|
|
|
|
std::vector<std::string> files;
|
|
|
|
env_->GetChildren(options.db_log_dir.empty() ? dbname_ : options.db_log_dir,
|
|
|
|
&files);
|
|
|
|
int info_log_count = 0;
|
|
|
|
for (std::string file : files) {
|
|
|
|
if (file.find("LOG") != std::string::npos) {
|
|
|
|
info_log_count++;
|
|
|
|
}
|
|
|
|
}
|
|
|
|
ASSERT_EQ(5, info_log_count);
|
2014-08-20 17:29:32 +00:00
|
|
|
|
2014-10-29 18:59:18 +00:00
|
|
|
Destroy(options);
|
2014-09-11 01:46:09 +00:00
|
|
|
// For mode 0, Destroy() should delete all the info logs under the DB dir.
|
2014-08-20 17:29:32 +00:00
|
|
|
// For mode 1, no info log file should have been put under the DB dir.
|
|
|
|
std::vector<std::string> db_files;
|
|
|
|
env_->GetChildren(dbname_, &db_files);
|
|
|
|
for (std::string file : db_files) {
|
|
|
|
ASSERT_TRUE(file.find("LOG") == std::string::npos);
|
|
|
|
}
|
|
|
|
|
|
|
|
if (mode == 1) {
|
|
|
|
// Cleaning up
|
|
|
|
env_->GetChildren(options.db_log_dir, &files);
|
|
|
|
for (std::string file : files) {
|
|
|
|
env_->DeleteFile(options.db_log_dir + "/" + file);
|
|
|
|
}
|
|
|
|
env_->DeleteDir(options.db_log_dir);
|
|
|
|
}
|
2014-08-14 17:05:16 +00:00
|
|
|
}
|
|
|
|
}
|
|
|
|
|
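// Hedged configuration sketch matching what PurgeInfoLogs verifies: at most
// keep_log_file_num info LOG files are retained, either under the DB
// directory (the default) or under db_log_dir when one is set. The helper
// name is illustrative.
namespace {
inline void ConfigureInfoLogPurgingSketch(rocksdb::Options* options,
                                          const std::string& log_dir) {
  options->keep_log_file_num = 5;  // info logs beyond the newest 5 are purged
  options->db_log_dir = log_dir;   // "" keeps info logs in the DB directory
}
}  // namespace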
2013-10-25 02:09:02 +00:00
|
|
|
|
2015-06-15 19:03:13 +00:00
|
|
|
//
|
|
|
|
// Test WAL recovery for the various modes available
|
|
|
|
//
|
|
|
|
class RecoveryTestHelper {
|
|
|
|
public:
|
2015-06-29 21:57:18 +00:00
|
|
|
// Number of WAL files to generate
|
|
|
|
static const int kWALFilesCount = 10;
|
|
|
|
// Starting number for the WAL file name like 00010.log
|
|
|
|
static const int kWALFileOffset = 10;
|
|
|
|
// Keys to be written per WAL file
|
|
|
|
static const int kKeysPerWALFile = 1024;
|
|
|
|
// Size of the value
|
|
|
|
static const int kValueSize = 10;
|
2015-06-15 19:03:13 +00:00
|
|
|
|
2015-06-29 21:57:18 +00:00
|
|
|
// Create WAL files with values filled in
|
2015-07-13 19:11:05 +00:00
|
|
|
static void FillData(DBTest* test, Options& options, const size_t wal_count,
|
|
|
|
size_t& count) {
|
|
|
|
DBOptions& db_options = options;
|
2015-06-29 21:57:18 +00:00
|
|
|
|
|
|
|
count = 0;
|
|
|
|
|
|
|
|
shared_ptr<Cache> table_cache = NewLRUCache(50000, 16);
|
|
|
|
EnvOptions env_options;
|
|
|
|
WriteBuffer write_buffer(db_options.db_write_buffer_size);
|
|
|
|
|
|
|
|
unique_ptr<VersionSet> versions;
|
|
|
|
unique_ptr<WalManager> wal_manager;
|
|
|
|
WriteController write_controller;
|
|
|
|
|
|
|
|
versions.reset(new VersionSet(test->dbname_, &db_options, env_options,
|
|
|
|
table_cache.get(), &write_buffer,
|
|
|
|
&write_controller));
|
|
|
|
|
|
|
|
wal_manager.reset(new WalManager(db_options, env_options));
|
|
|
|
|
|
|
|
std::unique_ptr<log::Writer> current_log_writer;
|
|
|
|
|
|
|
|
for (size_t j = kWALFileOffset; j < wal_count + kWALFileOffset; j++) {
|
2015-07-13 19:11:05 +00:00
|
|
|
uint64_t current_log_number = j;
|
2015-06-29 21:57:18 +00:00
|
|
|
std::string fname = LogFileName(test->dbname_, current_log_number);
|
|
|
|
unique_ptr<WritableFile> file;
|
|
|
|
ASSERT_OK(db_options.env->NewWritableFile(fname, &file, env_options));
|
2015-07-17 23:16:11 +00:00
|
|
|
unique_ptr<WritableFileWriter> file_writer(
|
|
|
|
new WritableFileWriter(std::move(file), env_options));
|
|
|
|
current_log_writer.reset(new log::Writer(std::move(file_writer)));
|
2015-06-29 21:57:18 +00:00
|
|
|
|
|
|
|
for (int i = 0; i < kKeysPerWALFile; i++) {
|
|
|
|
std::string key = "key" + ToString(count++);
|
|
|
|
std::string value = test->DummyString(kValueSize);
|
|
|
|
assert(current_log_writer.get() != nullptr);
|
2015-07-13 19:11:05 +00:00
|
|
|
uint64_t seq = versions->LastSequence() + 1;
|
2015-06-29 21:57:18 +00:00
|
|
|
WriteBatch batch;
|
|
|
|
batch.Put(key, value);
|
|
|
|
WriteBatchInternal::SetSequence(&batch, seq);
|
|
|
|
current_log_writer->AddRecord(WriteBatchInternal::Contents(&batch));
|
|
|
|
versions->SetLastSequence(seq);
|
|
|
|
}
|
2015-06-15 19:03:13 +00:00
|
|
|
}
|
2015-06-29 21:57:18 +00:00
|
|
|
}
|
|
|
|
|
|
|
|
// Recreate and fill the store with some data
|
|
|
|
static size_t FillData(DBTest* test, Options& options) {
|
|
|
|
options.create_if_missing = true;
|
|
|
|
test->DestroyAndReopen(options);
|
|
|
|
test->Close();
|
|
|
|
|
|
|
|
size_t count = 0;
|
|
|
|
FillData(test, options, kWALFilesCount, count);
|
2015-06-15 19:03:13 +00:00
|
|
|
return count;
|
|
|
|
}
|
|
|
|
|
|
|
|
// Read back all the keys we wrote and return the number of keys found
|
|
|
|
static size_t GetData(DBTest* test) {
|
|
|
|
size_t count = 0;
|
2015-06-29 21:57:18 +00:00
|
|
|
for (size_t i = 0; i < kWALFilesCount * kKeysPerWALFile; i++) {
|
2015-06-15 19:03:13 +00:00
|
|
|
if (test->Get("key" + ToString(i)) != "NOT_FOUND") {
|
|
|
|
++count;
|
|
|
|
}
|
|
|
|
}
|
|
|
|
return count;
|
|
|
|
}
|
|
|
|
|
2015-06-29 21:57:18 +00:00
|
|
|
// Manually corrupt the specified WAL
|
2015-07-13 19:11:05 +00:00
|
|
|
static void CorruptWAL(DBTest* test, Options& options, const double off,
|
|
|
|
const double len, const int wal_file_id,
|
|
|
|
const bool trunc = false) {
|
2015-06-29 21:57:18 +00:00
|
|
|
Env* env = options.env;
|
|
|
|
std::string fname = LogFileName(test->dbname_, wal_file_id);
|
|
|
|
uint64_t size;
|
|
|
|
ASSERT_OK(env->GetFileSize(fname, &size));
|
|
|
|
ASSERT_GT(size, 0);
|
2015-07-10 01:01:08 +00:00
|
|
|
#ifdef OS_WIN
|
|
|
|
// Windows disk cache behaves differently. When we truncate,
|
|
|
|
// the original content may still be cached because the original
|
|
|
|
// handle is still open. Generally, Windows prohibits
|
|
|
|
// shared access to files; it is not needed for the WAL, but we allow
|
|
|
|
// it here so tests can induce corruption.
|
|
|
|
test->Close();
|
|
|
|
#endif
|
2015-06-29 21:57:18 +00:00
|
|
|
if (trunc) {
|
|
|
|
ASSERT_EQ(0, truncate(fname.c_str(), size * off));
|
|
|
|
} else {
|
|
|
|
InduceCorruption(fname, size * off, size * len);
|
|
|
|
}
|
|
|
|
}
|
|
|
|
|
2015-06-15 19:03:13 +00:00
|
|
|
// Overwrite data with 'a' from offset for length len
|
|
|
|
static void InduceCorruption(const std::string& filename, uint32_t offset,
|
|
|
|
uint32_t len) {
|
|
|
|
ASSERT_GT(len, 0);
|
|
|
|
|
|
|
|
int fd = open(filename.c_str(), O_RDWR);
|
|
|
|
|
|
|
|
ASSERT_GT(fd, 0);
|
|
|
|
ASSERT_EQ(offset, lseek(fd, offset, SEEK_SET));
|
|
|
|
|
2015-07-01 23:13:49 +00:00
|
|
|
void* buf = alloca(len);
|
2015-06-15 19:03:13 +00:00
|
|
|
memset(buf, 'a', len);
|
|
|
|
ASSERT_EQ(len, write(fd, buf, len));
|
|
|
|
|
|
|
|
close(fd);
|
|
|
|
}
|
|
|
|
};
|
|
|
|
|
|
|
|
// Test scope:
|
|
|
|
// - We expect to open the data store when there are incomplete trailing writes
|
|
|
|
// at the end of any of the logs
|
|
|
|
// - We do not expect to open the data store in the presence of corruption
|
|
|
|
TEST_F(DBTest, kTolerateCorruptedTailRecords) {
|
2015-07-13 19:11:05 +00:00
|
|
|
const int jstart = RecoveryTestHelper::kWALFileOffset;
|
|
|
|
const int jend = jstart + RecoveryTestHelper::kWALFilesCount;
|
2015-06-29 21:57:18 +00:00
|
|
|
|
2015-07-13 19:11:05 +00:00
|
|
|
for (auto trunc : {true, false}) { /* Corruption style */
|
|
|
|
for (int i = 0; i < 4; i++) { /* Corruption offset position */
|
2015-06-29 21:57:18 +00:00
|
|
|
for (int j = jstart; j < jend; j++) { /* WAL file */
|
|
|
|
// Fill data for testing
|
|
|
|
Options options = CurrentOptions();
|
|
|
|
const size_t row_count = RecoveryTestHelper::FillData(this, options);
|
|
|
|
// Induce a checksum failure or a parsing error
|
2015-07-13 19:11:05 +00:00
|
|
|
RecoveryTestHelper::CorruptWAL(this, options, /*off=*/i * .3,
|
|
|
|
/*len%=*/.1, /*wal=*/j, trunc);
|
2015-06-29 21:57:18 +00:00
|
|
|
|
|
|
|
if (trunc) {
|
|
|
|
options.wal_recovery_mode =
|
|
|
|
WALRecoveryMode::kTolerateCorruptedTailRecords;
|
|
|
|
options.create_if_missing = false;
|
|
|
|
ASSERT_OK(TryReopen(options));
|
|
|
|
const size_t recovered_row_count = RecoveryTestHelper::GetData(this);
|
|
|
|
ASSERT_TRUE(i == 0 || recovered_row_count > 0);
|
|
|
|
ASSERT_LT(recovered_row_count, row_count);
|
|
|
|
} else {
|
|
|
|
options.wal_recovery_mode =
|
|
|
|
WALRecoveryMode::kTolerateCorruptedTailRecords;
|
|
|
|
ASSERT_NOK(TryReopen(options));
|
|
|
|
}
|
2015-06-15 19:03:13 +00:00
|
|
|
}
|
|
|
|
}
|
|
|
|
}
|
|
|
|
}
|
|
|
|
|
|
|
|
// Test scope:
|
|
|
|
// We don't expect the data store to be opened if there is any corruption
|
|
|
|
// (leading, middle or trailing -- incomplete writes or corruption)
|
|
|
|
TEST_F(DBTest, kAbsoluteConsistency) {
|
2015-07-13 19:11:05 +00:00
|
|
|
const int jstart = RecoveryTestHelper::kWALFileOffset;
|
|
|
|
const int jend = jstart + RecoveryTestHelper::kWALFilesCount;
|
2015-06-29 21:57:18 +00:00
|
|
|
|
|
|
|
// Verify clean slate behavior
|
2015-06-15 19:03:13 +00:00
|
|
|
Options options = CurrentOptions();
|
|
|
|
const size_t row_count = RecoveryTestHelper::FillData(this, options);
|
|
|
|
options.wal_recovery_mode = WALRecoveryMode::kAbsoluteConsistency;
|
2015-06-29 21:57:18 +00:00
|
|
|
options.create_if_missing = false;
|
2015-06-15 19:03:13 +00:00
|
|
|
ASSERT_OK(TryReopen(options));
|
|
|
|
ASSERT_EQ(RecoveryTestHelper::GetData(this), row_count);
|
|
|
|
|
2015-06-29 21:57:18 +00:00
|
|
|
for (auto trunc : {true, false}) { /* Corruption style */
|
2015-07-13 19:11:05 +00:00
|
|
|
for (int i = 0; i < 4; i++) { /* Corruption offset position */
|
2015-06-15 19:03:13 +00:00
|
|
|
if (trunc && i == 0) {
|
|
|
|
continue;
|
|
|
|
}
|
|
|
|
|
2015-06-29 21:57:18 +00:00
|
|
|
for (int j = jstart; j < jend; j++) { /* wal files */
|
|
|
|
// fill with new data
|
|
|
|
RecoveryTestHelper::FillData(this, options);
|
|
|
|
// corrupt the wal
|
2015-07-13 19:11:05 +00:00
|
|
|
RecoveryTestHelper::CorruptWAL(this, options, /*off=*/i * .3,
|
2015-06-29 21:57:18 +00:00
|
|
|
/*len%=*/.1, j, trunc);
|
|
|
|
// verify
|
|
|
|
options.wal_recovery_mode = WALRecoveryMode::kAbsoluteConsistency;
|
|
|
|
options.create_if_missing = false;
|
|
|
|
ASSERT_NOK(TryReopen(options));
|
|
|
|
}
|
2015-06-15 19:03:13 +00:00
|
|
|
}
|
|
|
|
}
|
|
|
|
}
|
|
|
|
|
|
|
|
// Test scope:
|
|
|
|
// - We expect to open data store under all circumstances
|
|
|
|
// - We expect only the data up to the point where the first error was encountered
|
|
|
|
TEST_F(DBTest, kPointInTimeRecovery) {
|
2015-07-13 19:11:05 +00:00
|
|
|
const int jstart = RecoveryTestHelper::kWALFileOffset;
|
2015-06-29 21:57:18 +00:00
|
|
|
const int jend = jstart + RecoveryTestHelper::kWALFilesCount;
|
2015-07-13 19:11:05 +00:00
|
|
|
const int maxkeys =
|
|
|
|
RecoveryTestHelper::kWALFilesCount * RecoveryTestHelper::kKeysPerWALFile;
|
2015-06-29 21:57:18 +00:00
|
|
|
|
2015-07-13 19:11:05 +00:00
|
|
|
for (auto trunc : {true, false}) { /* Corruption style */
|
|
|
|
for (int i = 0; i < 4; i++) { /* Offset of corruption */
|
2015-06-29 21:57:18 +00:00
|
|
|
for (int j = jstart; j < jend; j++) { /* WAL file */
|
|
|
|
// Fill data for testing
|
|
|
|
Options options = CurrentOptions();
|
|
|
|
const size_t row_count = RecoveryTestHelper::FillData(this, options);
|
|
|
|
|
|
|
|
// Corrupt the wal
|
2015-07-13 19:11:05 +00:00
|
|
|
RecoveryTestHelper::CorruptWAL(this, options, /*off=*/i * .3,
|
2015-06-29 21:57:18 +00:00
|
|
|
/*len%=*/.1, j, trunc);
|
|
|
|
|
|
|
|
// Verify
|
|
|
|
options.wal_recovery_mode = WALRecoveryMode::kPointInTimeRecovery;
|
|
|
|
options.create_if_missing = false;
|
|
|
|
ASSERT_OK(TryReopen(options));
|
2015-06-15 19:03:13 +00:00
|
|
|
|
2015-06-29 21:57:18 +00:00
|
|
|
// Probe data for invariants
|
|
|
|
size_t recovered_row_count = RecoveryTestHelper::GetData(this);
|
|
|
|
ASSERT_LT(recovered_row_count, row_count);
|
2015-06-15 19:03:13 +00:00
|
|
|
|
2015-06-29 21:57:18 +00:00
|
|
|
bool expect_data = true;
|
|
|
|
for (int k = 0; k < maxkeys; ++k) {
|
|
|
|
bool found = Get("key" + ToString(i)) != "NOT_FOUND";
|
|
|
|
if (expect_data && !found) {
|
|
|
|
expect_data = false;
|
|
|
|
}
|
|
|
|
ASSERT_EQ(found, expect_data);
|
|
|
|
}
|
2015-06-15 19:03:13 +00:00
|
|
|
|
2015-06-29 21:57:18 +00:00
|
|
|
const size_t min = RecoveryTestHelper::kKeysPerWALFile *
|
2015-07-13 19:11:05 +00:00
|
|
|
(j - RecoveryTestHelper::kWALFileOffset);
|
2015-06-29 21:57:18 +00:00
|
|
|
ASSERT_GE(recovered_row_count, min);
|
|
|
|
if (!trunc && i != 0) {
|
|
|
|
const size_t max = RecoveryTestHelper::kKeysPerWALFile *
|
2015-07-13 19:11:05 +00:00
|
|
|
(j - RecoveryTestHelper::kWALFileOffset + 1);
|
2015-06-29 21:57:18 +00:00
|
|
|
ASSERT_LE(recovered_row_count, max);
|
2015-06-15 19:03:13 +00:00
|
|
|
}
|
|
|
|
}
|
|
|
|
}
|
|
|
|
}
|
|
|
|
}
|
|
|
|
|
|
|
|
// Test scope:
|
|
|
|
// - We expect to open the data store under all scenarios
|
|
|
|
// - We expect to have recovered records past the corruption zone
|
|
|
|
TEST_F(DBTest, kSkipAnyCorruptedRecords) {
|
2015-07-13 19:11:05 +00:00
|
|
|
const int jstart = RecoveryTestHelper::kWALFileOffset;
|
|
|
|
const int jend = jstart + RecoveryTestHelper::kWALFilesCount;
|
2015-06-29 21:57:18 +00:00
|
|
|
|
2015-07-13 19:11:05 +00:00
|
|
|
for (auto trunc : {true, false}) { /* Corruption style */
|
|
|
|
for (int i = 0; i < 4; i++) { /* Corruption offset */
|
2015-06-29 21:57:18 +00:00
|
|
|
for (int j = jstart; j < jend; j++) { /* wal files */
|
|
|
|
// Fill data for testing
|
|
|
|
Options options = CurrentOptions();
|
|
|
|
const size_t row_count = RecoveryTestHelper::FillData(this, options);
|
|
|
|
|
|
|
|
// Corrupt the WAL
|
2015-07-13 19:11:05 +00:00
|
|
|
RecoveryTestHelper::CorruptWAL(this, options, /*off=*/i * .3,
|
2015-06-29 21:57:18 +00:00
|
|
|
/*len%=*/.1, j, trunc);
|
|
|
|
|
|
|
|
// Verify behavior
|
|
|
|
options.wal_recovery_mode = WALRecoveryMode::kSkipAnyCorruptedRecords;
|
|
|
|
options.create_if_missing = false;
|
|
|
|
ASSERT_OK(TryReopen(options));
|
2015-06-15 19:03:13 +00:00
|
|
|
|
2015-06-29 21:57:18 +00:00
|
|
|
// Probe data for invariants
|
|
|
|
size_t recovered_row_count = RecoveryTestHelper::GetData(this);
|
|
|
|
ASSERT_LT(recovered_row_count, row_count);
|
2015-06-15 19:03:13 +00:00
|
|
|
|
2015-06-29 21:57:18 +00:00
|
|
|
if (!trunc) {
|
|
|
|
ASSERT_TRUE(i != 0 || recovered_row_count > 0);
|
|
|
|
}
|
2015-06-15 19:03:13 +00:00
|
|
|
}
|
|
|
|
}
|
|
|
|
}
|
|
|
|
}
|
|
|
|
|
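// Hedged summary sketch of the four WAL recovery modes exercised above; only
// the enum values come from the source, the mapping below just restates the
// expectations the tests encode. (kTolerateCorruptedTailRecords, the default,
// tolerates an incomplete trailing write but fails on other corruption.)
// The helper name is illustrative.
namespace {
inline rocksdb::WALRecoveryMode PickRecoveryModeSketch(
    bool need_strict_consistency, bool need_availability) {
  if (need_strict_consistency) {
    // Fail Open() on any imperfection, even a cleanly truncated tail.
    return rocksdb::WALRecoveryMode::kAbsoluteConsistency;
  }
  if (need_availability) {
    // Always open; keep every record that parses, even past corrupt regions.
    return rocksdb::WALRecoveryMode::kSkipAnyCorruptedRecords;
  }
  // Stop replay at the first error, yielding a consistent prefix of history.
  return rocksdb::WALRecoveryMode::kPointInTimeRecovery;
}
}  // namespace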
2013-08-14 23:32:46 +00:00
|
|
|
|
2011-05-28 00:53:58 +00:00
|
|
|
// Multi-threaded test:
|
|
|
|
namespace {
|
|
|
|
|
2014-02-07 22:47:16 +00:00
|
|
|
static const int kColumnFamilies = 10;
|
|
|
|
static const int kNumThreads = 10;
|
2011-05-28 00:53:58 +00:00
|
|
|
static const int kTestSeconds = 10;
|
|
|
|
static const int kNumKeys = 1000;
|
|
|
|
|
|
|
|
struct MTState {
|
|
|
|
DBTest* test;
|
2014-10-27 21:50:21 +00:00
|
|
|
std::atomic<bool> stop;
|
|
|
|
std::atomic<int> counter[kNumThreads];
|
|
|
|
std::atomic<bool> thread_done[kNumThreads];
|
2011-05-28 00:53:58 +00:00
|
|
|
};
|
|
|
|
|
|
|
|
struct MTThread {
|
|
|
|
MTState* state;
|
|
|
|
int id;
|
|
|
|
};
|
|
|
|
|
|
|
|
static void MTThreadBody(void* arg) {
|
|
|
|
MTThread* t = reinterpret_cast<MTThread*>(arg);
|
2012-01-25 22:56:52 +00:00
|
|
|
int id = t->id;
|
2011-05-28 00:53:58 +00:00
|
|
|
DB* db = t->state->test->db_;
|
2014-10-27 21:50:21 +00:00
|
|
|
int counter = 0;
|
2012-01-25 22:56:52 +00:00
|
|
|
fprintf(stderr, "... starting thread %d\n", id);
|
|
|
|
Random rnd(1000 + id);
|
2011-05-28 00:53:58 +00:00
|
|
|
char valbuf[1500];
|
2014-10-27 21:50:21 +00:00
|
|
|
while (t->state->stop.load(std::memory_order_acquire) == false) {
|
|
|
|
t->state->counter[id].store(counter, std::memory_order_release);
|
2011-05-28 00:53:58 +00:00
|
|
|
|
|
|
|
int key = rnd.Uniform(kNumKeys);
|
|
|
|
char keybuf[20];
|
|
|
|
snprintf(keybuf, sizeof(keybuf), "%016d", key);
|
|
|
|
|
|
|
|
if (rnd.OneIn(2)) {
|
2014-02-07 22:47:16 +00:00
|
|
|
// Write values of the form <key, my id, counter, cf, unique_id>.
|
|
|
|
// into each of the CFs
|
2011-05-28 00:53:58 +00:00
|
|
|
// We add some padding to force compactions.
|
2014-02-07 22:47:16 +00:00
|
|
|
int unique_id = rnd.Uniform(1000000);
|
2014-08-18 22:19:17 +00:00
|
|
|
|
|
|
|
// Half of the time directly use WriteBatch. Half of the time use
|
|
|
|
// WriteBatchWithIndex.
|
|
|
|
if (rnd.OneIn(2)) {
|
|
|
|
WriteBatch batch;
|
|
|
|
for (int cf = 0; cf < kColumnFamilies; ++cf) {
|
|
|
|
snprintf(valbuf, sizeof(valbuf), "%d.%d.%d.%d.%-1000d", key, id,
|
|
|
|
static_cast<int>(counter), cf, unique_id);
|
|
|
|
batch.Put(t->state->test->handles_[cf], Slice(keybuf), Slice(valbuf));
|
|
|
|
}
|
|
|
|
ASSERT_OK(db->Write(WriteOptions(), &batch));
|
|
|
|
} else {
|
|
|
|
WriteBatchWithIndex batch(db->GetOptions().comparator);
|
|
|
|
for (int cf = 0; cf < kColumnFamilies; ++cf) {
|
|
|
|
snprintf(valbuf, sizeof(valbuf), "%d.%d.%d.%d.%-1000d", key, id,
|
|
|
|
static_cast<int>(counter), cf, unique_id);
|
|
|
|
batch.Put(t->state->test->handles_[cf], Slice(keybuf), Slice(valbuf));
|
|
|
|
}
|
|
|
|
ASSERT_OK(db->Write(WriteOptions(), batch.GetWriteBatch()));
|
2014-02-07 22:47:16 +00:00
|
|
|
}
|
2011-05-28 00:53:58 +00:00
|
|
|
} else {
|
2014-02-07 22:47:16 +00:00
|
|
|
// Read a value and verify that it matches the pattern written above
|
|
|
|
// and that writes to all column families were atomic (unique_id is the
|
|
|
|
// same)
|
|
|
|
std::vector<Slice> keys(kColumnFamilies, Slice(keybuf));
|
|
|
|
std::vector<std::string> values;
|
|
|
|
std::vector<Status> statuses =
|
|
|
|
db->MultiGet(ReadOptions(), t->state->test->handles_, keys, &values);
|
|
|
|
Status s = statuses[0];
|
|
|
|
// all statuses have to be the same
|
|
|
|
for (size_t i = 1; i < statuses.size(); ++i) {
|
|
|
|
// they are either both ok or both not-found
|
|
|
|
ASSERT_TRUE((s.ok() && statuses[i].ok()) ||
|
|
|
|
(s.IsNotFound() && statuses[i].IsNotFound()));
|
|
|
|
}
|
2011-05-28 00:53:58 +00:00
|
|
|
if (s.IsNotFound()) {
|
|
|
|
// Key has not yet been written
|
|
|
|
} else {
|
|
|
|
// Check that the writer thread counter is >= the counter in the value
|
|
|
|
ASSERT_OK(s);
|
2014-02-07 22:47:16 +00:00
|
|
|
int unique_id = -1;
|
|
|
|
for (int i = 0; i < kColumnFamilies; ++i) {
|
|
|
|
int k, w, c, cf, u;
|
|
|
|
ASSERT_EQ(5, sscanf(values[i].c_str(), "%d.%d.%d.%d.%d", &k, &w,
|
|
|
|
&c, &cf, &u))
|
|
|
|
<< values[i];
|
|
|
|
ASSERT_EQ(k, key);
|
|
|
|
ASSERT_GE(w, 0);
|
|
|
|
ASSERT_LT(w, kNumThreads);
|
2014-10-27 21:50:21 +00:00
|
|
|
ASSERT_LE(c, t->state->counter[w].load(std::memory_order_acquire));
|
2014-02-07 22:47:16 +00:00
|
|
|
ASSERT_EQ(cf, i);
|
|
|
|
if (i == 0) {
|
|
|
|
unique_id = u;
|
|
|
|
} else {
|
|
|
|
// this checks that updates across column families happened
|
|
|
|
// atomically -- all unique ids are the same
|
|
|
|
ASSERT_EQ(u, unique_id);
|
|
|
|
}
|
|
|
|
}
|
2011-05-28 00:53:58 +00:00
|
|
|
}
|
|
|
|
}
|
|
|
|
counter++;
|
|
|
|
}
|
2014-10-27 21:50:21 +00:00
|
|
|
t->state->thread_done[id].store(true, std::memory_order_release);
|
2012-01-25 22:56:52 +00:00
|
|
|
fprintf(stderr, "... stopping thread %d after %d ops\n", id, int(counter));
|
2011-05-28 00:53:58 +00:00
|
|
|
}
|
|
|
|
|
2011-10-31 17:22:06 +00:00
|
|
|
} // namespace
|
2011-05-28 00:53:58 +00:00
|
|
|
|
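// Minimal sketch of the cross-CF atomicity the MultiThreaded test verifies:
// a single WriteBatch applied through DB::Write() commits to every column
// family atomically, which is why the readers above require all per-CF
// unique_ids to match. The helper name is illustrative.
namespace {
inline rocksdb::Status AtomicMultiCFPutSketch(
    rocksdb::DB* db, const std::vector<rocksdb::ColumnFamilyHandle*>& handles,
    const rocksdb::Slice& key, const rocksdb::Slice& value) {
  rocksdb::WriteBatch batch;
  for (auto* cfh : handles) {
    batch.Put(cfh, key, value);  // stage the same payload into every CF
  }
  return db->Write(rocksdb::WriteOptions(), &batch);  // all-or-nothing
}
}  // namespace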
2015-03-19 01:18:12 +00:00
|
|
|
class MultiThreadedDBTest : public DBTest,
|
|
|
|
public ::testing::WithParamInterface<int> {
|
|
|
|
public:
|
|
|
|
virtual void SetUp() override { option_config_ = GetParam(); }
|
|
|
|
|
|
|
|
static std::vector<int> GenerateOptionConfigs() {
|
|
|
|
std::vector<int> optionConfigs;
|
|
|
|
for (int optionConfig = kDefault; optionConfig < kEnd; ++optionConfig) {
|
|
|
|
// skip as HashCuckooRep does not support snapshots
|
|
|
|
if (optionConfig != kHashCuckoo) {
|
|
|
|
optionConfigs.push_back(optionConfig);
|
|
|
|
}
|
|
|
|
}
|
|
|
|
return optionConfigs;
|
|
|
|
}
|
|
|
|
};
|
|
|
|
|
|
|
|
TEST_P(MultiThreadedDBTest, MultiThreaded) {
|
2015-02-02 22:49:22 +00:00
|
|
|
anon::OptionsOverride options_override;
|
|
|
|
options_override.skip_policy = kSkipNoSnapshot;
|
2015-03-19 01:18:12 +00:00
|
|
|
std::vector<std::string> cfs;
|
|
|
|
for (int i = 1; i < kColumnFamilies; ++i) {
|
|
|
|
cfs.push_back(ToString(i));
|
|
|
|
}
|
|
|
|
CreateAndReopenWithCF(cfs, CurrentOptions(options_override));
|
|
|
|
// Initialize state
|
|
|
|
MTState mt;
|
|
|
|
mt.test = this;
|
|
|
|
mt.stop.store(false, std::memory_order_release);
|
|
|
|
for (int id = 0; id < kNumThreads; id++) {
|
|
|
|
mt.counter[id].store(0, std::memory_order_release);
|
|
|
|
mt.thread_done[id].store(false, std::memory_order_release);
|
|
|
|
}
|
2011-05-28 00:53:58 +00:00
|
|
|
|
2015-03-19 01:18:12 +00:00
|
|
|
// Start threads
|
|
|
|
MTThread thread[kNumThreads];
|
|
|
|
for (int id = 0; id < kNumThreads; id++) {
|
|
|
|
thread[id].state = &mt;
|
|
|
|
thread[id].id = id;
|
|
|
|
env_->StartThread(MTThreadBody, &thread[id]);
|
|
|
|
}
|
2011-05-28 00:53:58 +00:00
|
|
|
|
2015-03-19 01:18:12 +00:00
|
|
|
// Let them run for a while
|
|
|
|
env_->SleepForMicroseconds(kTestSeconds * 1000000);
|
2011-05-28 00:53:58 +00:00
|
|
|
|
2015-03-19 01:18:12 +00:00
|
|
|
// Stop the threads and wait for them to finish
|
|
|
|
mt.stop.store(true, std::memory_order_release);
|
|
|
|
for (int id = 0; id < kNumThreads; id++) {
|
|
|
|
while (mt.thread_done[id].load(std::memory_order_acquire) == false) {
|
|
|
|
env_->SleepForMicroseconds(100000);
|
2011-05-28 00:53:58 +00:00
|
|
|
}
|
2015-03-19 01:18:12 +00:00
|
|
|
}
|
2011-05-28 00:53:58 +00:00
|
|
|
}
|
|
|
|
|
2015-03-19 01:18:12 +00:00
|
|
|
INSTANTIATE_TEST_CASE_P(
|
|
|
|
MultiThreaded, MultiThreadedDBTest,
|
|
|
|
::testing::ValuesIn(MultiThreadedDBTest::GenerateOptionConfigs()));
|
|
|
|
|
2014-01-14 22:49:31 +00:00
|
|
|
// Group commit test:
|
|
|
|
namespace {
|
|
|
|
|
|
|
|
static const int kGCNumThreads = 4;
|
|
|
|
static const int kGCNumKeys = 1000;
|
|
|
|
|
|
|
|
struct GCThread {
|
|
|
|
DB* db;
|
|
|
|
int id;
|
|
|
|
std::atomic<bool> done;
|
|
|
|
};
|
|
|
|
|
|
|
|
static void GCThreadBody(void* arg) {
|
|
|
|
GCThread* t = reinterpret_cast<GCThread*>(arg);
|
|
|
|
int id = t->id;
|
|
|
|
DB* db = t->db;
|
|
|
|
WriteOptions wo;
|
|
|
|
|
|
|
|
for (int i = 0; i < kGCNumKeys; ++i) {
|
2014-11-25 04:44:49 +00:00
|
|
|
std::string kv(ToString(i + id * kGCNumKeys));
|
2014-01-14 22:49:31 +00:00
|
|
|
ASSERT_OK(db->Put(wo, kv, kv));
|
|
|
|
}
|
|
|
|
t->done = true;
|
|
|
|
}
|
|
|
|
|
|
|
|
} // namespace
|
|
|
|
|
2015-03-17 21:08:00 +00:00
|
|
|
TEST_F(DBTest, GroupCommitTest) {
|
2014-01-14 22:49:31 +00:00
|
|
|
do {
|
2014-02-07 22:47:16 +00:00
|
|
|
Options options = CurrentOptions();
|
2014-11-06 18:14:47 +00:00
|
|
|
options.env = env_;
|
|
|
|
env_->log_write_slowdown_.store(100);
|
2014-02-07 22:47:16 +00:00
|
|
|
options.statistics = rocksdb::CreateDBStatistics();
|
2014-10-29 18:58:09 +00:00
|
|
|
Reopen(options);
|
2014-02-07 22:47:16 +00:00
|
|
|
|
2014-01-14 22:49:31 +00:00
|
|
|
// Start threads
|
|
|
|
GCThread thread[kGCNumThreads];
|
|
|
|
for (int id = 0; id < kGCNumThreads; id++) {
|
|
|
|
thread[id].id = id;
|
|
|
|
thread[id].db = db_;
|
|
|
|
thread[id].done = false;
|
|
|
|
env_->StartThread(GCThreadBody, &thread[id]);
|
|
|
|
}
|
|
|
|
|
|
|
|
for (int id = 0; id < kGCNumThreads; id++) {
|
|
|
|
while (thread[id].done == false) {
|
|
|
|
env_->SleepForMicroseconds(100000);
|
|
|
|
}
|
|
|
|
}
|
2014-11-06 18:14:47 +00:00
|
|
|
env_->log_write_slowdown_.store(0);
|
|
|
|
|
2014-02-07 22:47:16 +00:00
|
|
|
ASSERT_GT(TestGetTickerCount(options, WRITE_DONE_BY_OTHER), 0);
|
2014-01-14 22:49:31 +00:00
|
|
|
|
|
|
|
std::vector<std::string> expected_db;
|
|
|
|
for (int i = 0; i < kGCNumThreads * kGCNumKeys; ++i) {
|
2014-11-25 04:44:49 +00:00
|
|
|
expected_db.push_back(ToString(i));
|
2014-01-14 22:49:31 +00:00
|
|
|
}
|
|
|
|
sort(expected_db.begin(), expected_db.end());
|
|
|
|
|
|
|
|
Iterator* itr = db_->NewIterator(ReadOptions());
|
|
|
|
itr->SeekToFirst();
|
|
|
|
for (auto x : expected_db) {
|
|
|
|
ASSERT_TRUE(itr->Valid());
|
|
|
|
ASSERT_EQ(itr->key().ToString(), x);
|
|
|
|
ASSERT_EQ(itr->value().ToString(), x);
|
|
|
|
itr->Next();
|
|
|
|
}
|
|
|
|
ASSERT_TRUE(!itr->Valid());
|
2014-01-14 23:41:30 +00:00
|
|
|
delete itr;
|
2014-01-14 22:49:31 +00:00
|
|
|
|
2015-06-02 19:35:12 +00:00
|
|
|
HistogramData hist_data = {0};
|
|
|
|
options.statistics->histogramData(DB_WRITE, &hist_data);
|
|
|
|
ASSERT_GT(hist_data.average, 0.0);
|
2014-04-25 19:21:34 +00:00
|
|
|
} while (ChangeOptions(kSkipNoSeekToLast));
|
2014-01-14 22:49:31 +00:00
|
|
|
}
|
|
|
|
|
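// Hedged sketch of the group-commit effect GroupCommitTest observes: under
// concurrent writers one leader persists a merged batch while the followers
// complete without writing the WAL themselves, which surfaces as a non-zero
// WRITE_DONE_BY_OTHER ticker. Assumes options.statistics was set to
// CreateDBStatistics() before the DB was opened; the helper name is
// illustrative.
namespace {
inline uint64_t WritesDoneByOtherSketch(const rocksdb::Options& options) {
  return options.statistics->getTickerCount(rocksdb::WRITE_DONE_BY_OTHER);
}
}  // namespace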
2011-05-21 02:17:43 +00:00
|
|
|
namespace {
|
|
|
|
typedef std::map<std::string, std::string> KVMap;
|
|
|
|
}
|
|
|
|
|
2011-03-18 22:37:00 +00:00
|
|
|
class ModelDB: public DB {
|
|
|
|
public:
|
2011-05-21 02:17:43 +00:00
|
|
|
class ModelSnapshot : public Snapshot {
|
|
|
|
public:
|
|
|
|
KVMap map_;
|
2015-01-30 02:22:43 +00:00
|
|
|
|
2015-02-26 19:28:41 +00:00
|
|
|
virtual SequenceNumber GetSequenceNumber() const override {
|
2015-01-30 02:22:43 +00:00
|
|
|
// no need to call this
|
|
|
|
assert(false);
|
|
|
|
return 0;
|
|
|
|
}
|
2011-05-21 02:17:43 +00:00
|
|
|
};
|
|
|
|
|
2014-02-11 01:04:44 +00:00
|
|
|
explicit ModelDB(const Options& options) : options_(options) {}
|
2013-12-03 19:14:09 +00:00
|
|
|
using DB::Put;
|
2014-02-11 01:04:44 +00:00
|
|
|
virtual Status Put(const WriteOptions& o, ColumnFamilyHandle* cf,
|
2015-02-26 19:28:41 +00:00
|
|
|
const Slice& k, const Slice& v) override {
|
2014-02-11 01:04:44 +00:00
|
|
|
WriteBatch batch;
|
2014-03-14 20:40:06 +00:00
|
|
|
batch.Put(cf, k, v);
|
2014-02-11 01:04:44 +00:00
|
|
|
return Write(o, &batch);
|
[RocksDB] [Column Family] Interface proposal
Summary:
<This diff is for Column Family branch>
Sharing some of the work I've done so far. This diff compiles and passes the tests.
The biggest change is in options.h - I broke down Options into two parts - DBOptions and ColumnFamilyOptions. DBOptions is DB-specific (env, create_if_missing, block_cache, etc.) and ColumnFamilyOptions is column family-specific (all compaction options, compresion options, etc.). Note that this does not break backwards compatibility at all.
Further, I created DBWithColumnFamily which inherits DB interface and adds new functions with column family support. Clients can transparently switch to DBWithColumnFamily and it will not break their backwards compatibility.
There are few methods worth checking out: ListColumnFamilies(), MultiNewIterator(), MultiGet() and GetSnapshot(). [GetSnapshot() returns the snapshot across all column families for now - I think that's what we agreed on]
Finally, I made small changes to WriteBatch so we are able to atomically insert data across column families.
Please provide feedback.
Test Plan: make check works, the code is backward compatible
Reviewers: dhruba, haobo, sdong, kailiu, emayanke
CC: leveldb
Differential Revision: https://reviews.facebook.net/D14445
2013-12-03 19:14:09 +00:00
|
|
|
}
|
|
|
|
using DB::Merge;
|
2014-02-11 01:04:44 +00:00
|
|
|
virtual Status Merge(const WriteOptions& o, ColumnFamilyHandle* cf,
|
2015-02-26 19:28:41 +00:00
|
|
|
const Slice& k, const Slice& v) override {
|
2014-02-11 01:04:44 +00:00
|
|
|
WriteBatch batch;
|
2014-03-14 20:40:06 +00:00
|
|
|
batch.Merge(cf, k, v);
|
2014-02-11 01:04:44 +00:00
|
|
|
return Write(o, &batch);
|
[RocksDB] [Column Family] Interface proposal
Summary:
<This diff is for Column Family branch>
Sharing some of the work I've done so far. This diff compiles and passes the tests.
The biggest change is in options.h - I broke down Options into two parts - DBOptions and ColumnFamilyOptions. DBOptions is DB-specific (env, create_if_missing, block_cache, etc.) and ColumnFamilyOptions is column family-specific (all compaction options, compresion options, etc.). Note that this does not break backwards compatibility at all.
Further, I created DBWithColumnFamily which inherits DB interface and adds new functions with column family support. Clients can transparently switch to DBWithColumnFamily and it will not break their backwards compatibility.
There are few methods worth checking out: ListColumnFamilies(), MultiNewIterator(), MultiGet() and GetSnapshot(). [GetSnapshot() returns the snapshot across all column families for now - I think that's what we agreed on]
Finally, I made small changes to WriteBatch so we are able to atomically insert data across column families.
Please provide feedback.
Test Plan: make check works, the code is backward compatible
Reviewers: dhruba, haobo, sdong, kailiu, emayanke
CC: leveldb
Differential Revision: https://reviews.facebook.net/D14445
2013-12-03 19:14:09 +00:00
|
|
|
}
|
|
|
|
using DB::Delete;
|
2014-02-11 01:04:44 +00:00
|
|
|
virtual Status Delete(const WriteOptions& o, ColumnFamilyHandle* cf,
|
2015-02-26 19:28:41 +00:00
|
|
|
const Slice& key) override {
|
2014-02-11 01:04:44 +00:00
|
|
|
WriteBatch batch;
|
2014-03-14 20:40:06 +00:00
|
|
|
batch.Delete(cf, key);
|
2014-02-11 01:04:44 +00:00
|
|
|
return Write(o, &batch);
|
[RocksDB] [Column Family] Interface proposal
Summary:
<This diff is for Column Family branch>
Sharing some of the work I've done so far. This diff compiles and passes the tests.
The biggest change is in options.h - I broke down Options into two parts - DBOptions and ColumnFamilyOptions. DBOptions is DB-specific (env, create_if_missing, block_cache, etc.) and ColumnFamilyOptions is column family-specific (all compaction options, compresion options, etc.). Note that this does not break backwards compatibility at all.
Further, I created DBWithColumnFamily which inherits DB interface and adds new functions with column family support. Clients can transparently switch to DBWithColumnFamily and it will not break their backwards compatibility.
There are few methods worth checking out: ListColumnFamilies(), MultiNewIterator(), MultiGet() and GetSnapshot(). [GetSnapshot() returns the snapshot across all column families for now - I think that's what we agreed on]
Finally, I made small changes to WriteBatch so we are able to atomically insert data across column families.
Please provide feedback.
Test Plan: make check works, the code is backward compatible
Reviewers: dhruba, haobo, sdong, kailiu, emayanke
CC: leveldb
Differential Revision: https://reviews.facebook.net/D14445
2013-12-03 19:14:09 +00:00
|
|
|
}
|
|
|
|
using DB::Get;
|
2014-02-11 01:04:44 +00:00
|
|
|
virtual Status Get(const ReadOptions& options, ColumnFamilyHandle* cf,
|
2015-02-26 19:28:41 +00:00
|
|
|
const Slice& key, std::string* value) override {
|
2013-06-05 18:22:38 +00:00
|
|
|
return Status::NotSupported(key);
|
|
|
|
}
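
  // Point reads are deliberately unsupported (Get and MultiGet return
  // NotSupported): the randomized tests validate state exclusively by
  // iterating, via CompareIterators() defined further below.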

  using DB::MultiGet;
  virtual std::vector<Status> MultiGet(
      const ReadOptions& options,
      const std::vector<ColumnFamilyHandle*>& column_family,
      const std::vector<Slice>& keys,
      std::vector<std::string>* values) override {
    std::vector<Status> s(keys.size(),
                          Status::NotSupported("Not implemented."));
    return s;
  }

  using DB::GetPropertiesOfAllTables;
  virtual Status GetPropertiesOfAllTables(
      ColumnFamilyHandle* column_family,
      TablePropertiesCollection* props) override {
    return Status();
  }

  using DB::KeyMayExist;
  virtual bool KeyMayExist(const ReadOptions& options,
                           ColumnFamilyHandle* column_family, const Slice& key,
                           std::string* value,
                           bool* value_found = nullptr) override {
    if (value_found != nullptr) {
      *value_found = false;
    }
    return true;  // Not Supported directly
  }

  using DB::NewIterator;
  virtual Iterator* NewIterator(const ReadOptions& options,
                                ColumnFamilyHandle* column_family) override {
    if (options.snapshot == nullptr) {
      KVMap* saved = new KVMap;
      *saved = map_;
      return new ModelIter(saved, true);
    } else {
      const KVMap* snapshot_state =
          &(reinterpret_cast<const ModelSnapshot*>(options.snapshot)->map_);
      return new ModelIter(snapshot_state, false);
    }
  }

  virtual Status NewIterators(
      const ReadOptions& options,
      const std::vector<ColumnFamilyHandle*>& column_family,
      std::vector<Iterator*>* iterators) override {
    return Status::NotSupported("Not supported yet");
  }

  virtual const Snapshot* GetSnapshot() override {
    ModelSnapshot* snapshot = new ModelSnapshot;
    snapshot->map_ = map_;
    return snapshot;
  }

  virtual void ReleaseSnapshot(const Snapshot* snapshot) override {
    delete reinterpret_cast<const ModelSnapshot*>(snapshot);
  }

  virtual Status Write(const WriteOptions& options,
                       WriteBatch* batch) override {
    class Handler : public WriteBatch::Handler {
     public:
      KVMap* map_;
      virtual void Put(const Slice& key, const Slice& value) override {
        (*map_)[key.ToString()] = value.ToString();
      }
      virtual void Merge(const Slice& key, const Slice& value) override {
        // ignore merge for now
        //(*map_)[key.ToString()] = value.ToString();
      }
      virtual void Delete(const Slice& key) override {
        map_->erase(key.ToString());
      }
    };
    Handler handler;
    handler.map_ = &map_;
    return batch->Iterate(&handler);
  }
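
  // Write() is the single mutation path: WriteBatch::Iterate() replays each
  // record in the batch, in insertion order, against the handler's
  // Put/Merge/Delete callbacks, which apply the changes directly to map_.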

  using DB::GetProperty;
  virtual bool GetProperty(ColumnFamilyHandle* column_family,
                           const Slice& property, std::string* value) override {
    return false;
  }
  using DB::GetIntProperty;
  virtual bool GetIntProperty(ColumnFamilyHandle* column_family,
                              const Slice& property, uint64_t* value) override {
    return false;
  }

  using DB::GetApproximateSizes;
  virtual void GetApproximateSizes(ColumnFamilyHandle* column_family,
                                   const Range* range, int n, uint64_t* sizes,
                                   bool include_memtable) override {
    for (int i = 0; i < n; i++) {
      sizes[i] = 0;
    }
  }

  using DB::CompactRange;
  virtual Status CompactRange(const CompactRangeOptions& options,
                              ColumnFamilyHandle* column_family,
                              const Slice* start, const Slice* end) override {
    return Status::NotSupported("Not supported operation.");
  }

  using DB::CompactFiles;
  virtual Status CompactFiles(
      const CompactionOptions& compact_options,
      ColumnFamilyHandle* column_family,
      const std::vector<std::string>& input_file_names,
      const int output_level, const int output_path_id = -1) override {
    return Status::NotSupported("Not supported operation.");
  }

  using DB::NumberLevels;
  virtual int NumberLevels(ColumnFamilyHandle* column_family) override {
    return 1;
  }

  using DB::MaxMemCompactionLevel;
  virtual int MaxMemCompactionLevel(
      ColumnFamilyHandle* column_family) override {
    return 1;
  }

  using DB::Level0StopWriteTrigger;
  virtual int Level0StopWriteTrigger(
      ColumnFamilyHandle* column_family) override {
    return -1;
  }

  virtual const std::string& GetName() const override { return name_; }

  virtual Env* GetEnv() const override { return nullptr; }

  using DB::GetOptions;
  virtual const Options& GetOptions(
      ColumnFamilyHandle* column_family) const override {
    return options_;
  }

  using DB::GetDBOptions;
  virtual const DBOptions& GetDBOptions() const override { return options_; }

  using DB::Flush;
  virtual Status Flush(const rocksdb::FlushOptions& options,
                       ColumnFamilyHandle* column_family) override {
    Status ret;
    return ret;
  }

  virtual Status SyncWAL() override {
    return Status::OK();
  }

  virtual Status DisableFileDeletions() override { return Status::OK(); }
  virtual Status EnableFileDeletions(bool force) override {
    return Status::OK();
  }

  virtual Status GetLiveFiles(std::vector<std::string>&, uint64_t* size,
                              bool flush_memtable = true) override {
    return Status::OK();
  }

  virtual Status GetSortedWalFiles(VectorLogPtr& files) override {
    return Status::OK();
  }

  virtual Status DeleteFile(std::string name) override { return Status::OK(); }

  virtual Status GetDbIdentity(std::string& identity) const override {
    return Status::OK();
  }

  virtual SequenceNumber GetLatestSequenceNumber() const override { return 0; }

  virtual Status GetUpdatesSince(
      rocksdb::SequenceNumber, unique_ptr<rocksdb::TransactionLogIterator>*,
      const TransactionLogIterator::ReadOptions& read_options =
          TransactionLogIterator::ReadOptions()) override {
    return Status::NotSupported("Not supported in Model DB");
  }

  virtual ColumnFamilyHandle* DefaultColumnFamily() const override {
    return nullptr;
  }

  virtual void GetColumnFamilyMetaData(
      ColumnFamilyHandle* column_family,
      ColumnFamilyMetaData* metadata) override {}

 private:
  class ModelIter: public Iterator {
   public:
    ModelIter(const KVMap* map, bool owned)
        : map_(map), owned_(owned), iter_(map_->end()) {}
    ~ModelIter() {
      if (owned_) delete map_;
    }
    virtual bool Valid() const override { return iter_ != map_->end(); }
    virtual void SeekToFirst() override { iter_ = map_->begin(); }
    virtual void SeekToLast() override {
      if (map_->empty()) {
        iter_ = map_->end();
      } else {
        iter_ = map_->find(map_->rbegin()->first);
      }
    }
    virtual void Seek(const Slice& k) override {
      iter_ = map_->lower_bound(k.ToString());
    }
    virtual void Next() override { ++iter_; }
    virtual void Prev() override {
      if (iter_ == map_->begin()) {
        iter_ = map_->end();
        return;
      }
      --iter_;
    }

    virtual Slice key() const override { return iter_->first; }
    virtual Slice value() const override { return iter_->second; }
    virtual Status status() const override { return Status::OK(); }

   private:
    const KVMap* const map_;
    const bool owned_;  // Do we own map_
    KVMap::const_iterator iter_;
  };
  const Options options_;
  KVMap map_;
  std::string name_ = "";
};
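
// A minimal usage sketch (illustrative only, mirroring the Randomized test
// below): apply identical writes to the model and the real DB, then diff
// them through their iterators.
//
//   ModelDB model(CurrentOptions());
//   ASSERT_OK(model.Put(WriteOptions(), "k", "v"));
//   ASSERT_OK(db_->Put(WriteOptions(), "k", "v"));
//   ASSERT_TRUE(CompareIterators(0, &model, db_, nullptr, nullptr));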

static std::string RandomKey(Random* rnd, int minimum = 0) {
  int len;
  do {
    len = (rnd->OneIn(3)
               ? 1  // Short sometimes to encourage collisions
               : (rnd->OneIn(100) ? rnd->Skewed(10) : rnd->Uniform(10)));
  } while (len < minimum);
  return test::RandomKey(rnd, len);
}
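
// Roughly one key in three from RandomKey() is a single byte, so the random
// workload keeps colliding on the same keys (overwrites and deletes of live
// keys); the rare Skewed(10) draw mixes in much longer keys.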

static bool CompareIterators(int step,
                             DB* model,
                             DB* db,
                             const Snapshot* model_snap,
                             const Snapshot* db_snap) {
  ReadOptions options;
  options.snapshot = model_snap;
  Iterator* miter = model->NewIterator(options);
  options.snapshot = db_snap;
  Iterator* dbiter = db->NewIterator(options);
  bool ok = true;
  int count = 0;
  for (miter->SeekToFirst(), dbiter->SeekToFirst();
       ok && miter->Valid() && dbiter->Valid();
       miter->Next(), dbiter->Next()) {
    count++;
    if (miter->key().compare(dbiter->key()) != 0) {
      fprintf(stderr, "step %d: Key mismatch: '%s' vs. '%s'\n",
              step,
              EscapeString(miter->key()).c_str(),
              EscapeString(dbiter->key()).c_str());
      ok = false;
      break;
    }

    if (miter->value().compare(dbiter->value()) != 0) {
      fprintf(stderr, "step %d: Value mismatch for key '%s': '%s' vs. '%s'\n",
              step,
              EscapeString(miter->key()).c_str(),
              EscapeString(miter->value()).c_str(),
              EscapeString(dbiter->value()).c_str());
      ok = false;
    }
  }

  if (ok) {
    if (miter->Valid() != dbiter->Valid()) {
      fprintf(stderr, "step %d: Mismatch at end of iterators: %d vs. %d\n",
              step, miter->Valid(), dbiter->Valid());
      ok = false;
    }
  }
  delete miter;
  delete dbiter;
  return ok;
}
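
// The Randomized test drives the model and the real DB with an identical
// random operation mix (roughly 45% Put, 45% Delete, 10% multi-op WriteBatch
// per step). Every 100 steps it diffs the two databases, both at head and
// through the snapshots saved on the previous round, then reopens the real
// DB and diffs again; every 2000 steps it logs progress.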
TEST_F(DBTest, Randomized) {
|
2015-02-03 20:19:56 +00:00
|
|
|
anon::OptionsOverride options_override;
|
|
|
|
options_override.skip_policy = kSkipNoSnapshot;
|
2011-03-18 22:37:00 +00:00
|
|
|
Random rnd(test::RandomSeed());
|
2012-04-17 15:36:46 +00:00
|
|
|
do {
|
2015-02-03 20:19:56 +00:00
|
|
|
ModelDB model(CurrentOptions(options_override));
|
2012-04-17 15:36:46 +00:00
|
|
|
const int N = 10000;
|
2013-03-01 02:04:58 +00:00
|
|
|
const Snapshot* model_snap = nullptr;
|
|
|
|
const Snapshot* db_snap = nullptr;
|
2012-04-17 15:36:46 +00:00
|
|
|
std::string k, v;
|
|
|
|
for (int step = 0; step < N; step++) {
|
|
|
|
// TODO(sanjay): Test Get() works
|
|
|
|
int p = rnd.Uniform(100);
|
2013-08-23 06:10:02 +00:00
|
|
|
int minimum = 0;
|
2013-12-20 17:35:24 +00:00
|
|
|
if (option_config_ == kHashSkipList ||
|
2013-12-27 20:56:27 +00:00
|
|
|
option_config_ == kHashLinkList ||
|
Add a new mem-table representation based on cuckoo hash.
Summary:
= Major Changes =
* Add a new mem-table representation, HashCuckooRep, which is based cuckoo hash.
Cuckoo hash uses multiple hash functions. This allows each key to have multiple
possible locations in the mem-table.
- Put: When insert a key, it will try to find whether one of its possible
locations is vacant and store the key. If none of its possible
locations are available, then it will kick out a victim key and
store at that location. The kicked-out victim key will then be
stored at a vacant space of its possible locations or kick-out
another victim. In this diff, the kick-out path (known as
cuckoo-path) is found using BFS, which guarantees to be the shortest.
- Get: Simply tries all possible locations of a key --- this guarantees
worst-case constant time complexity.
- Time complexity: O(1) for Get, and average O(1) for Put if the
fullness of the mem-table is below 80%.
- Default using two hash functions, the number of hash functions used
by the cuckoo-hash may dynamically increase if it fails to find a
short-enough kick-out path.
- Currently, HashCuckooRep does not support iteration and snapshots,
as our current main purpose of this is to optimize point access.
= Minor Changes =
* Add IsSnapshotSupported() to DB to indicate whether the current DB
supports snapshots. If it returns false, then DB::GetSnapshot() will
always return nullptr.
Test Plan:
Run existing tests. Will develop a test specifically for cuckoo hash in
the next diff.
Reviewers: sdong, haobo
Reviewed By: sdong
CC: leveldb, dhruba, igor
Differential Revision: https://reviews.facebook.net/D16155
2014-04-30 00:13:46 +00:00
|
|
|
option_config_ == kHashCuckoo ||
|
2014-04-10 21:19:43 +00:00
|
|
|
option_config_ == kPlainTableFirstBytePrefix ||
|
|
|
|
option_config_ == kBlockBasedTableWithWholeKeyHashIndex ||
|
|
|
|
option_config_ == kBlockBasedTableWithPrefixHashIndex) {
|
2013-08-23 06:10:02 +00:00
|
|
|
minimum = 1;
|
|
|
|
}
|
2012-04-17 15:36:46 +00:00
|
|
|
if (p < 45) { // Put
|
2013-08-23 06:10:02 +00:00
|
|
|
k = RandomKey(&rnd, minimum);
|
2012-04-17 15:36:46 +00:00
|
|
|
v = RandomString(&rnd,
|
|
|
|
rnd.OneIn(20)
|
|
|
|
? 100 + rnd.Uniform(100)
|
|
|
|
: rnd.Uniform(8));
|
|
|
|
ASSERT_OK(model.Put(WriteOptions(), k, v));
|
|
|
|
ASSERT_OK(db_->Put(WriteOptions(), k, v));
|
|
|
|
|
|
|
|
} else if (p < 90) { // Delete
|
2013-08-23 06:10:02 +00:00
|
|
|
k = RandomKey(&rnd, minimum);
|
2012-04-17 15:36:46 +00:00
|
|
|
ASSERT_OK(model.Delete(WriteOptions(), k));
|
|
|
|
ASSERT_OK(db_->Delete(WriteOptions(), k));
|
|
|
|
|
|
|
|
|
|
|
|
} else { // Multi-element batch
|
|
|
|
WriteBatch b;
|
|
|
|
const int num = rnd.Uniform(8);
|
|
|
|
for (int i = 0; i < num; i++) {
|
|
|
|
if (i == 0 || !rnd.OneIn(10)) {
|
2013-08-23 06:10:02 +00:00
|
|
|
k = RandomKey(&rnd, minimum);
|
2012-04-17 15:36:46 +00:00
|
|
|
} else {
|
|
|
|
// Periodically re-use the same key from the previous iter, so
|
|
|
|
// we have multiple entries in the write batch for the same key
|
|
|
|
}
|
|
|
|
if (rnd.OneIn(2)) {
|
|
|
|
v = RandomString(&rnd, rnd.Uniform(10));
|
|
|
|
b.Put(k, v);
|
|
|
|
} else {
|
|
|
|
b.Delete(k);
|
|
|
|
}
|
2011-03-18 22:37:00 +00:00
|
|
|
}
|
2012-04-17 15:36:46 +00:00
|
|
|
ASSERT_OK(model.Write(WriteOptions(), &b));
|
|
|
|
        ASSERT_OK(db_->Write(WriteOptions(), &b));
      }

      if ((step % 100) == 0) {
        // For DB instances that use the hash index + block-based table, the
        // iterator will be invalid right when seeking a non-existent key,
        // rather than returning a key that is close to it.
        if (option_config_ != kBlockBasedTableWithWholeKeyHashIndex &&
            option_config_ != kBlockBasedTableWithPrefixHashIndex) {
          ASSERT_TRUE(CompareIterators(step, &model, db_, nullptr, nullptr));
          ASSERT_TRUE(CompareIterators(step, &model, db_, model_snap, db_snap));
        }

        // Save a snapshot from each DB this time that we'll use next
        // time we compare things, to make sure the current state is
        // preserved with the snapshot
        if (model_snap != nullptr) model.ReleaseSnapshot(model_snap);
        if (db_snap != nullptr) db_->ReleaseSnapshot(db_snap);

        auto options = CurrentOptions(options_override);
        Reopen(options);
        ASSERT_TRUE(CompareIterators(step, &model, db_, nullptr, nullptr));

        model_snap = model.GetSnapshot();
        db_snap = db_->GetSnapshot();
      }

      if ((step % 2000) == 0) {
        fprintf(stderr,
                "DBTest.Randomized, option ID: %d, step: %d out of %d\n",
                option_config_, step, N);
      }
    }
    if (model_snap != nullptr) model.ReleaseSnapshot(model_snap);
    if (db_snap != nullptr) db_->ReleaseSnapshot(db_snap);
    // skip cuckoo hash as it does not support snapshots
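    // (HashCuckooRep reports IsSnapshotSupported() == false, so GetSnapshot()
    // would return nullptr and the snapshot comparisons above could not run.)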
  } while (ChangeOptions(kSkipDeletesFilterFirst | kSkipNoSeekToLast |
                         kSkipHashCuckoo));
}

TEST_F(DBTest, MultiGetSimple) {
  do {
    CreateAndReopenWithCF({"pikachu"}, CurrentOptions());
    ASSERT_OK(Put(1, "k1", "v1"));
    ASSERT_OK(Put(1, "k2", "v2"));
    ASSERT_OK(Put(1, "k3", "v3"));
    ASSERT_OK(Put(1, "k4", "v4"));
    ASSERT_OK(Delete(1, "k4"));
    ASSERT_OK(Put(1, "k5", "v5"));
    ASSERT_OK(Delete(1, "no_key"));

    std::vector<Slice> keys({"k1", "k2", "k3", "k4", "k5", "no_key"});

    std::vector<std::string> values(20, "Temporary data to be overwritten");
    std::vector<ColumnFamilyHandle*> cfs(keys.size(), handles_[1]);
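
    // MultiGet returns one Status per key and resizes *values to match keys:
    // found keys overwrite the placeholder strings, while missing keys
    // ("k4" was deleted, "no_key" never existed) report IsNotFound().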
    std::vector<Status> s = db_->MultiGet(ReadOptions(), cfs, keys, &values);
    ASSERT_EQ(values.size(), keys.size());
    ASSERT_EQ(values[0], "v1");
    ASSERT_EQ(values[1], "v2");
    ASSERT_EQ(values[2], "v3");
    ASSERT_EQ(values[4], "v5");

    ASSERT_OK(s[0]);
    ASSERT_OK(s[1]);
    ASSERT_OK(s[2]);
    ASSERT_TRUE(s[3].IsNotFound());
    ASSERT_OK(s[4]);
    ASSERT_TRUE(s[5].IsNotFound());
  } while (ChangeCompactOptions());
}

TEST_F(DBTest, MultiGetEmpty) {
  do {
    CreateAndReopenWithCF({"pikachu"}, CurrentOptions());
    // Empty Key Set
    std::vector<Slice> keys;
    std::vector<std::string> values;
    std::vector<ColumnFamilyHandle*> cfs;
    std::vector<Status> s = db_->MultiGet(ReadOptions(), cfs, keys, &values);
    ASSERT_EQ(s.size(), 0U);

    // Empty Database, Empty Key Set
    Options options = CurrentOptions();
    options.create_if_missing = true;
    DestroyAndReopen(options);
    CreateAndReopenWithCF({"pikachu"}, options);
    s = db_->MultiGet(ReadOptions(), cfs, keys, &values);
    ASSERT_EQ(s.size(), 0U);

    // Empty Database, Search for Keys
    keys.resize(2);
    keys[0] = "a";
    keys[1] = "b";
    cfs.push_back(handles_[0]);
    cfs.push_back(handles_[1]);
    s = db_->MultiGet(ReadOptions(), cfs, keys, &values);
    ASSERT_EQ(static_cast<int>(s.size()), 2);
    ASSERT_TRUE(s[0].IsNotFound() && s[1].IsNotFound());
  } while (ChangeCompactOptions());
}
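
// A minimal usage sketch (not part of the original tests): when every key
// lives in the default column family, the keys-only overload of
// DB::MultiGet can be used instead of passing explicit handles:
//   // (assuming an open DB* db)
//   std::vector<std::string> vals;
//   std::vector<Status> st =
//       db->MultiGet(ReadOptions(), {Slice("a"), Slice("b")}, &vals);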

namespace {
void PrefixScanInit(DBTest* dbtest) {
  char buf[100];
  std::string keystr;
  const int small_range_sstfiles = 5;
  const int big_range_sstfiles = 5;

  // Generate 11 sst files with the following prefix ranges.
  // GROUP 0: [0,10]                             (level 1)
  // GROUP 1: [1,2], [2,3], [3,4], [4,5], [5,6]  (level 0)
  // GROUP 2: [0,6], [0,7], [0,8], [0,9], [0,10] (level 0)
  //
  // A seek with the previous API would do 11 random I/Os (to all the
  // files). With the new API and a prefix filter enabled, we should
  // only do 2 random I/Os, to the 2 files containing the key.

  // GROUP 0
  snprintf(buf, sizeof(buf), "%02d______:start", 0);
  keystr = std::string(buf);
  ASSERT_OK(dbtest->Put(keystr, keystr));
  snprintf(buf, sizeof(buf), "%02d______:end", 10);
  keystr = std::string(buf);
  ASSERT_OK(dbtest->Put(keystr, keystr));
  dbtest->Flush();
  dbtest->dbfull()->CompactRange(CompactRangeOptions(), nullptr,
                                 nullptr);  // move to level 1

  // GROUP 1
  for (int i = 1; i <= small_range_sstfiles; i++) {
    snprintf(buf, sizeof(buf), "%02d______:start", i);
    keystr = std::string(buf);
    ASSERT_OK(dbtest->Put(keystr, keystr));
    snprintf(buf, sizeof(buf), "%02d______:end", i + 1);
    keystr = std::string(buf);
    ASSERT_OK(dbtest->Put(keystr, keystr));
    dbtest->Flush();
  }

  // GROUP 2
  for (int i = 1; i <= big_range_sstfiles; i++) {
    snprintf(buf, sizeof(buf), "%02d______:start", 0);
    keystr = std::string(buf);
    ASSERT_OK(dbtest->Put(keystr, keystr));
    snprintf(buf, sizeof(buf), "%02d______:end",
             small_range_sstfiles + i + 1);
    keystr = std::string(buf);
    ASSERT_OK(dbtest->Put(keystr, keystr));
    dbtest->Flush();
  }
}
}  // namespace

TEST_F(DBTest, PrefixScan) {
  XFUNC_TEST("", "dbtest_prefix", prefix_skip1, XFuncPoint::SetSkip,
             kSkipNoPrefix);
  while (ChangeFilterOptions()) {
    int count;
    Slice prefix;
    Slice key;
    char buf[100];
    Iterator* iter;
    snprintf(buf, sizeof(buf), "03______:");
    prefix = Slice(buf, 8);
    key = Slice(buf, 9);

    // db configs
    env_->count_random_reads_ = true;
    Options options = CurrentOptions();
    options.env = env_;
    options.prefix_extractor.reset(NewFixedPrefixTransform(8));
    options.disable_auto_compactions = true;
    options.max_background_compactions = 2;
    options.create_if_missing = true;
    options.memtable_factory.reset(NewHashSkipListRepFactory(16));

    BlockBasedTableOptions table_options;
    table_options.no_block_cache = true;
    table_options.filter_policy.reset(NewBloomFilterPolicy(10));
    table_options.whole_key_filtering = false;
    options.table_factory.reset(NewBlockBasedTableFactory(table_options));
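
    // With an 8-byte fixed prefix extractor, whole_key_filtering disabled,
    // and a 10-bits-per-key bloom filter, the filter blocks are built on key
    // prefixes; that is what lets the seek below touch only the files whose
    // range can contain the "03______" prefix.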

    // 11 RAND I/Os
    DestroyAndReopen(options);
    PrefixScanInit(this);
    count = 0;
    env_->random_read_counter_.Reset();
    iter = db_->NewIterator(ReadOptions());
    for (iter->Seek(prefix); iter->Valid(); iter->Next()) {
      if (!iter->key().starts_with(prefix)) {
        break;
      }
      count++;
    }
    ASSERT_OK(iter->status());
    delete iter;
    ASSERT_EQ(count, 2);
    ASSERT_EQ(env_->random_read_counter_.Read(), 2);
    Close();
  }  // end of while
  XFUNC_TEST("", "dbtest_prefix", prefix_skip1, XFuncPoint::SetSkip, 0);
}

TEST_F(DBTest, BlockBasedTablePrefixIndexTest) {
  // create a DB with block prefix index
  BlockBasedTableOptions table_options;
  Options options = CurrentOptions();
  table_options.index_type = BlockBasedTableOptions::kHashSearch;
  options.table_factory.reset(NewBlockBasedTableFactory(table_options));
  options.prefix_extractor.reset(NewFixedPrefixTransform(1));

  Reopen(options);
  ASSERT_OK(Put("k1", "v1"));
  Flush();
  ASSERT_OK(Put("k2", "v2"));

  // Reopen it without prefix extractor, make sure everything still works.
  // RocksDB should just fall back to the binary index.
  table_options.index_type = BlockBasedTableOptions::kBinarySearch;
  options.table_factory.reset(NewBlockBasedTableFactory(table_options));
  options.prefix_extractor.reset();

  Reopen(options);
  ASSERT_EQ("v1", Get("k1"));
  ASSERT_EQ("v2", Get("k2"));
}

TEST_F(DBTest, ChecksumTest) {
  BlockBasedTableOptions table_options;
  Options options = CurrentOptions();

  table_options.checksum = kCRC32c;
  options.table_factory.reset(NewBlockBasedTableFactory(table_options));
  Reopen(options);
  ASSERT_OK(Put("a", "b"));
  ASSERT_OK(Put("c", "d"));
  ASSERT_OK(Flush());  // table with crc checksum

  table_options.checksum = kxxHash;
  options.table_factory.reset(NewBlockBasedTableFactory(table_options));
  Reopen(options);
  ASSERT_OK(Put("e", "f"));
  ASSERT_OK(Put("g", "h"));
  ASSERT_OK(Flush());  // table with xxhash checksum

  table_options.checksum = kCRC32c;
  options.table_factory.reset(NewBlockBasedTableFactory(table_options));
  Reopen(options);
  ASSERT_EQ("b", Get("a"));
  ASSERT_EQ("d", Get("c"));
  ASSERT_EQ("f", Get("e"));
  ASSERT_EQ("h", Get("g"));

  table_options.checksum = kCRC32c;
  options.table_factory.reset(NewBlockBasedTableFactory(table_options));
  Reopen(options);
  ASSERT_EQ("b", Get("a"));
  ASSERT_EQ("d", Get("c"));
  ASSERT_EQ("f", Get("e"));
  ASSERT_EQ("h", Get("g"));
}
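
// The reads above succeed across tables written with different checksum
// types because each SST file records the checksum type it was written
// with; the ChecksumType configured at open only affects newly written
// files.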

TEST_P(DBTestWithParam, FIFOCompactionTest) {
  for (int iter = 0; iter < 2; ++iter) {
    // first iteration -- auto compaction
    // second iteration -- manual compaction
    Options options;
    options.compaction_style = kCompactionStyleFIFO;
    options.write_buffer_size = 100 << 10;                             // 100KB
    options.compaction_options_fifo.max_table_files_size = 500 << 10;  // 500KB
    options.compression = kNoCompression;
    options.create_if_missing = true;
    options.num_subcompactions = num_subcompactions_;
    if (iter == 1) {
      options.disable_auto_compactions = true;
    }
    options = CurrentOptions(options);
    DestroyAndReopen(options);

    Random rnd(301);
    for (int i = 0; i < 6; ++i) {
      for (int j = 0; j < 100; ++j) {
        ASSERT_OK(Put(ToString(i * 100 + j), RandomString(&rnd, 1024)));
      }
      // flush should happen here
      ASSERT_OK(dbfull()->TEST_WaitForFlushMemTable());
    }
    if (iter == 0) {
      ASSERT_OK(dbfull()->TEST_WaitForCompact());
    } else {
      ASSERT_OK(db_->CompactRange(CompactRangeOptions(), nullptr, nullptr));
    }
    // only 5 files should survive
    ASSERT_EQ(NumTableFilesAtLevel(0), 5);
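    // Arithmetic: each outer iteration writes ~100KB (100 values of ~1KB)
    // and flushes one L0 file, so ~600KB lands in 6 files; with
    // max_table_files_size at 500KB, FIFO compaction drops the oldest file,
    // leaving 5. The dropped file held the first 100 keys, hence the
    // NOT_FOUND checks below.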
    for (int i = 0; i < 50; ++i) {
      // these keys should have been deleted by the previous compaction
      ASSERT_EQ("NOT_FOUND", Get(ToString(i)));
    }
  }
}

// verify that we correctly deprecated timeout_hint_us
TEST_F(DBTest, SimpleWriteTimeoutTest) {
  WriteOptions write_opt;
  write_opt.timeout_hint_us = 0;
  ASSERT_OK(Put(Key(1), Key(1) + std::string(100, 'v'), write_opt));
  write_opt.timeout_hint_us = 10;
  ASSERT_NOK(Put(Key(1), Key(1) + std::string(100, 'v'), write_opt));
}
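
// (timeout_hint_us is deprecated; any non-zero value now fails the write up
// front, which is what the ASSERT_NOK above verifies.)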

/*
 * This test is not reliable enough as it heavily depends on disk behavior.
 */
TEST_F(DBTest, RateLimitingTest) {
  Options options = CurrentOptions();
  options.write_buffer_size = 1 << 20;  // 1MB
  options.level0_file_num_compaction_trigger = 2;
  options.target_file_size_base = 1 << 20;     // 1MB
  options.max_bytes_for_level_base = 4 << 20;  // 4MB
  options.max_bytes_for_level_multiplier = 4;
  options.compression = kNoCompression;
  options.create_if_missing = true;
  options.env = env_;
  options.IncreaseParallelism(4);
  DestroyAndReopen(options);

  WriteOptions wo;
  wo.disableWAL = true;

  // # no rate limiting
  Random rnd(301);
  uint64_t start = env_->NowMicros();
  // Write ~96M data
  for (int64_t i = 0; i < (96 << 10); ++i) {
    ASSERT_OK(Put(RandomString(&rnd, 32),
                  RandomString(&rnd, (1 << 10) + 1), wo));
  }
  uint64_t elapsed = env_->NowMicros() - start;
  double raw_rate = env_->bytes_written_ * 1000000 / elapsed;
  Close();
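
  // raw_rate is the unthrottled throughput in bytes/second
  // (bytes_written * 10^6 / elapsed_microseconds). The two runs below
  // repeat the same workload with a rate limiter set to a fraction of
  // raw_rate and check that the measured ratio lands near that fraction.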

  // # rate limiting with 0.7 x threshold
  options.rate_limiter.reset(
      NewGenericRateLimiter(static_cast<int64_t>(0.7 * raw_rate)));
  env_->bytes_written_ = 0;
  DestroyAndReopen(options);

  start = env_->NowMicros();
  // Write ~96M data
  for (int64_t i = 0; i < (96 << 10); ++i) {
    ASSERT_OK(Put(RandomString(&rnd, 32),
                  RandomString(&rnd, (1 << 10) + 1), wo));
  }
  elapsed = env_->NowMicros() - start;
  Close();
  ASSERT_EQ(options.rate_limiter->GetTotalBytesThrough(),
            env_->bytes_written_);
  double ratio = env_->bytes_written_ * 1000000 / elapsed / raw_rate;
  fprintf(stderr, "write rate ratio = %.2lf, expected 0.7\n", ratio);
  ASSERT_TRUE(ratio < 0.8);

  // # rate limiting with half of the raw_rate
  options.rate_limiter.reset(
      NewGenericRateLimiter(static_cast<int64_t>(raw_rate / 2)));
  env_->bytes_written_ = 0;
  DestroyAndReopen(options);

  start = env_->NowMicros();
  // Write ~96M data
  for (int64_t i = 0; i < (96 << 10); ++i) {
    ASSERT_OK(Put(RandomString(&rnd, 32),
                  RandomString(&rnd, (1 << 10) + 1), wo));
  }
  elapsed = env_->NowMicros() - start;
  Close();
  ASSERT_EQ(options.rate_limiter->GetTotalBytesThrough(),
            env_->bytes_written_);
  ratio = env_->bytes_written_ * 1000000 / elapsed / raw_rate;
  fprintf(stderr, "write rate ratio = %.2lf, expected 0.5\n", ratio);
  ASSERT_LT(ratio, 0.6);
}

TEST_F(DBTest, TableOptionsSanitizeTest) {
  Options options = CurrentOptions();
  options.create_if_missing = true;
  DestroyAndReopen(options);
  ASSERT_EQ(db_->GetOptions().allow_mmap_reads, false);

  options.table_factory.reset(new PlainTableFactory());
  options.prefix_extractor.reset(NewNoopTransform());
  Destroy(options);
  ASSERT_TRUE(TryReopen(options).IsNotSupported());

  // Test for check of prefix_extractor when hash index is used for
  // block-based table
  BlockBasedTableOptions to;
  to.index_type = BlockBasedTableOptions::kHashSearch;
  options = CurrentOptions();
  options.create_if_missing = true;
  options.table_factory.reset(NewBlockBasedTableFactory(to));
  ASSERT_TRUE(TryReopen(options).IsInvalidArgument());
  options.prefix_extractor.reset(NewFixedPrefixTransform(1));
  ASSERT_OK(TryReopen(options));
}
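
// (Both failures above come from options sanitization at open: PlainTable
// needs allow_mmap_reads, which defaults to false, and a block-based hash
// index needs a prefix_extractor.)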

TEST_F(DBTest, SanitizeNumThreads) {
  for (int attempt = 0; attempt < 2; attempt++) {
    const size_t kTotalTasks = 8;
    SleepingBackgroundTask sleeping_tasks[kTotalTasks];

    Options options = CurrentOptions();
    if (attempt == 0) {
      options.max_background_compactions = 3;
      options.max_background_flushes = 2;
    }
    options.create_if_missing = true;
    DestroyAndReopen(options);

    for (size_t i = 0; i < kTotalTasks; i++) {
      // Insert 4 tasks into the low priority queue and 4 tasks into the
      // high priority queue
      env_->Schedule(&SleepingBackgroundTask::DoSleepTask, &sleeping_tasks[i],
                     (i < 4) ? Env::Priority::LOW : Env::Priority::HIGH);
    }

    // Wait 100 milliseconds for them to be scheduled.
    env_->SleepForMicroseconds(100000);

    // pool size 3, total tasks 4. Queue size should be 1.
    ASSERT_EQ(1U, options.env->GetThreadPoolQueueLen(Env::Priority::LOW));
    // pool size 2, total tasks 4. Queue size should be 2.
    ASSERT_EQ(2U, options.env->GetThreadPoolQueueLen(Env::Priority::HIGH));

    for (size_t i = 0; i < kTotalTasks; i++) {
      sleeping_tasks[i].WakeUp();
      sleeping_tasks[i].WaitUntilDone();
    }

    ASSERT_OK(Put("abc", "def"));
    ASSERT_EQ("def", Get("abc"));
    Flush();
    ASSERT_EQ("def", Get("abc"));
  }
}

TEST_F(DBTest, DBIteratorBoundTest) {
  Options options = CurrentOptions();
  options.env = env_;
  options.create_if_missing = true;

  options.prefix_extractor = nullptr;
  DestroyAndReopen(options);
  ASSERT_OK(Put("a", "0"));
  ASSERT_OK(Put("foo", "bar"));
  ASSERT_OK(Put("foo1", "bar1"));
  ASSERT_OK(Put("g1", "0"));

  // testing basic case with no iterate_upper_bound and no prefix_extractor
  {
    ReadOptions ro;
    ro.iterate_upper_bound = nullptr;

    std::unique_ptr<Iterator> iter(db_->NewIterator(ro));

    iter->Seek("foo");

    ASSERT_TRUE(iter->Valid());
    ASSERT_EQ(iter->key().compare(Slice("foo")), 0);

    iter->Next();
    ASSERT_TRUE(iter->Valid());
    ASSERT_EQ(iter->key().compare(Slice("foo1")), 0);

    iter->Next();
    ASSERT_TRUE(iter->Valid());
    ASSERT_EQ(iter->key().compare(Slice("g1")), 0);
  }

  // testing iterate_upper_bound and forward iterator
  // to make sure it stops at bound
  {
    ReadOptions ro;
    // iterate_upper_bound points beyond the last expected entry
    Slice prefix("foo2");
    ro.iterate_upper_bound = &prefix;

    std::unique_ptr<Iterator> iter(db_->NewIterator(ro));

    iter->Seek("foo");

    ASSERT_TRUE(iter->Valid());
    ASSERT_EQ(iter->key().compare(Slice("foo")), 0);

    iter->Next();
    ASSERT_TRUE(iter->Valid());
    ASSERT_EQ(iter->key().compare(Slice("foo1")), 0);

    iter->Next();
    // should stop here...
    ASSERT_TRUE(!iter->Valid());
  }

  // Testing SeekToLast with iterate_upper_bound set
  {
    ReadOptions ro;

    Slice prefix("foo");
    ro.iterate_upper_bound = &prefix;

    std::unique_ptr<Iterator> iter(db_->NewIterator(ro));

    iter->SeekToLast();
    ASSERT_TRUE(iter->Valid());
    ASSERT_EQ(iter->key().compare(Slice("a")), 0);
  }

  // prefix is the first letter of the key
  options.prefix_extractor.reset(NewFixedPrefixTransform(1));

  DestroyAndReopen(options);
  ASSERT_OK(Put("a", "0"));
  ASSERT_OK(Put("foo", "bar"));
  ASSERT_OK(Put("foo1", "bar1"));
  ASSERT_OK(Put("g1", "0"));

  // testing with iterate_upper_bound and prefix_extractor
  // Seek target and iterate_upper_bound are not in the same prefix
  // This should be an error
  {
    ReadOptions ro;
    Slice prefix("g1");
    ro.iterate_upper_bound = &prefix;

    std::unique_ptr<Iterator> iter(db_->NewIterator(ro));

    iter->Seek("foo");

    ASSERT_TRUE(!iter->Valid());
    ASSERT_TRUE(iter->status().IsInvalidArgument());
  }

  // testing that iterate_upper_bound prevents iterating over deleted items
  // once the bound has been reached
  {
    options.prefix_extractor = nullptr;
    DestroyAndReopen(options);
    ASSERT_OK(Put("a", "0"));
    ASSERT_OK(Put("b", "0"));
    ASSERT_OK(Put("b1", "0"));
    ASSERT_OK(Put("c", "0"));
    ASSERT_OK(Put("d", "0"));
    ASSERT_OK(Put("e", "0"));
    ASSERT_OK(Delete("c"));
    ASSERT_OK(Delete("d"));

    // base case with no bound
    ReadOptions ro;
    ro.iterate_upper_bound = nullptr;

    std::unique_ptr<Iterator> iter(db_->NewIterator(ro));

    iter->Seek("b");
    ASSERT_TRUE(iter->Valid());
    ASSERT_EQ(iter->key().compare(Slice("b")), 0);

    iter->Next();
    ASSERT_TRUE(iter->Valid());
    ASSERT_EQ(iter->key().compare(Slice("b1")), 0);

    perf_context.Reset();
    iter->Next();

    ASSERT_TRUE(iter->Valid());
    ASSERT_EQ(static_cast<int>(perf_context.internal_delete_skipped_count), 2);

    // now testing with iterate_bound
    Slice prefix("c");
    ro.iterate_upper_bound = &prefix;

    iter.reset(db_->NewIterator(ro));

    perf_context.Reset();

    iter->Seek("b");
    ASSERT_TRUE(iter->Valid());
    ASSERT_EQ(iter->key().compare(Slice("b")), 0);

    iter->Next();
    ASSERT_TRUE(iter->Valid());
    ASSERT_EQ(iter->key().compare(Slice("b1")), 0);

    iter->Next();
    // the iteration should stop as soon as the bound key is reached,
    // even though the key is deleted;
    // hence internal_delete_skipped_count should be 0
    ASSERT_TRUE(!iter->Valid());
    ASSERT_EQ(static_cast<int>(perf_context.internal_delete_skipped_count), 0);
  }
}
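
// (Without an upper bound the iterator has to skip the tombstones for "c"
// and "d" before finding "e", which is what internal_delete_skipped_count
// measures; with the bound at "c" it stops before ever touching them.)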

TEST_F(DBTest, WriteSingleThreadEntry) {
  std::vector<std::thread> threads;
  dbfull()->TEST_LockMutex();
  auto w = dbfull()->TEST_BeginWrite();
  threads.emplace_back([&] { Put("a", "b"); });
  env_->SleepForMicroseconds(10000);
  threads.emplace_back([&] { Flush(); });
  env_->SleepForMicroseconds(10000);
  dbfull()->TEST_UnlockMutex();
  dbfull()->TEST_LockMutex();
  dbfull()->TEST_EndWrite(w);
  dbfull()->TEST_UnlockMutex();

  for (auto& t : threads) {
    t.join();
  }
}

TEST_F(DBTest, DisableDataSyncTest) {
  env_->sync_counter_.store(0);
  // iter 0 -- no sync
  // iter 1 -- sync
  for (int iter = 0; iter < 2; ++iter) {
    Options options = CurrentOptions();
    options.disableDataSync = iter == 0;
    options.create_if_missing = true;
    options.num_levels = 10;
    options.env = env_;
    Reopen(options);
    CreateAndReopenWithCF({"pikachu"}, options);

    MakeTables(10, "a", "z");
    Compact("a", "z");

    if (iter == 0) {
      ASSERT_EQ(env_->sync_counter_.load(), 0);
    } else {
      ASSERT_GT(env_->sync_counter_.load(), 0);
    }
    Destroy(options);
  }
}

TEST_F(DBTest, DynamicMemtableOptions) {
  const uint64_t k64KB = 1 << 16;
  const uint64_t k128KB = 1 << 17;
  const uint64_t k5KB = 5 * 1024;
  Options options;
  options.env = env_;
  options.create_if_missing = true;
  options.compression = kNoCompression;
  options.max_background_compactions = 1;
  options.write_buffer_size = k64KB;
  options.max_write_buffer_number = 2;
  // Don't trigger compact/slowdown/stop
  options.level0_file_num_compaction_trigger = 1024;
  options.level0_slowdown_writes_trigger = 1024;
  options.level0_stop_writes_trigger = 1024;
  DestroyAndReopen(options);

  auto gen_l0_kb = [this](int size) {
    Random rnd(301);
    for (int i = 0; i < size; i++) {
      ASSERT_OK(Put(Key(i), RandomString(&rnd, 1024)));
    }
    dbfull()->TEST_WaitForFlushMemTable();
  };

  // Test write_buffer_size
  gen_l0_kb(64);
  ASSERT_EQ(NumTableFilesAtLevel(0), 1);
  ASSERT_LT(SizeAtLevel(0), k64KB + k5KB);
  ASSERT_GT(SizeAtLevel(0), k64KB - k5KB);

  // Clean up L0
  dbfull()->CompactRange(CompactRangeOptions(), nullptr, nullptr);
  ASSERT_EQ(NumTableFilesAtLevel(0), 0);

  // Increase buffer size
  ASSERT_OK(dbfull()->SetOptions({
      {"write_buffer_size", "131072"},
  }));

  // The existing memtable is still 64KB in size; after it becomes immutable,
  // the next memtable will be 128KB. Write 256KB total and we should end up
  // with a 64KB L0 file, a 128KB L0 file, and a memtable holding ~64KB.
  gen_l0_kb(256);
  ASSERT_EQ(NumTableFilesAtLevel(0), 2);
  ASSERT_LT(SizeAtLevel(0), k128KB + k64KB + 2 * k5KB);
  ASSERT_GT(SizeAtLevel(0), k128KB + k64KB - 2 * k5KB);
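
  // SetOptions() applies dynamically, but only to memtables allocated after
  // the call; the memtable currently accepting writes keeps its old 64KB
  // budget, hence one ~64KB file plus one ~128KB file above. The 2 * k5KB
  // slack absorbs per-file size variance.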

  // Test max_write_buffer_number
  // Block compaction thread, which will also block the flushes because
  // max_background_flushes == 0, so flushes are getting executed by the
  // compaction thread
  env_->SetBackgroundThreads(1, Env::LOW);
  SleepingBackgroundTask sleeping_task_low;
  env_->Schedule(&SleepingBackgroundTask::DoSleepTask, &sleeping_task_low,
                 Env::Priority::LOW);
  // Start from scratch and disable compaction/flush. Flush can only happen
  // during compaction but trigger is pretty high
  options.max_background_flushes = 0;
  options.disable_auto_compactions = true;
  DestroyAndReopen(options);

  // Put until writes are stopped, bounded by 256 puts. We should see stop at
  // ~128KB
  int count = 0;
  Random rnd(301);

  rocksdb::SyncPoint::GetInstance()->SetCallBack(
      "DBImpl::DelayWrite:Wait",
      [&](void* arg) { sleeping_task_low.WakeUp(); });
  rocksdb::SyncPoint::GetInstance()->EnableProcessing();

  while (!sleeping_task_low.WokenUp() && count < 256) {
    ASSERT_OK(Put(Key(count), RandomString(&rnd, 1024), WriteOptions()));
    count++;
  }
  ASSERT_GT(static_cast<double>(count), 128 * 0.8);
  ASSERT_LT(static_cast<double>(count), 128 * 1.2);
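
  // The options struct still carries write_buffer_size = 64KB (the earlier
  // SetOptions() change died with the destroyed DB) and
  // max_write_buffer_number == 2, so with flushes blocked writes stall once
  // both memtables fill: ~128 puts of ~1KB each.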

  sleeping_task_low.WaitUntilDone();

  // Increase
  ASSERT_OK(dbfull()->SetOptions({
      {"max_write_buffer_number", "8"},
  }));
  // Clean up memtable and L0
  dbfull()->CompactRange(CompactRangeOptions(), nullptr, nullptr);

  sleeping_task_low.Reset();
  env_->Schedule(&SleepingBackgroundTask::DoSleepTask, &sleeping_task_low,
                 Env::Priority::LOW);
  count = 0;
  while (!sleeping_task_low.WokenUp() && count < 1024) {
    ASSERT_OK(Put(Key(count), RandomString(&rnd, 1024), WriteOptions()));
    count++;
  }
  // Windows fails this test. Will tune in the future and figure out an
  // appropriate number
#ifndef OS_WIN
  ASSERT_GT(static_cast<double>(count), 512 * 0.8);
  ASSERT_LT(static_cast<double>(count), 512 * 1.2);
#endif

  sleeping_task_low.WaitUntilDone();

  // Decrease
  ASSERT_OK(dbfull()->SetOptions({
      {"max_write_buffer_number", "4"},
  }));
  // Clean up memtable and L0
  dbfull()->CompactRange(CompactRangeOptions(), nullptr, nullptr);

  sleeping_task_low.Reset();
  env_->Schedule(&SleepingBackgroundTask::DoSleepTask, &sleeping_task_low,
                 Env::Priority::LOW);

  count = 0;
  while (!sleeping_task_low.WokenUp() && count < 1024) {
    ASSERT_OK(Put(Key(count), RandomString(&rnd, 1024), WriteOptions()));
    count++;
  }
  // Windows fails this test. Will tune in the future and figure out an
  // appropriate number
#ifndef OS_WIN
  ASSERT_GT(static_cast<double>(count), 256 * 0.8);
  ASSERT_LT(static_cast<double>(count), 256 * 1.2);
#endif
  sleeping_task_low.WaitUntilDone();

  rocksdb::SyncPoint::GetInstance()->DisableProcessing();
}

#if ROCKSDB_USING_THREAD_STATUS
namespace {
void VerifyOperationCount(Env* env, ThreadStatus::OperationType op_type,
                          int expected_count) {
  int op_count = 0;
  std::vector<ThreadStatus> thread_list;
  ASSERT_OK(env->GetThreadList(&thread_list));
  for (auto thread : thread_list) {
    if (thread.operation_type == op_type) {
      op_count++;
    }
  }
  ASSERT_EQ(op_count, expected_count);
}
}  // namespace

TEST_F(DBTest, GetThreadStatus) {
  Options options;
  options.env = env_;
  options.enable_thread_tracking = true;
  TryReopen(options);

  std::vector<ThreadStatus> thread_list;
  Status s = env_->GetThreadList(&thread_list);
|
2014-11-20 18:49:32 +00:00
|
|
|
|
|
|
|
for (int i = 0; i < 2; ++i) {
|
|
|
|
// repeat the test with differet number of high / low priority threads
|
|
|
|
const int kTestCount = 3;
|
|
|
|
const unsigned int kHighPriCounts[kTestCount] = {3, 2, 5};
|
|
|
|
const unsigned int kLowPriCounts[kTestCount] = {10, 15, 3};
|
|
|
|
for (int test = 0; test < kTestCount; ++test) {
|
|
|
|
// Change the number of threads in high / low priority pool.
|
|
|
|
env_->SetBackgroundThreads(kHighPriCounts[test], Env::HIGH);
|
|
|
|
env_->SetBackgroundThreads(kLowPriCounts[test], Env::LOW);
|
|
|
|
// Wait to ensure the all threads has been registered
|
|
|
|
env_->SleepForMicroseconds(100000);
|
2014-12-22 20:20:17 +00:00
|
|
|
s = env_->GetThreadList(&thread_list);
|
2014-11-20 18:49:32 +00:00
|
|
|
ASSERT_OK(s);
|
2014-12-30 18:39:13 +00:00
|
|
|
unsigned int thread_type_counts[ThreadStatus::NUM_THREAD_TYPES];
|
2014-11-20 18:49:32 +00:00
|
|
|
memset(thread_type_counts, 0, sizeof(thread_type_counts));
|
|
|
|
for (auto thread : thread_list) {
|
2014-12-30 18:39:13 +00:00
|
|
|
ASSERT_LT(thread.thread_type, ThreadStatus::NUM_THREAD_TYPES);
|
2014-11-20 18:49:32 +00:00
|
|
|
thread_type_counts[thread.thread_type]++;
|
|
|
|
}
|
|
|
|
// Verify the total number of threades
|
|
|
|
ASSERT_EQ(
|
2015-01-13 08:38:09 +00:00
|
|
|
thread_type_counts[ThreadStatus::HIGH_PRIORITY] +
|
|
|
|
thread_type_counts[ThreadStatus::LOW_PRIORITY],
|
2014-11-20 18:49:32 +00:00
|
|
|
kHighPriCounts[test] + kLowPriCounts[test]);
|
|
|
|
// Verify the number of high-priority threads
|
|
|
|
ASSERT_EQ(
|
2014-12-30 18:39:13 +00:00
|
|
|
thread_type_counts[ThreadStatus::HIGH_PRIORITY],
|
2014-11-20 18:49:32 +00:00
|
|
|
kHighPriCounts[test]);
|
|
|
|
// Verify the number of low-priority threads
|
|
|
|
ASSERT_EQ(
|
2014-12-30 18:39:13 +00:00
|
|
|
thread_type_counts[ThreadStatus::LOW_PRIORITY],
|
2014-11-20 18:49:32 +00:00
|
|
|
kLowPriCounts[test]);
|
|
|
|
}
|
|
|
|
if (i == 0) {
|
|
|
|
// repeat the test with multiple column families
|
|
|
|
CreateAndReopenWithCF({"pikachu", "about-to-remove"}, options);
|
2014-12-22 20:20:17 +00:00
|
|
|
env_->GetThreadStatusUpdater()->TEST_VerifyColumnFamilyInfoMap(
|
|
|
|
handles_, true);
|
2014-11-20 18:49:32 +00:00
|
|
|
}
|
|
|
|
}
|
|
|
|
db_->DropColumnFamily(handles_[2]);
|
2014-11-22 08:04:41 +00:00
|
|
|
delete handles_[2];
|
2014-11-20 18:49:32 +00:00
|
|
|
handles_.erase(handles_.begin() + 2);
|
2014-12-22 20:20:17 +00:00
|
|
|
env_->GetThreadStatusUpdater()->TEST_VerifyColumnFamilyInfoMap(
|
|
|
|
handles_, true);
|
2014-11-20 18:49:32 +00:00
|
|
|
Close();
|
2014-12-22 20:20:17 +00:00
|
|
|
env_->GetThreadStatusUpdater()->TEST_VerifyColumnFamilyInfoMap(
|
|
|
|
handles_, true);
|
2014-11-21 05:13:18 +00:00
|
|
|
}

TEST_F(DBTest, DisableThreadStatus) {
  Options options;
  options.env = env_;
  options.enable_thread_tracking = false;
  TryReopen(options);
  CreateAndReopenWithCF({"pikachu", "about-to-remove"}, options);
  // Verify that none of the column family info exists.
  env_->GetThreadStatusUpdater()->TEST_VerifyColumnFamilyInfoMap(handles_,
                                                                 false);
}

TEST_F(DBTest, ThreadStatusFlush) {
  Options options;
  options.env = env_;
  options.write_buffer_size = 100000;  // Small write buffer
  options.enable_thread_tracking = true;
  options = CurrentOptions(options);

  rocksdb::SyncPoint::GetInstance()->LoadDependency({
      {"FlushJob::FlushJob()", "DBTest::ThreadStatusFlush:1"},
      {"DBTest::ThreadStatusFlush:2", "FlushJob::~FlushJob()"},
  });
  rocksdb::SyncPoint::GetInstance()->EnableProcessing();

  CreateAndReopenWithCF({"pikachu"}, options);
  VerifyOperationCount(env_, ThreadStatus::OP_FLUSH, 0);

  ASSERT_OK(Put(1, "foo", "v1"));
  ASSERT_EQ("v1", Get(1, "foo"));
  VerifyOperationCount(env_, ThreadStatus::OP_FLUSH, 0);

  Put(1, "k1", std::string(100000, 'x'));  // Fill memtable
  VerifyOperationCount(env_, ThreadStatus::OP_FLUSH, 0);
  Put(1, "k2", std::string(100000, 'y'));  // Trigger flush
  // Wait for the flush to be scheduled.
  env_->SleepForMicroseconds(250000);
  TEST_SYNC_POINT("DBTest::ThreadStatusFlush:1");
  VerifyOperationCount(env_, ThreadStatus::OP_FLUSH, 1);
  TEST_SYNC_POINT("DBTest::ThreadStatusFlush:2");

  rocksdb::SyncPoint::GetInstance()->DisableProcessing();
}
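
// A minimal sketch of the SyncPoint handshake the test above relies on
// (illustrative; the point names "A" and "B" are hypothetical placeholders):
//
//   // Each {pred, succ} pair makes TEST_SYNC_POINT(succ) block until
//   // TEST_SYNC_POINT(pred) has fired on some other thread.
//   rocksdb::SyncPoint::GetInstance()->LoadDependency({{"A", "B"}});
//   rocksdb::SyncPoint::GetInstance()->EnableProcessing();
//   // ... thread 1 eventually executes TEST_SYNC_POINT("A") ...
//   TEST_SYNC_POINT("B");  // thread 2 waits here until "A" has fired
//   rocksdb::SyncPoint::GetInstance()->DisableProcessing();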

TEST_P(DBTestWithParam, ThreadStatusSingleCompaction) {
  const int kTestKeySize = 16;
  const int kTestValueSize = 984;
  const int kEntrySize = kTestKeySize + kTestValueSize;
  const int kEntriesPerBuffer = 100;
  Options options;
  options.create_if_missing = true;
  options.write_buffer_size = kEntrySize * kEntriesPerBuffer;
  options.compaction_style = kCompactionStyleLevel;
  options.target_file_size_base = options.write_buffer_size;
  options.max_bytes_for_level_base = options.target_file_size_base * 2;
  options.max_bytes_for_level_multiplier = 2;
  options.compression = kNoCompression;
  options = CurrentOptions(options);
  options.env = env_;
  options.enable_thread_tracking = true;
  const int kNumL0Files = 4;
  options.level0_file_num_compaction_trigger = kNumL0Files;
  options.num_subcompactions = num_subcompactions_;

  rocksdb::SyncPoint::GetInstance()->LoadDependency({
      {"DBTest::ThreadStatusSingleCompaction:0", "DBImpl::BGWorkCompaction"},
      {"CompactionJob::Run():Start", "DBTest::ThreadStatusSingleCompaction:1"},
      {"DBTest::ThreadStatusSingleCompaction:2", "CompactionJob::Run():End"},
  });
  for (int tests = 0; tests < 2; ++tests) {
    DestroyAndReopen(options);
    rocksdb::SyncPoint::GetInstance()->ClearTrace();
    rocksdb::SyncPoint::GetInstance()->EnableProcessing();

    Random rnd(301);
    // The Put Phase.
    for (int file = 0; file < kNumL0Files; ++file) {
      for (int key = 0; key < kEntriesPerBuffer; ++key) {
        ASSERT_OK(Put(ToString(key + file * kEntriesPerBuffer),
                      RandomString(&rnd, kTestValueSize)));
      }
      Flush();
    }
    // This makes sure a compaction won't be scheduled until
    // we are done with the above Put Phase.
    TEST_SYNC_POINT("DBTest::ThreadStatusSingleCompaction:0");
    ASSERT_GE(NumTableFilesAtLevel(0),
              options.level0_file_num_compaction_trigger);

    // This makes sure at least one compaction is running.
    TEST_SYNC_POINT("DBTest::ThreadStatusSingleCompaction:1");

    if (options.enable_thread_tracking) {
      // Expecting one single L0 to L1 compaction.
      VerifyOperationCount(env_, ThreadStatus::OP_COMPACTION, 1);
    } else {
      // If thread tracking is not enabled, compaction count should be 0.
      VerifyOperationCount(env_, ThreadStatus::OP_COMPACTION, 0);
    }
    // TODO(yhchiang): adding assert to verify each compaction stage.
    TEST_SYNC_POINT("DBTest::ThreadStatusSingleCompaction:2");

    // Repeat the test with thread tracking disabled.
    options.enable_thread_tracking = false;
    rocksdb::SyncPoint::GetInstance()->DisableProcessing();
  }
}
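
// Sizing note for the constants above (a worked example from the constants,
// not new behavior): each entry occupies roughly kTestKeySize +
// kTestValueSize = 16 + 984 = 1000 bytes, so write_buffer_size =
// kEntrySize * kEntriesPerBuffer holds about kEntriesPerBuffer entries per
// memtable, and each Flush() in the Put Phase should produce approximately
// one target-file-sized L0 file.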

TEST_P(DBTestWithParam, PreShutdownManualCompaction) {
  Options options = CurrentOptions();
  options.max_background_flushes = 0;
  options.num_subcompactions = num_subcompactions_;
  CreateAndReopenWithCF({"pikachu"}, options);

  // iter - 0 with 7 levels
  // iter - 1 with 3 levels
  for (int iter = 0; iter < 2; ++iter) {
    MakeTables(3, "p", "q", 1);
    ASSERT_EQ("1,1,1", FilesPerLevel(1));

    // Compaction range falls before files
    Compact(1, "", "c");
    ASSERT_EQ("1,1,1", FilesPerLevel(1));

    // Compaction range falls after files
    Compact(1, "r", "z");
    ASSERT_EQ("1,1,1", FilesPerLevel(1));

    // Compaction range overlaps files
    Compact(1, "p1", "p9");
    ASSERT_EQ("0,0,1", FilesPerLevel(1));

    // Populate a different range
    MakeTables(3, "c", "e", 1);
    ASSERT_EQ("1,1,2", FilesPerLevel(1));

    // Compact just the new range
    Compact(1, "b", "f");
    ASSERT_EQ("0,0,2", FilesPerLevel(1));

    // Compact all
    MakeTables(1, "a", "z", 1);
    ASSERT_EQ("1,0,2", FilesPerLevel(1));
    CancelAllBackgroundWork(db_);
    db_->CompactRange(CompactRangeOptions(), handles_[1], nullptr, nullptr);
    ASSERT_EQ("1,0,2", FilesPerLevel(1));

    if (iter == 0) {
      options = CurrentOptions();
      options.max_background_flushes = 0;
      options.num_levels = 3;
      options.create_if_missing = true;
      DestroyAndReopen(options);
      CreateAndReopenWithCF({"pikachu"}, options);
    }
  }
}

TEST_F(DBTest, PreShutdownFlush) {
  Options options = CurrentOptions();
  options.max_background_flushes = 0;
  CreateAndReopenWithCF({"pikachu"}, options);
  ASSERT_OK(Put(1, "key", "value"));
  CancelAllBackgroundWork(db_);
  Status s =
      db_->CompactRange(CompactRangeOptions(), handles_[1], nullptr, nullptr);
  ASSERT_TRUE(s.IsShutdownInProgress());
}

TEST_P(DBTestWithParam, PreShutdownMultipleCompaction) {
  const int kTestKeySize = 16;
  const int kTestValueSize = 984;
  const int kEntrySize = kTestKeySize + kTestValueSize;
  const int kEntriesPerBuffer = 40;
  const int kNumL0Files = 4;

  const int kHighPriCount = 3;
  const int kLowPriCount = 5;
  env_->SetBackgroundThreads(kHighPriCount, Env::HIGH);
  env_->SetBackgroundThreads(kLowPriCount, Env::LOW);

  Options options;
  options.create_if_missing = true;
  options.write_buffer_size = kEntrySize * kEntriesPerBuffer;
  options.compaction_style = kCompactionStyleLevel;
  options.target_file_size_base = options.write_buffer_size;
  options.max_bytes_for_level_base =
      options.target_file_size_base * kNumL0Files;
  options.compression = kNoCompression;
  options = CurrentOptions(options);
  options.env = env_;
  options.enable_thread_tracking = true;
  options.level0_file_num_compaction_trigger = kNumL0Files;
  options.max_bytes_for_level_multiplier = 2;
  options.max_background_compactions = kLowPriCount;
  options.level0_stop_writes_trigger = 1 << 10;
  options.level0_slowdown_writes_trigger = 1 << 10;
  options.num_subcompactions = num_subcompactions_;

  TryReopen(options);
  Random rnd(301);

  std::vector<ThreadStatus> thread_list;
  // Delay both flush and compaction
  rocksdb::SyncPoint::GetInstance()->LoadDependency(
      {{"FlushJob::FlushJob()", "CompactionJob::Run():Start"},
       {"CompactionJob::Run():Start",
        "DBTest::PreShutdownMultipleCompaction:Preshutdown"},
       {"CompactionJob::Run():Start",
        "DBTest::PreShutdownMultipleCompaction:VerifyCompaction"},
       {"DBTest::PreShutdownMultipleCompaction:Preshutdown",
        "CompactionJob::Run():End"},
       {"CompactionJob::Run():End",
        "DBTest::PreShutdownMultipleCompaction:VerifyPreshutdown"}});

  rocksdb::SyncPoint::GetInstance()->EnableProcessing();

  // Make RocksDB busy.
  int key = 0;
  // Check how many threads are doing compaction using GetThreadList.
  int operation_count[ThreadStatus::NUM_OP_TYPES] = {0};
  for (int file = 0; file < 16 * kNumL0Files; ++file) {
    for (int k = 0; k < kEntriesPerBuffer; ++k) {
      ASSERT_OK(Put(ToString(key++), RandomString(&rnd, kTestValueSize)));
    }

    Status s = env_->GetThreadList(&thread_list);
    for (auto thread : thread_list) {
      operation_count[thread.operation_type]++;
    }

    // Speed up the test
    if (operation_count[ThreadStatus::OP_FLUSH] > 1 &&
        operation_count[ThreadStatus::OP_COMPACTION] >
            0.6 * options.max_background_compactions) {
      break;
    }
    if (file == 15 * kNumL0Files) {
      TEST_SYNC_POINT("DBTest::PreShutdownMultipleCompaction:Preshutdown");
    }
  }

  TEST_SYNC_POINT("DBTest::PreShutdownMultipleCompaction:Preshutdown");
  ASSERT_GE(operation_count[ThreadStatus::OP_COMPACTION], 1);
  CancelAllBackgroundWork(db_);
  TEST_SYNC_POINT("DBTest::PreShutdownMultipleCompaction:VerifyPreshutdown");
  dbfull()->TEST_WaitForCompact();
  // Reset the operation counters and verify no compaction is left running.
  for (int i = 0; i < ThreadStatus::NUM_OP_TYPES; ++i) {
    operation_count[i] = 0;
  }
  Status s = env_->GetThreadList(&thread_list);
  for (auto thread : thread_list) {
    operation_count[thread.operation_type]++;
  }
  ASSERT_EQ(operation_count[ThreadStatus::OP_COMPACTION], 0);
}

TEST_P(DBTestWithParam, PreShutdownCompactionMiddle) {
  const int kTestKeySize = 16;
  const int kTestValueSize = 984;
  const int kEntrySize = kTestKeySize + kTestValueSize;
  const int kEntriesPerBuffer = 40;
  const int kNumL0Files = 4;

  const int kHighPriCount = 3;
  const int kLowPriCount = 5;
  env_->SetBackgroundThreads(kHighPriCount, Env::HIGH);
  env_->SetBackgroundThreads(kLowPriCount, Env::LOW);

  Options options;
  options.create_if_missing = true;
  options.write_buffer_size = kEntrySize * kEntriesPerBuffer;
  options.compaction_style = kCompactionStyleLevel;
  options.target_file_size_base = options.write_buffer_size;
  options.max_bytes_for_level_base =
      options.target_file_size_base * kNumL0Files;
  options.compression = kNoCompression;
  options = CurrentOptions(options);
  options.env = env_;
  options.enable_thread_tracking = true;
  options.level0_file_num_compaction_trigger = kNumL0Files;
  options.max_bytes_for_level_multiplier = 2;
  options.max_background_compactions = kLowPriCount;
  options.level0_stop_writes_trigger = 1 << 10;
  options.level0_slowdown_writes_trigger = 1 << 10;
  options.num_subcompactions = num_subcompactions_;

  TryReopen(options);
  Random rnd(301);

  std::vector<ThreadStatus> thread_list;
  // Delay both flush and compaction
  rocksdb::SyncPoint::GetInstance()->LoadDependency(
      {{"DBTest::PreShutdownCompactionMiddle:Preshutdown",
        "CompactionJob::Run():Inprogress"},
       {"CompactionJob::Run():Start",
        "DBTest::PreShutdownCompactionMiddle:VerifyCompaction"},
       {"CompactionJob::Run():Inprogress", "CompactionJob::Run():End"},
       {"CompactionJob::Run():End",
        "DBTest::PreShutdownCompactionMiddle:VerifyPreshutdown"}});

  rocksdb::SyncPoint::GetInstance()->EnableProcessing();

  // Make RocksDB busy.
  int key = 0;
  // Check how many threads are doing compaction using GetThreadList.
  int operation_count[ThreadStatus::NUM_OP_TYPES] = {0};
  for (int file = 0; file < 16 * kNumL0Files; ++file) {
    for (int k = 0; k < kEntriesPerBuffer; ++k) {
      ASSERT_OK(Put(ToString(key++), RandomString(&rnd, kTestValueSize)));
    }

    Status s = env_->GetThreadList(&thread_list);
    for (auto thread : thread_list) {
      operation_count[thread.operation_type]++;
    }

    // Speed up the test
    if (operation_count[ThreadStatus::OP_FLUSH] > 1 &&
        operation_count[ThreadStatus::OP_COMPACTION] >
            0.6 * options.max_background_compactions) {
      break;
    }
    if (file == 15 * kNumL0Files) {
      TEST_SYNC_POINT("DBTest::PreShutdownCompactionMiddle:VerifyCompaction");
    }
  }

  ASSERT_GE(operation_count[ThreadStatus::OP_COMPACTION], 1);
  CancelAllBackgroundWork(db_);
  TEST_SYNC_POINT("DBTest::PreShutdownCompactionMiddle:Preshutdown");
  TEST_SYNC_POINT("DBTest::PreShutdownCompactionMiddle:VerifyPreshutdown");
  dbfull()->TEST_WaitForCompact();
  // Reset the operation counters and verify no compaction is left running.
  for (int i = 0; i < ThreadStatus::NUM_OP_TYPES; ++i) {
    operation_count[i] = 0;
  }
  Status s = env_->GetThreadList(&thread_list);
  for (auto thread : thread_list) {
    operation_count[thread.operation_type]++;
  }
  ASSERT_EQ(operation_count[ThreadStatus::OP_COMPACTION], 0);
}

#endif  // ROCKSDB_USING_THREAD_STATUS

TEST_F(DBTest, FlushOnDestroy) {
  WriteOptions wo;
  wo.disableWAL = true;
  ASSERT_OK(Put("foo", "v1", wo));
  CancelAllBackgroundWork(db_);
}

namespace {
class OnFileDeletionListener : public EventListener {
 public:
  OnFileDeletionListener() : matched_count_(0), expected_file_name_("") {}

  void SetExpectedFileName(const std::string& file_name) {
    expected_file_name_ = file_name;
  }

  void VerifyMatchedCount(size_t expected_value) {
    ASSERT_EQ(matched_count_, expected_value);
  }

  void OnTableFileDeleted(const TableFileDeletionInfo& info) override {
    if (expected_file_name_ != "") {
      ASSERT_EQ(expected_file_name_, info.file_path);
      expected_file_name_ = "";
      matched_count_++;
    }
  }

 private:
  size_t matched_count_;
  std::string expected_file_name_;
};
}  // namespace
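
// A minimal registration sketch for the listener above (illustrative; the
// DB path is a placeholder):
//
//   Options opts;
//   opts.create_if_missing = true;
//   auto* listener = new OnFileDeletionListener();
//   opts.listeners.emplace_back(listener);  // shared_ptr takes ownership
//   DB* db;
//   Status s = DB::Open(opts, "/tmp/listener_demo", &db);
//   // RocksDB will now invoke OnTableFileDeleted() whenever it deletes an
//   // SST file, e.g. after DeleteFile() or an obsolete-file purge.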

TEST_F(DBTest, DynamicLevelCompressionPerLevel) {
  if (!Snappy_Supported()) {
    return;
  }
  const int kNKeys = 120;
  int keys[kNKeys];
  for (int i = 0; i < kNKeys; i++) {
    keys[i] = i;
  }
  std::random_shuffle(std::begin(keys), std::end(keys));

  Random rnd(301);
  Options options;
  options.create_if_missing = true;
  options.db_write_buffer_size = 20480;
  options.write_buffer_size = 20480;
  options.max_write_buffer_number = 2;
  options.level0_file_num_compaction_trigger = 2;
  options.level0_slowdown_writes_trigger = 2;
  options.level0_stop_writes_trigger = 2;
  options.target_file_size_base = 2048;
  options.level_compaction_dynamic_level_bytes = true;
  options.max_bytes_for_level_base = 102400;
  options.max_bytes_for_level_multiplier = 4;
  options.max_background_compactions = 1;
  options.num_levels = 5;

  options.compression_per_level.resize(3);
  options.compression_per_level[0] = kNoCompression;
  options.compression_per_level[1] = kNoCompression;
  options.compression_per_level[2] = kSnappyCompression;

  OnFileDeletionListener* listener = new OnFileDeletionListener();
  options.listeners.emplace_back(listener);

  DestroyAndReopen(options);

  // Insert more than 80KB. L4 should be the base level. Neither L0 nor L4
  // should be compressed, so the total data size should be more than 80KB.
  for (int i = 0; i < 20; i++) {
    ASSERT_OK(Put(Key(keys[i]), CompressibleString(&rnd, 4000)));
  }
  Flush();
  dbfull()->TEST_WaitForCompact();

  ASSERT_EQ(NumTableFilesAtLevel(1), 0);
  ASSERT_EQ(NumTableFilesAtLevel(2), 0);
  ASSERT_EQ(NumTableFilesAtLevel(3), 0);
  ASSERT_GT(SizeAtLevel(0) + SizeAtLevel(4), 20U * 4000U);

  // Insert 400KB. Some data will be compressed.
  for (int i = 21; i < 120; i++) {
    ASSERT_OK(Put(Key(keys[i]), CompressibleString(&rnd, 4000)));
  }
  Flush();
  dbfull()->TEST_WaitForCompact();
  ASSERT_EQ(NumTableFilesAtLevel(1), 0);
  ASSERT_EQ(NumTableFilesAtLevel(2), 0);
  ASSERT_LT(SizeAtLevel(0) + SizeAtLevel(3) + SizeAtLevel(4), 120U * 4000U);
  // Make sure the data in the L3 files is not compacted away by removing all
  // files in L4, then count the number of rows that remain.
  ASSERT_OK(dbfull()->SetOptions({
      {"disable_auto_compactions", "true"},
  }));
  ColumnFamilyMetaData cf_meta;
  db_->GetColumnFamilyMetaData(&cf_meta);
  for (auto file : cf_meta.levels[4].files) {
    listener->SetExpectedFileName(dbname_ + file.name);
    ASSERT_OK(dbfull()->DeleteFile(file.name));
  }
  listener->VerifyMatchedCount(cf_meta.levels[4].files.size());

  int num_keys = 0;
  std::unique_ptr<Iterator> iter(db_->NewIterator(ReadOptions()));
  for (iter->SeekToFirst(); iter->Valid(); iter->Next()) {
    num_keys++;
  }
  ASSERT_OK(iter->status());
  ASSERT_GT(SizeAtLevel(0) + SizeAtLevel(3), num_keys * 4000U);
}
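
// How compression_per_level interacts with dynamic level bytes in the test
// above (a sketch of the configuration, under the same assumptions the
// test's assertions make: num_levels = 5, so the base level starts at L4):
//
//   Options opts;
//   opts.num_levels = 5;
//   opts.level_compaction_dynamic_level_bytes = true;
//   opts.compression_per_level.resize(3);
//   opts.compression_per_level[0] = kNoCompression;      // L0
//   opts.compression_per_level[1] = kNoCompression;      // base level
//   opts.compression_per_level[2] = kSnappyCompression;  // below base level
//   // With dynamic levels the per-level entries apply relative to the
//   // current base level rather than to fixed level numbers, which is
//   // consistent with the test seeing L4 uncompressed while it is the base.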

TEST_F(DBTest, DynamicLevelCompressionPerLevel2) {
  if (!Snappy_Supported() || !LZ4_Supported() || !Zlib_Supported()) {
    return;
  }
  const int kNKeys = 500;
  int keys[kNKeys];
  for (int i = 0; i < kNKeys; i++) {
    keys[i] = i;
  }
  std::random_shuffle(std::begin(keys), std::end(keys));

  Random rnd(301);
  Options options;
  options.create_if_missing = true;
  options.db_write_buffer_size = 6000;
  options.write_buffer_size = 6000;
  options.max_write_buffer_number = 2;
  options.level0_file_num_compaction_trigger = 2;
  options.level0_slowdown_writes_trigger = 2;
  options.level0_stop_writes_trigger = 2;
  options.soft_rate_limit = 1.1;

  // Use file size to distinguish levels:
  // L1: 10, L2: 20, L3: 40, L4: 80
  // L0 is less than 30
  options.target_file_size_base = 10;
  options.target_file_size_multiplier = 2;

  options.level_compaction_dynamic_level_bytes = true;
  options.max_bytes_for_level_base = 200;
  options.max_bytes_for_level_multiplier = 8;
  options.max_background_compactions = 1;
  options.num_levels = 5;
  std::shared_ptr<mock::MockTableFactory> mtf(new mock::MockTableFactory);
  options.table_factory = mtf;

  options.compression_per_level.resize(3);
  options.compression_per_level[0] = kNoCompression;
  options.compression_per_level[1] = kLZ4Compression;
  options.compression_per_level[2] = kZlibCompression;

  DestroyAndReopen(options);
  // When the base level is L4, L4 is compressed with LZ4.
  std::atomic<int> num_zlib(0);
  std::atomic<int> num_lz4(0);
  std::atomic<int> num_no(0);
  rocksdb::SyncPoint::GetInstance()->SetCallBack(
      "LevelCompactionPicker::PickCompaction:Return", [&](void* arg) {
        Compaction* compaction = reinterpret_cast<Compaction*>(arg);
        if (compaction->output_level() == 4) {
          ASSERT_TRUE(compaction->output_compression() == kLZ4Compression);
          num_lz4.fetch_add(1);
        }
      });
  rocksdb::SyncPoint::GetInstance()->SetCallBack(
      "FlushJob::WriteLevel0Table:output_compression", [&](void* arg) {
        auto* compression = reinterpret_cast<CompressionType*>(arg);
        ASSERT_TRUE(*compression == kNoCompression);
        num_no.fetch_add(1);
      });
  rocksdb::SyncPoint::GetInstance()->EnableProcessing();

  for (int i = 0; i < 100; i++) {
    ASSERT_OK(Put(Key(keys[i]), RandomString(&rnd, 200)));
  }
  Flush();
  dbfull()->TEST_WaitForCompact();
  rocksdb::SyncPoint::GetInstance()->DisableProcessing();
  rocksdb::SyncPoint::GetInstance()->ClearAllCallBacks();

  ASSERT_EQ(NumTableFilesAtLevel(1), 0);
  ASSERT_EQ(NumTableFilesAtLevel(2), 0);
  ASSERT_EQ(NumTableFilesAtLevel(3), 0);
  ASSERT_GT(NumTableFilesAtLevel(4), 0);
  ASSERT_GT(num_no.load(), 2);
  ASSERT_GT(num_lz4.load(), 0);
  int prev_num_files_l4 = NumTableFilesAtLevel(4);

  // After the base level turns from L4 to L3, L3 becomes LZ4-compressed
  // and L4 becomes Zlib-compressed.
  num_lz4.store(0);
  num_no.store(0);
  rocksdb::SyncPoint::GetInstance()->SetCallBack(
      "LevelCompactionPicker::PickCompaction:Return", [&](void* arg) {
        Compaction* compaction = reinterpret_cast<Compaction*>(arg);
        if (compaction->output_level() == 4 &&
            compaction->start_level() == 3) {
          ASSERT_TRUE(compaction->output_compression() == kZlibCompression);
          num_zlib.fetch_add(1);
        } else {
          ASSERT_TRUE(compaction->output_compression() == kLZ4Compression);
          num_lz4.fetch_add(1);
        }
      });
  rocksdb::SyncPoint::GetInstance()->SetCallBack(
      "FlushJob::WriteLevel0Table:output_compression", [&](void* arg) {
        auto* compression = reinterpret_cast<CompressionType*>(arg);
        ASSERT_TRUE(*compression == kNoCompression);
        num_no.fetch_add(1);
      });
  rocksdb::SyncPoint::GetInstance()->EnableProcessing();

  for (int i = 101; i < 500; i++) {
    ASSERT_OK(Put(Key(keys[i]), RandomString(&rnd, 200)));
    if (i % 100 == 99) {
      Flush();
      dbfull()->TEST_WaitForCompact();
    }
  }

  rocksdb::SyncPoint::GetInstance()->ClearAllCallBacks();
  rocksdb::SyncPoint::GetInstance()->DisableProcessing();
  ASSERT_EQ(NumTableFilesAtLevel(1), 0);
  ASSERT_EQ(NumTableFilesAtLevel(2), 0);
  ASSERT_GT(NumTableFilesAtLevel(3), 0);
  ASSERT_GT(NumTableFilesAtLevel(4), prev_num_files_l4);
  ASSERT_GT(num_no.load(), 2);
  ASSERT_GT(num_lz4.load(), 0);
  ASSERT_GT(num_zlib.load(), 0);
}
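
// A minimal sketch of the SyncPoint::SetCallBack pattern used above
// (illustrative; "SomeComponent::SomePoint" is a hypothetical point name):
//
//   std::atomic<int> hits(0);
//   rocksdb::SyncPoint::GetInstance()->SetCallBack(
//       "SomeComponent::SomePoint", [&](void* arg) {
//         // arg is whatever pointer the instrumented code passed in; it
//         // must be cast back to the type that sync point documents.
//         hits.fetch_add(1);
//       });
//   rocksdb::SyncPoint::GetInstance()->EnableProcessing();
//   // ... exercise the DB ...
//   rocksdb::SyncPoint::GetInstance()->DisableProcessing();
//   rocksdb::SyncPoint::GetInstance()->ClearAllCallBacks();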

TEST_P(DBTestWithParam, DynamicCompactionOptions) {
  // minimum write buffer size is enforced at 64KB
  const uint64_t k32KB = 1 << 15;
  const uint64_t k64KB = 1 << 16;
  const uint64_t k128KB = 1 << 17;
  const uint64_t k1MB = 1 << 20;
  const uint64_t k4KB = 1 << 12;
  Options options;
  options.env = env_;
  options.create_if_missing = true;
  options.compression = kNoCompression;
  options.soft_rate_limit = 1.1;
  options.write_buffer_size = k64KB;
  options.max_write_buffer_number = 2;
  // Compaction related options
  options.level0_file_num_compaction_trigger = 3;
  options.level0_slowdown_writes_trigger = 4;
  options.level0_stop_writes_trigger = 8;
  options.max_grandparent_overlap_factor = 10;
  options.expanded_compaction_factor = 25;
  options.source_compaction_factor = 1;
  options.target_file_size_base = k64KB;
  options.target_file_size_multiplier = 1;
  options.max_bytes_for_level_base = k128KB;
  options.max_bytes_for_level_multiplier = 4;
  options.num_subcompactions = num_subcompactions_;

  // Block flush thread and disable compaction thread
  env_->SetBackgroundThreads(1, Env::LOW);
  env_->SetBackgroundThreads(1, Env::HIGH);
  DestroyAndReopen(options);

  // Writes `size` KB of 1KB values at keys start, start + stride,
  // start + 2 * stride, ... and waits for the resulting memtable flush.
  auto gen_l0_kb = [this](int start, int size, int stride) {
    Random rnd(301);
    for (int i = 0; i < size; i++) {
      ASSERT_OK(Put(Key(start + stride * i), RandomString(&rnd, 1024)));
    }
    dbfull()->TEST_WaitForFlushMemTable();
  };

  // Write 3 files that have the same key range.
  // Since level0_file_num_compaction_trigger is 3, compaction should be
  // triggered. The compaction should result in one L1 file.
  gen_l0_kb(0, 64, 1);
  ASSERT_EQ(NumTableFilesAtLevel(0), 1);
  gen_l0_kb(0, 64, 1);
  ASSERT_EQ(NumTableFilesAtLevel(0), 2);
  gen_l0_kb(0, 64, 1);
  dbfull()->TEST_WaitForCompact();
  ASSERT_EQ("0,1", FilesPerLevel());
  std::vector<LiveFileMetaData> metadata;
  db_->GetLiveFilesMetaData(&metadata);
  ASSERT_EQ(1U, metadata.size());
  ASSERT_LE(metadata[0].size, k64KB + k4KB);
  ASSERT_GE(metadata[0].size, k64KB - k4KB);

  // Test compaction trigger and target_file_size_base.
  // Reduce the compaction trigger to 2, and reduce the L1 file size to 32KB.
  // Writing two 64KB L0 files should trigger a compaction. Since these
  // 2 L0 files have the same key range, the compaction merges them and
  // should result in 2 32KB L1 files.
  ASSERT_OK(dbfull()->SetOptions({
      {"level0_file_num_compaction_trigger", "2"},
      {"target_file_size_base", ToString(k32KB)}
  }));

  gen_l0_kb(0, 64, 1);
  ASSERT_EQ("1,1", FilesPerLevel());
  gen_l0_kb(0, 64, 1);
  dbfull()->TEST_WaitForCompact();
  ASSERT_EQ("0,2", FilesPerLevel());
  metadata.clear();
  db_->GetLiveFilesMetaData(&metadata);
  ASSERT_EQ(2U, metadata.size());
  ASSERT_LE(metadata[0].size, k32KB + k4KB);
  ASSERT_GE(metadata[0].size, k32KB - k4KB);
  ASSERT_LE(metadata[1].size, k32KB + k4KB);
  ASSERT_GE(metadata[1].size, k32KB - k4KB);
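
  // A minimal sketch of the dynamic-option API exercised above (illustrative;
  // only option names that already appear in this test are used):
  //
  //   // SetOptions() takes string key/value pairs and applies them to the
  //   // running DB without a reopen; it returns a non-OK status if an
  //   // option name is unknown or a value fails to parse.
  //   ASSERT_OK(dbfull()->SetOptions({
  //       {"level0_file_num_compaction_trigger", "2"},
  //       {"target_file_size_base", ToString(k32KB)},
  //   }));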

  // Test max_bytes_for_level_base.
  // Increase the level base size to 1MB and write enough data to fill
  // L1 and L2. L1 size should be around 1MB while L2 size should be
  // around 1MB x 4.
  ASSERT_OK(dbfull()->SetOptions({
      {"max_bytes_for_level_base", ToString(k1MB)}
  }));

  // writing 96 x 64KB => 6 * 1024KB
  // (L1 + L2) = (1 + 4) * 1024KB
  for (int i = 0; i < 96; ++i) {
    gen_l0_kb(i, 64, 96);
  }
  dbfull()->TEST_WaitForCompact();
  ASSERT_GT(SizeAtLevel(1), k1MB / 2);
  ASSERT_LT(SizeAtLevel(1), k1MB + k1MB / 2);

  // Within (0.5, 1.5) of 4MB.
  ASSERT_GT(SizeAtLevel(2), 2 * k1MB);
  ASSERT_LT(SizeAtLevel(2), 6 * k1MB);

  // Test max_bytes_for_level_multiplier and
  // max_bytes_for_level_base. Now, reduce both the multiplier and the level
  // base. After filling enough data that fits in L1 - L3, we should see the
  // L1 size shrink to 128KB from the 1MB asserted previously. Same for L2.
  ASSERT_OK(dbfull()->SetOptions({
      {"max_bytes_for_level_multiplier", "2"},
      {"max_bytes_for_level_base", ToString(k128KB)}
  }));

  // writing 20 x 64KB = 10 x 128KB
  // (L1 + L2 + L3) = (1 + 2 + 4) * 128KB
  for (int i = 0; i < 20; ++i) {
    gen_l0_kb(i, 64, 32);
  }
  dbfull()->TEST_WaitForCompact();
  uint64_t total_size = SizeAtLevel(1) + SizeAtLevel(2) + SizeAtLevel(3);
  ASSERT_TRUE(total_size < k128KB * 7 * 1.5);

  // Test level0_stop_writes_trigger.
  // Clean up the memtable and L0, then block the compaction threads. If we
  // continue to write and flush memtables, we should see writes stop after
  // 8 memtable flushes since level0_stop_writes_trigger = 8.
  dbfull()->CompactRange(CompactRangeOptions(), nullptr, nullptr);
  // Block compaction
  SleepingBackgroundTask sleeping_task_low;
  env_->Schedule(&SleepingBackgroundTask::DoSleepTask, &sleeping_task_low,
                 Env::Priority::LOW);

  rocksdb::SyncPoint::GetInstance()->SetCallBack(
      "DBImpl::DelayWrite:Wait",
      [&](void* arg) { sleeping_task_low.WakeUp(); });
  rocksdb::SyncPoint::GetInstance()->EnableProcessing();

  ASSERT_EQ(NumTableFilesAtLevel(0), 0);
  int count = 0;
  Random rnd(301);
  WriteOptions wo;
  while (count < 64) {
    ASSERT_OK(Put(Key(count), RandomString(&rnd, 1024), wo));
    if (sleeping_task_low.WokenUp()) {
      break;
    }
    dbfull()->TEST_FlushMemTable(true);
    count++;
  }
  // Stop trigger = 8
  ASSERT_EQ(count, 8);
  // Unblock
  sleeping_task_low.WaitUntilDone();

  // Now reduce level0_stop_writes_trigger to 6. Clean up the memtables and
  // L0, block the compaction thread again, and perform the same puts and
  // memtable flushes; this time we should see the stop after 6 flushes.
  ASSERT_OK(dbfull()->SetOptions({
      {"level0_stop_writes_trigger", "6"}
  }));
  dbfull()->CompactRange(CompactRangeOptions(), nullptr, nullptr);
  ASSERT_EQ(NumTableFilesAtLevel(0), 0);

  // Block compaction again
  sleeping_task_low.Reset();
  env_->Schedule(&SleepingBackgroundTask::DoSleepTask, &sleeping_task_low,
                 Env::Priority::LOW);
  count = 0;
  while (count < 64) {
    ASSERT_OK(Put(Key(count), RandomString(&rnd, 1024), wo));
    if (sleeping_task_low.WokenUp()) {
      break;
    }
    dbfull()->TEST_FlushMemTable(true);
    count++;
  }
  ASSERT_EQ(count, 6);
  // Unblock
  sleeping_task_low.WaitUntilDone();
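
  // A minimal sketch of the compaction-blocking pattern used above
  // (illustrative; SleepingBackgroundTask is the test helper already in use):
  //
  //   SleepingBackgroundTask blocker;
  //   // Occupy the single LOW-priority thread so no compaction can run.
  //   env_->Schedule(&SleepingBackgroundTask::DoSleepTask, &blocker,
  //                  Env::Priority::LOW);
  //   // ... writes now pile up in L0 because compaction is starved ...
  //   blocker.WakeUp();          // let the sleeping task finish
  //   blocker.WaitUntilDone();   // join before reusing the thread pool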

  // Test disable_auto_compactions.
  // The compaction thread is unblocked but auto compaction is disabled. Write
  // 4 L0 files; a compaction would normally be triggered. With auto compaction
  // disabled, TEST_WaitForCompact waits for nothing and the number of L0
  // files does not change after the call.
  ASSERT_OK(dbfull()->SetOptions({
      {"disable_auto_compactions", "true"}
  }));
  dbfull()->CompactRange(CompactRangeOptions(), nullptr, nullptr);
  ASSERT_EQ(NumTableFilesAtLevel(0), 0);

  for (int i = 0; i < 4; ++i) {
    ASSERT_OK(Put(Key(i), RandomString(&rnd, 1024)));
    // Wait for compaction so that put won't stop
    dbfull()->TEST_FlushMemTable(true);
  }
  dbfull()->TEST_WaitForCompact();
  ASSERT_EQ(NumTableFilesAtLevel(0), 4);

  // Enable auto compaction and perform the same test; the number of L0 files
  // should be reduced after compaction.
  ASSERT_OK(dbfull()->SetOptions({
      {"disable_auto_compactions", "false"}
  }));
  dbfull()->CompactRange(CompactRangeOptions(), nullptr, nullptr);
  ASSERT_EQ(NumTableFilesAtLevel(0), 0);

  for (int i = 0; i < 4; ++i) {
    ASSERT_OK(Put(Key(i), RandomString(&rnd, 1024)));
    // Wait for compaction so that put won't stop
    dbfull()->TEST_FlushMemTable(true);
  }
  dbfull()->TEST_WaitForCompact();
  ASSERT_LT(NumTableFilesAtLevel(0), 4);

  rocksdb::SyncPoint::GetInstance()->DisableProcessing();
}

TEST_F(DBTest, FileCreationRandomFailure) {
  Options options;
  options.env = env_;
  options.create_if_missing = true;
  options.write_buffer_size = 100000;  // Small write buffer
  options.target_file_size_base = 200000;
  options.max_bytes_for_level_base = 1000000;
  options.max_bytes_for_level_multiplier = 2;

  DestroyAndReopen(options);
  Random rnd(301);

  const int kCDTKeysPerBuffer = 4;
  const int kTestSize = kCDTKeysPerBuffer * 4096;
  const int kTotalIteration = 100;
  // The second half of the test involves random failure
  // of file creation.
  const int kRandomFailureTest = kTotalIteration / 2;
  std::vector<std::string> values;
  for (int i = 0; i < kTestSize; ++i) {
    values.push_back("NOT_FOUND");
  }
  for (int j = 0; j < kTotalIteration; ++j) {
    if (j == kRandomFailureTest) {
      env_->non_writeable_rate_.store(90);
    }
    for (int k = 0; k < kTestSize; ++k) {
      // Here we expect some of the Puts to fail.
      std::string value = RandomString(&rnd, 100);
      Status s = Put(Key(k), Slice(value));
      if (s.ok()) {
        // update the latest successful put
        values[k] = value;
      }
      // But everything before we simulate the failure-test should succeed.
      if (j < kRandomFailureTest) {
        ASSERT_OK(s);
      }
    }
  }

  // If rocksdb does not do the correct job, an internal assert will fail here.
  dbfull()->TEST_WaitForFlushMemTable();
  dbfull()->TEST_WaitForCompact();

  // Verify that we have the latest successful update.
  for (int k = 0; k < kTestSize; ++k) {
    auto v = Get(Key(k));
    ASSERT_EQ(v, values[k]);
  }

  // Reopen and re-verify that we have the latest successful update.
  env_->non_writeable_rate_.store(0);
  Reopen(options);
  for (int k = 0; k < kTestSize; ++k) {
    auto v = Get(Key(k));
    ASSERT_EQ(v, values[k]);
  }
}
|
2015-03-17 21:08:00 +00:00
|
|
|
TEST_F(DBTest, DynamicMiscOptions) {
|
2014-10-23 22:34:21 +00:00
|
|
|
// Test max_sequential_skip_in_iterations
|
|
|
|
Options options;
|
|
|
|
options.env = env_;
|
|
|
|
options.create_if_missing = true;
|
|
|
|
options.max_sequential_skip_in_iterations = 16;
|
|
|
|
options.compression = kNoCompression;
|
|
|
|
options.statistics = rocksdb::CreateDBStatistics();
|
2014-10-29 18:59:18 +00:00
|
|
|
DestroyAndReopen(options);
|
2014-10-23 22:34:21 +00:00
|
|
|
|
|
|
|
auto assert_reseek_count = [this, &options](int key_start, int num_reseek) {
|
|
|
|
int key0 = key_start;
|
|
|
|
int key1 = key_start + 1;
|
|
|
|
int key2 = key_start + 2;
|
|
|
|
Random rnd(301);
|
|
|
|
ASSERT_OK(Put(Key(key0), RandomString(&rnd, 8)));
|
|
|
|
for (int i = 0; i < 10; ++i) {
|
|
|
|
ASSERT_OK(Put(Key(key1), RandomString(&rnd, 8)));
|
|
|
|
}
|
|
|
|
ASSERT_OK(Put(Key(key2), RandomString(&rnd, 8)));
|
|
|
|
std::unique_ptr<Iterator> iter(db_->NewIterator(ReadOptions()));
|
|
|
|
iter->Seek(Key(key1));
|
|
|
|
ASSERT_TRUE(iter->Valid());
|
|
|
|
ASSERT_EQ(iter->key().compare(Key(key1)), 0);
|
|
|
|
iter->Next();
|
|
|
|
ASSERT_TRUE(iter->Valid());
|
|
|
|
ASSERT_EQ(iter->key().compare(Key(key2)), 0);
|
|
|
|
ASSERT_EQ(num_reseek,
|
|
|
|
TestGetTickerCount(options, NUMBER_OF_RESEEKS_IN_ITERATION));
|
|
|
|
};
|
|
|
|
// No reseek
|
|
|
|
assert_reseek_count(100, 0);
|
|
|
|
|
2014-11-05 00:23:05 +00:00
|
|
|
ASSERT_OK(dbfull()->SetOptions({
|
2014-10-23 22:34:21 +00:00
|
|
|
{"max_sequential_skip_in_iterations", "4"}
|
|
|
|
}));
|
|
|
|
// Clear memtable and make new option effective
|
|
|
|
dbfull()->TEST_FlushMemTable(true);
|
|
|
|
// Trigger reseek
|
|
|
|
assert_reseek_count(200, 1);
|
|
|
|
|
2014-11-05 00:23:05 +00:00
|
|
|
ASSERT_OK(dbfull()->SetOptions({
|
2014-10-23 22:34:21 +00:00
|
|
|
{"max_sequential_skip_in_iterations", "16"}
|
|
|
|
}));
|
|
|
|
// Clear memtable and make new option effective
|
|
|
|
dbfull()->TEST_FlushMemTable(true);
|
|
|
|
// No further reseek; the reseek ticker is cumulative, so it stays at 1
|
|
|
|
assert_reseek_count(300, 1);
|
|
|
|
}
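For reference, `DB::SetOptions()` (used above) takes a map of option-name/value strings and applies mutable options at runtime. Below is a minimal sketch outside the test harness, assuming an open `rocksdb::DB* db`; the helper name and the chosen value are illustrative, not part of the test:
```
#include <cassert>

#include "rocksdb/db.h"

// Hypothetical helper: lower the sequential-skip threshold so iterators
// reseek sooner instead of scanning over many hidden entries.
void TightenIteratorReseek(rocksdb::DB* db) {
  rocksdb::Status s =
      db->SetOptions({{"max_sequential_skip_in_iterations", "4"}});
  assert(s.ok());  // fails if the option name is unknown or not mutable
}
```
As the test above shows, flushing the memtable after the call is what makes the new value take effect for subsequent reads.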
|
|
|
|
|
2015-03-17 21:08:00 +00:00
|
|
|
TEST_F(DBTest, DontDeletePendingOutputs) {
|
2014-11-07 19:50:34 +00:00
|
|
|
Options options;
|
|
|
|
options.env = env_;
|
|
|
|
options.create_if_missing = true;
|
|
|
|
DestroyAndReopen(options);
|
|
|
|
|
|
|
|
// Every time we write to a table file, call FOF/POF (FindObsoleteFiles /
// PurgeObsoleteFiles) with a full DB scan. This
|
|
|
|
// will make sure our pending_outputs_ protection works correctly
|
|
|
|
std::function<void()> purge_obsolete_files_function = [&]() {
|
2015-02-12 17:54:48 +00:00
|
|
|
JobContext job_context(0);
|
2014-11-07 19:50:34 +00:00
|
|
|
dbfull()->TEST_LockMutex();
|
|
|
|
dbfull()->FindObsoleteFiles(&job_context, true /*force*/);
|
|
|
|
dbfull()->TEST_UnlockMutex();
|
|
|
|
dbfull()->PurgeObsoleteFiles(job_context);
|
2015-07-07 19:10:10 +00:00
|
|
|
job_context.Clean();
|
2014-11-07 19:50:34 +00:00
|
|
|
};
|
|
|
|
|
|
|
|
env_->table_write_callback_ = &purge_obsolete_files_function;
|
|
|
|
|
|
|
|
for (int i = 0; i < 2; ++i) {
|
|
|
|
ASSERT_OK(Put("a", "begin"));
|
|
|
|
ASSERT_OK(Put("z", "end"));
|
|
|
|
ASSERT_OK(Flush());
|
|
|
|
}
|
|
|
|
|
|
|
|
// If pending output guard does not work correctly, PurgeObsoleteFiles() will
|
|
|
|
// delete the file that Compaction is trying to create, causing this: error
|
|
|
|
// db/db_test.cc:975: IO error:
|
|
|
|
// /tmp/rocksdbtest-1552237650/db_test/000009.sst: No such file or directory
|
|
|
|
Compact("a", "b");
|
|
|
|
}
|
|
|
|
|
2015-03-17 21:08:00 +00:00
|
|
|
TEST_F(DBTest, DontDeleteMovedFile) {
|
2014-12-22 11:04:45 +00:00
|
|
|
// This test triggers move compaction and verifies that the file is not
|
|
|
|
// deleted when it's part of move compaction
|
|
|
|
Options options = CurrentOptions();
|
|
|
|
options.env = env_;
|
|
|
|
options.create_if_missing = true;
|
|
|
|
options.max_bytes_for_level_base = 1024 * 1024; // 1 MB
|
|
|
|
options.level0_file_num_compaction_trigger =
|
|
|
|
2; // trigger compaction when we have 2 files
|
|
|
|
DestroyAndReopen(options);
|
|
|
|
|
|
|
|
Random rnd(301);
|
|
|
|
// Create two 1MB sst files
|
|
|
|
for (int i = 0; i < 2; ++i) {
|
|
|
|
// Create 1MB sst file
|
|
|
|
for (int j = 0; j < 100; ++j) {
|
|
|
|
ASSERT_OK(Put(Key(i * 50 + j), RandomString(&rnd, 10 * 1024)));
|
|
|
|
}
|
|
|
|
ASSERT_OK(Flush());
|
|
|
|
}
|
|
|
|
// this should execute both L0->L1 and L1->(move)->L2 compactions
|
|
|
|
dbfull()->TEST_WaitForCompact();
|
|
|
|
ASSERT_EQ("0,0,1", FilesPerLevel(0));
|
|
|
|
|
|
|
|
// If the moved file is actually deleted (the move-safeguard in
|
|
|
|
// ~Version::Version() is not there), we get this failure:
|
|
|
|
// Corruption: Can't access /000009.sst
|
|
|
|
Reopen(options);
|
|
|
|
}
|
|
|
|
|
2015-03-17 21:08:00 +00:00
|
|
|
TEST_F(DBTest, OptimizeFiltersForHits) {
|
2015-02-17 16:03:45 +00:00
|
|
|
Options options = CurrentOptions();
|
2015-03-11 00:53:22 +00:00
|
|
|
options.write_buffer_size = 256 * 1024;
|
|
|
|
options.target_file_size_base = 256 * 1024;
|
2015-02-17 16:03:45 +00:00
|
|
|
options.level0_file_num_compaction_trigger = 2;
|
|
|
|
options.level0_slowdown_writes_trigger = 2;
|
|
|
|
options.level0_stop_writes_trigger = 4;
|
2015-03-11 00:53:22 +00:00
|
|
|
options.max_bytes_for_level_base = 256 * 1024;
|
2015-02-17 16:03:45 +00:00
|
|
|
options.max_write_buffer_number = 2;
|
|
|
|
options.max_background_compactions = 8;
|
|
|
|
options.max_background_flushes = 8;
|
|
|
|
options.compaction_style = kCompactionStyleLevel;
|
|
|
|
BlockBasedTableOptions bbto;
|
|
|
|
bbto.filter_policy.reset(NewBloomFilterPolicy(10, true));
|
|
|
|
bbto.whole_key_filtering = true;
|
|
|
|
options.table_factory.reset(NewBlockBasedTableFactory(bbto));
|
|
|
|
options.optimize_filters_for_hits = true;
|
|
|
|
options.statistics = rocksdb::CreateDBStatistics();
|
|
|
|
CreateAndReopenWithCF({"mypikachu"}, options);
|
|
|
|
|
|
|
|
int numkeys = 200000;
|
|
|
|
for (int i = 0; i < 20; i += 2) {
|
|
|
|
for (int j = i; j < numkeys; j += 20) {
|
|
|
|
ASSERT_OK(Put(1, Key(j), "val"));
|
|
|
|
}
|
|
|
|
}
|
|
|
|
|
|
|
|
|
|
|
|
ASSERT_OK(Flush(1));
|
|
|
|
dbfull()->TEST_WaitForCompact();
|
|
|
|
|
|
|
|
for (int i = 1; i < numkeys; i += 2) {
|
|
|
|
ASSERT_EQ(Get(1, Key(i)), "NOT_FOUND");
|
|
|
|
}
|
|
|
|
|
|
|
|
ASSERT_EQ(0, TestGetTickerCount(options, GET_HIT_L0));
|
|
|
|
ASSERT_EQ(0, TestGetTickerCount(options, GET_HIT_L1));
|
|
|
|
ASSERT_EQ(0, TestGetTickerCount(options, GET_HIT_L2_AND_UP));
|
|
|
|
|
|
|
|
// When optimize_filters_for_hits is ON, the last level which has
|
|
|
|
// most of the keys does not use bloom filters. We end up using
|
|
|
|
// bloom filters in a very small number of cases. Without the flag,
|
|
|
|
// this number would be close to 150000 (all the keys at the last level) +
|
|
|
|
// some use in the upper levels
|
|
|
|
//
|
|
|
|
ASSERT_GT(90000, TestGetTickerCount(options, BLOOM_FILTER_USEFUL));
|
|
|
|
|
|
|
|
for (int i = 0; i < numkeys; i += 2) {
|
|
|
|
ASSERT_EQ(Get(1, Key(i)), "val");
|
|
|
|
}
|
|
|
|
}
|
|
|
|
|
2015-03-17 21:08:00 +00:00
|
|
|
TEST_F(DBTest, L0L1L2AndUpHitCounter) {
|
2015-02-09 22:53:58 +00:00
|
|
|
Options options = CurrentOptions();
|
|
|
|
options.write_buffer_size = 32 * 1024;
|
|
|
|
options.target_file_size_base = 32 * 1024;
|
|
|
|
options.level0_file_num_compaction_trigger = 2;
|
|
|
|
options.level0_slowdown_writes_trigger = 2;
|
|
|
|
options.level0_stop_writes_trigger = 4;
|
|
|
|
options.max_bytes_for_level_base = 64 * 1024;
|
|
|
|
options.max_write_buffer_number = 2;
|
|
|
|
options.max_background_compactions = 8;
|
|
|
|
options.max_background_flushes = 8;
|
|
|
|
options.statistics = rocksdb::CreateDBStatistics();
|
|
|
|
CreateAndReopenWithCF({"mypikachu"}, options);
|
|
|
|
|
|
|
|
int numkeys = 20000;
|
|
|
|
for (int i = 0; i < numkeys; i++) {
|
|
|
|
ASSERT_OK(Put(1, Key(i), "val"));
|
|
|
|
}
|
|
|
|
ASSERT_EQ(0, TestGetTickerCount(options, GET_HIT_L0));
|
|
|
|
ASSERT_EQ(0, TestGetTickerCount(options, GET_HIT_L1));
|
|
|
|
ASSERT_EQ(0, TestGetTickerCount(options, GET_HIT_L2_AND_UP));
|
|
|
|
|
|
|
|
ASSERT_OK(Flush(1));
|
|
|
|
dbfull()->TEST_WaitForCompact();
|
|
|
|
|
|
|
|
for (int i = 0; i < numkeys; i++) {
|
|
|
|
ASSERT_EQ(Get(1, Key(i)), "val");
|
|
|
|
}
|
|
|
|
|
|
|
|
ASSERT_GT(TestGetTickerCount(options, GET_HIT_L0), 100);
|
|
|
|
ASSERT_GT(TestGetTickerCount(options, GET_HIT_L1), 100);
|
|
|
|
ASSERT_GT(TestGetTickerCount(options, GET_HIT_L2_AND_UP), 100);
|
|
|
|
|
|
|
|
ASSERT_EQ(numkeys, TestGetTickerCount(options, GET_HIT_L0) +
|
|
|
|
TestGetTickerCount(options, GET_HIT_L1) +
|
|
|
|
TestGetTickerCount(options, GET_HIT_L2_AND_UP));
|
|
|
|
}
|
|
|
|
|
2015-03-17 21:08:00 +00:00
|
|
|
TEST_F(DBTest, EncodeDecompressedBlockSizeTest) {
|
2015-01-15 00:24:24 +00:00
|
|
|
// iter 0 -- zlib
|
|
|
|
// iter 1 -- bzip2
|
|
|
|
// iter 2 -- lz4
|
|
|
|
// iter 3 -- lz4HC
|
|
|
|
CompressionType compressions[] = {kZlibCompression, kBZip2Compression,
|
|
|
|
kLZ4Compression, kLZ4HCCompression};
|
|
|
|
for (int iter = 0; iter < 4; ++iter) {
|
2015-06-18 21:55:05 +00:00
|
|
|
if (!CompressionTypeSupported(compressions[iter])) {
|
|
|
|
continue;
|
|
|
|
}
|
2015-01-15 00:24:24 +00:00
|
|
|
// first_table_version 1 -- generate with table_version == 1, read with
|
|
|
|
// table_version == 2
|
|
|
|
// first_table_version 2 -- generate with table_version == 2, read with
|
|
|
|
// table_version == 1
|
|
|
|
for (int first_table_version = 1; first_table_version <= 2;
|
|
|
|
++first_table_version) {
|
|
|
|
BlockBasedTableOptions table_options;
|
|
|
|
table_options.format_version = first_table_version;
|
|
|
|
table_options.filter_policy.reset(NewBloomFilterPolicy(10));
|
|
|
|
Options options = CurrentOptions();
|
|
|
|
options.table_factory.reset(NewBlockBasedTableFactory(table_options));
|
|
|
|
options.create_if_missing = true;
|
|
|
|
options.compression = compressions[iter];
|
|
|
|
DestroyAndReopen(options);
|
|
|
|
|
|
|
|
int kNumKeysWritten = 100000;
|
|
|
|
|
|
|
|
Random rnd(301);
|
|
|
|
for (int i = 0; i < kNumKeysWritten; ++i) {
|
|
|
|
// compressible string
|
|
|
|
ASSERT_OK(Put(Key(i), RandomString(&rnd, 128) + std::string(128, 'a')));
|
|
|
|
}
|
|
|
|
|
|
|
|
table_options.format_version = first_table_version == 1 ? 2 : 1;
|
|
|
|
options.table_factory.reset(NewBlockBasedTableFactory(table_options));
|
|
|
|
Reopen(options);
|
|
|
|
for (int i = 0; i < kNumKeysWritten; ++i) {
|
|
|
|
auto r = Get(Key(i));
|
|
|
|
ASSERT_EQ(r.substr(128), std::string(128, 'a'));
|
|
|
|
}
|
|
|
|
}
|
|
|
|
}
|
|
|
|
}
|
|
|
|
|
2015-03-17 21:08:00 +00:00
|
|
|
TEST_F(DBTest, MutexWaitStats) {
|
2015-02-05 05:39:45 +00:00
|
|
|
Options options = CurrentOptions();
|
|
|
|
options.create_if_missing = true;
|
|
|
|
options.statistics = rocksdb::CreateDBStatistics();
|
|
|
|
CreateAndReopenWithCF({"pikachu"}, options);
|
|
|
|
const int64_t kMutexWaitDelay = 100;
|
|
|
|
ThreadStatusUtil::TEST_SetStateDelay(
|
|
|
|
ThreadStatus::STATE_MUTEX_WAIT, kMutexWaitDelay);
|
|
|
|
ASSERT_OK(Put("hello", "rocksdb"));
|
|
|
|
ASSERT_GE(TestGetTickerCount(
|
|
|
|
options, DB_MUTEX_WAIT_MICROS), kMutexWaitDelay);
|
|
|
|
ThreadStatusUtil::TEST_SetStateDelay(
|
|
|
|
ThreadStatus::STATE_MUTEX_WAIT, 0);
|
|
|
|
}
|
|
|
|
|
2015-02-10 01:38:32 +00:00
|
|
|
// This reproduces a bug where we don't delete a file because when it was
|
|
|
|
// supposed to be deleted, it was blocked by pending_outputs
|
|
|
|
// Consider:
|
|
|
|
// 1. current file_number is 13
|
|
|
|
// 2. compaction (1) starts, blocks deletion of all files starting with 13
|
|
|
|
// (pending outputs)
|
|
|
|
// 3. file 13 is created by compaction (2)
|
|
|
|
// 4. file 13 is consumed by compaction (3) and file 15 is created. Since file
|
|
|
|
// 13 has no references, it is put into VersionSet::obsolete_files_
|
|
|
|
// 5. FindObsoleteFiles() gets file 13 from VersionSet::obsolete_files_. File 13
|
|
|
|
// is deleted from obsolete_files_ set.
|
|
|
|
// 6. PurgeObsoleteFiles() tries to delete file 13, but this file is blocked by
|
|
|
|
// pending outputs since compaction (1) is still running. It is not deleted and
|
|
|
|
// it is not present in obsolete_files_ anymore. Therefore, we never delete it.
|
2015-03-17 21:08:00 +00:00
|
|
|
TEST_F(DBTest, DeleteObsoleteFilesPendingOutputs) {
|
2015-02-10 01:38:32 +00:00
|
|
|
Options options = CurrentOptions();
|
|
|
|
options.env = env_;
|
|
|
|
options.write_buffer_size = 2 * 1024 * 1024; // 2 MB
|
|
|
|
options.max_bytes_for_level_base = 1024 * 1024; // 1 MB
|
|
|
|
options.level0_file_num_compaction_trigger =
|
|
|
|
2; // trigger compaction when we have 2 files
|
|
|
|
options.max_background_flushes = 2;
|
|
|
|
options.max_background_compactions = 2;
|
2015-06-04 02:57:01 +00:00
|
|
|
|
|
|
|
OnFileDeletionListener* listener = new OnFileDeletionListener();
|
|
|
|
options.listeners.emplace_back(listener);
|
|
|
|
|
2015-02-10 01:38:32 +00:00
|
|
|
Reopen(options);
|
|
|
|
|
|
|
|
Random rnd(301);
|
|
|
|
// Create two 1MB sst files
|
|
|
|
for (int i = 0; i < 2; ++i) {
|
|
|
|
// Create 1MB sst file
|
|
|
|
for (int j = 0; j < 100; ++j) {
|
|
|
|
ASSERT_OK(Put(Key(i * 50 + j), RandomString(&rnd, 10 * 1024)));
|
|
|
|
}
|
|
|
|
ASSERT_OK(Flush());
|
|
|
|
}
|
|
|
|
// this should execute both L0->L1 and L1->(move)->L2 compactions
|
|
|
|
dbfull()->TEST_WaitForCompact();
|
|
|
|
ASSERT_EQ("0,0,1", FilesPerLevel(0));
|
|
|
|
|
|
|
|
SleepingBackgroundTask blocking_thread;
|
|
|
|
port::Mutex mutex_;
|
|
|
|
bool already_blocked(false);
|
|
|
|
|
|
|
|
// block the flush
|
|
|
|
std::function<void()> block_first_time = [&]() {
|
|
|
|
bool blocking = false;
|
|
|
|
{
|
|
|
|
MutexLock l(&mutex_);
|
|
|
|
if (!already_blocked) {
|
|
|
|
blocking = true;
|
|
|
|
already_blocked = true;
|
|
|
|
}
|
|
|
|
}
|
|
|
|
if (blocking) {
|
|
|
|
blocking_thread.DoSleep();
|
|
|
|
}
|
|
|
|
};
|
|
|
|
env_->table_write_callback_ = &block_first_time;
|
|
|
|
// Write 256 * 10KB values, enough to overflow the 2MB write buffer
|
|
|
|
for (int j = 0; j < 256; ++j) {
|
|
|
|
ASSERT_OK(Put(Key(j), RandomString(&rnd, 10 * 1024)));
|
|
|
|
}
|
|
|
|
// this should trigger a flush, which is blocked with block_first_time
|
|
|
|
// pending_file is protecting all the files created after this point
|
|
|
|
|
|
|
|
ASSERT_OK(dbfull()->TEST_CompactRange(2, nullptr, nullptr));
|
|
|
|
|
|
|
|
ASSERT_EQ("0,0,0,1", FilesPerLevel(0));
|
|
|
|
std::vector<LiveFileMetaData> metadata;
|
|
|
|
db_->GetLiveFilesMetaData(&metadata);
|
|
|
|
ASSERT_EQ(metadata.size(), 1U);
|
|
|
|
auto file_on_L2 = metadata[0].name;
|
2015-06-04 02:57:01 +00:00
|
|
|
listener->SetExpectedFileName(dbname_ + file_on_L2);
|
2015-02-10 01:38:32 +00:00
|
|
|
|
Allowing L0 -> L1 trivial move on sorted data
Summary:
This diff updates the logic of how we do trivial move: now a trivial move can run on any number of files in the input level as long as they are not overlapping
The conditions for trivial move have been updated
Introduced conditions:
- Trivial move cannot happen if we have a compaction filter (except if the compaction is not manual)
- Input level files cannot be overlapping
Removed conditions:
- Trivial move only ran when the compaction was not manual
- The input level could contain only 1 file
More context on what tests failed because of Trivial move
```
DBTest.CompactionsGenerateMultipleFiles
This test expects compaction on a file in L0 to generate multiple files in L1; it fails with trivial move because we end up with one file in L1
```
```
DBTest.NoSpaceCompactRange
This test expects compaction to fail when we force the environment to report running out of space; of course this is not valid in the trivial move situation,
because a trivial move does not need any extra space and did not check for that
```
```
DBTest.DropWrites
Similar to DBTest.NoSpaceCompactRange
```
```
DBTest.DeleteObsoleteFilesPendingOutputs
This test expects that a file in L2 is deleted after it's moved to L3; this is not valid with trivial move because, although the file was moved, it is now used by L3
```
```
CuckooTableDBTest.CompactionIntoMultipleFiles
Same as DBTest.CompactionsGenerateMultipleFiles
```
This diff is based on a work by @sdong https://reviews.facebook.net/D34149
Test Plan: make -j64 check
Reviewers: rven, sdong, igor
Reviewed By: igor
Subscribers: yhchiang, ott, march, dhruba, sdong
Differential Revision: https://reviews.facebook.net/D34797
2015-06-04 23:51:25 +00:00
|
|
|
ASSERT_OK(dbfull()->TEST_CompactRange(3, nullptr, nullptr, nullptr,
|
|
|
|
true /* disallow trivial move */));
|
2015-02-10 01:38:32 +00:00
|
|
|
ASSERT_EQ("0,0,0,0,1", FilesPerLevel(0));
|
|
|
|
|
|
|
|
// finish the flush!
|
|
|
|
blocking_thread.WakeUp();
|
|
|
|
blocking_thread.WaitUntilDone();
|
|
|
|
dbfull()->TEST_WaitForFlushMemTable();
|
|
|
|
ASSERT_EQ("1,0,0,0,1", FilesPerLevel(0));
|
|
|
|
|
|
|
|
metadata.clear();
|
|
|
|
db_->GetLiveFilesMetaData(&metadata);
|
|
|
|
ASSERT_EQ(metadata.size(), 2U);
|
|
|
|
|
2015-06-04 23:51:25 +00:00
|
|
|
// This file should have been deleted during last compaction
|
2015-07-21 00:20:40 +00:00
|
|
|
ASSERT_EQ(Status::NotFound(), env_->FileExists(dbname_ + file_on_L2));
|
2015-06-04 02:57:01 +00:00
|
|
|
listener->VerifyMatchedCount(1);
|
2015-02-10 01:38:32 +00:00
|
|
|
}
|
|
|
|
|
2015-03-17 21:08:00 +00:00
|
|
|
TEST_F(DBTest, CloseSpeedup) {
|
2015-03-17 01:49:14 +00:00
|
|
|
Options options = CurrentOptions();
|
|
|
|
options.compaction_style = kCompactionStyleLevel;
|
|
|
|
options.write_buffer_size = 100 << 10; // 100KB
|
|
|
|
options.level0_file_num_compaction_trigger = 2;
|
|
|
|
options.num_levels = 4;
|
|
|
|
options.max_bytes_for_level_base = 400 * 1024;
|
|
|
|
options.max_write_buffer_number = 16;
|
|
|
|
|
|
|
|
// Block background threads
|
|
|
|
env_->SetBackgroundThreads(1, Env::LOW);
|
|
|
|
env_->SetBackgroundThreads(1, Env::HIGH);
|
|
|
|
SleepingBackgroundTask sleeping_task_low;
|
|
|
|
env_->Schedule(&SleepingBackgroundTask::DoSleepTask, &sleeping_task_low,
|
|
|
|
Env::Priority::LOW);
|
|
|
|
SleepingBackgroundTask sleeping_task_high;
|
|
|
|
env_->Schedule(&SleepingBackgroundTask::DoSleepTask, &sleeping_task_high,
|
|
|
|
Env::Priority::HIGH);
|
|
|
|
|
|
|
|
std::vector<std::string> filenames;
|
|
|
|
env_->GetChildren(dbname_, &filenames);
|
|
|
|
// Delete archival files.
|
|
|
|
for (size_t i = 0; i < filenames.size(); ++i) {
|
|
|
|
env_->DeleteFile(dbname_ + "/" + filenames[i]);
|
|
|
|
}
|
|
|
|
env_->DeleteDir(dbname_);
|
|
|
|
DestroyAndReopen(options);
|
|
|
|
|
|
|
|
rocksdb::SyncPoint::GetInstance()->EnableProcessing();
|
|
|
|
env_->SetBackgroundThreads(1, Env::LOW);
|
|
|
|
env_->SetBackgroundThreads(1, Env::HIGH);
|
|
|
|
Random rnd(301);
|
|
|
|
int key_idx = 0;
|
|
|
|
|
|
|
|
// First three 110KB files are not going to level 2
|
|
|
|
// After that, (100K, 200K)
|
|
|
|
for (int num = 0; num < 5; num++) {
|
|
|
|
GenerateNewFile(&rnd, &key_idx, true);
|
|
|
|
}
|
|
|
|
|
|
|
|
ASSERT_EQ(0, GetSstFileCount(dbname_));
|
|
|
|
|
|
|
|
Close();
|
|
|
|
ASSERT_EQ(0, GetSstFileCount(dbname_));
|
|
|
|
|
|
|
|
// Unblock background threads
|
|
|
|
sleeping_task_high.WakeUp();
|
|
|
|
sleeping_task_high.WaitUntilDone();
|
|
|
|
sleeping_task_low.WakeUp();
|
|
|
|
sleeping_task_low.WaitUntilDone();
|
|
|
|
|
|
|
|
Destroy(options);
|
|
|
|
}
|
2015-03-03 18:59:36 +00:00
|
|
|
|
|
|
|
class DelayedMergeOperator : public AssociativeMergeOperator {
|
|
|
|
private:
|
|
|
|
DBTest* db_test_;
|
|
|
|
|
|
|
|
public:
|
|
|
|
explicit DelayedMergeOperator(DBTest* d) : db_test_(d) {}
|
|
|
|
virtual bool Merge(const Slice& key, const Slice* existing_value,
|
|
|
|
const Slice& value, std::string* new_value,
|
|
|
|
Logger* logger) const override {
|
2015-05-15 22:52:51 +00:00
|
|
|
db_test_->env_->addon_time_.fetch_add(1000);
|
2015-03-03 18:59:36 +00:00
|
|
|
return true;
|
|
|
|
}
|
|
|
|
|
|
|
|
virtual const char* Name() const override { return "DelayedMergeOperator"; }
|
|
|
|
};
|
|
|
|
|
|
|
|
TEST_F(DBTest, MergeTestTime) {
|
|
|
|
std::string one, two, three;
|
|
|
|
PutFixed64(&one, 1);
|
|
|
|
PutFixed64(&two, 2);
|
|
|
|
PutFixed64(&three, 3);
|
|
|
|
|
|
|
|
// Enable time profiling
|
|
|
|
SetPerfLevel(kEnableTime);
|
2015-05-15 22:52:51 +00:00
|
|
|
this->env_->addon_time_.store(0);
|
2015-03-03 18:59:36 +00:00
|
|
|
Options options;
|
|
|
|
options = CurrentOptions(options);
|
|
|
|
options.statistics = rocksdb::CreateDBStatistics();
|
|
|
|
options.merge_operator.reset(new DelayedMergeOperator(this));
|
|
|
|
DestroyAndReopen(options);
|
|
|
|
|
|
|
|
ASSERT_EQ(TestGetTickerCount(options, MERGE_OPERATION_TOTAL_TIME), 0);
|
|
|
|
db_->Put(WriteOptions(), "foo", one);
|
|
|
|
ASSERT_OK(Flush());
|
|
|
|
ASSERT_OK(db_->Merge(WriteOptions(), "foo", two));
|
|
|
|
ASSERT_OK(Flush());
|
|
|
|
ASSERT_OK(db_->Merge(WriteOptions(), "foo", three));
|
|
|
|
ASSERT_OK(Flush());
|
|
|
|
|
|
|
|
ReadOptions opt;
|
|
|
|
opt.verify_checksums = true;
|
|
|
|
opt.snapshot = nullptr;
|
|
|
|
std::string result;
|
|
|
|
db_->Get(opt, "foo", &result);
|
|
|
|
|
2015-03-30 21:04:21 +00:00
|
|
|
ASSERT_LT(TestGetTickerCount(options, MERGE_OPERATION_TOTAL_TIME), 2800000);
|
|
|
|
ASSERT_GT(TestGetTickerCount(options, MERGE_OPERATION_TOTAL_TIME), 1200000);
|
2015-03-03 18:59:36 +00:00
|
|
|
|
|
|
|
ReadOptions read_options;
|
|
|
|
std::unique_ptr<Iterator> iter(db_->NewIterator(read_options));
|
|
|
|
int count = 0;
|
|
|
|
for (iter->SeekToFirst(); iter->Valid(); iter->Next()) {
|
|
|
|
ASSERT_OK(iter->status());
|
|
|
|
++count;
|
|
|
|
}
|
|
|
|
|
|
|
|
ASSERT_EQ(1, count);
|
|
|
|
|
2015-03-30 21:04:21 +00:00
|
|
|
ASSERT_LT(TestGetTickerCount(options, MERGE_OPERATION_TOTAL_TIME), 6000000);
|
|
|
|
ASSERT_GT(TestGetTickerCount(options, MERGE_OPERATION_TOTAL_TIME), 3200000);
|
2015-03-03 18:59:36 +00:00
|
|
|
}
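`TestGetTickerCount()` above is a test-harness convenience; the ticker itself is read off the `Statistics` object. A short sketch, assuming `options.statistics` was set with `rocksdb::CreateDBStatistics()` as in this test:
```
#include "rocksdb/statistics.h"

// Cumulative time spent inside the merge operator, as accumulated by the
// MERGE_OPERATION_TOTAL_TIME ticker (time profiling must be enabled,
// e.g. via SetPerfLevel(kEnableTime) as above).
uint64_t merge_time =
    options.statistics->getTickerCount(rocksdb::MERGE_OPERATION_TOTAL_TIME);
```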
|
|
|
|
|
2015-08-05 05:19:07 +00:00
|
|
|
TEST_P(DBTestWithParam, MergeCompactionTimeTest) {
|
2015-03-03 18:59:36 +00:00
|
|
|
SetPerfLevel(kEnableTime);
|
|
|
|
Options options;
|
|
|
|
options = CurrentOptions(options);
|
|
|
|
options.compaction_filter_factory = std::make_shared<KeepFilterFactory>();
|
|
|
|
options.statistics = rocksdb::CreateDBStatistics();
|
|
|
|
options.merge_operator.reset(new DelayedMergeOperator(this));
|
|
|
|
options.compaction_style = kCompactionStyleUniversal;
|
2015-08-05 05:19:07 +00:00
|
|
|
options.num_subcompactions = num_subcompactions_;
|
2015-03-03 18:59:36 +00:00
|
|
|
DestroyAndReopen(options);
|
|
|
|
|
|
|
|
for (int i = 0; i < 1000; i++) {
|
|
|
|
ASSERT_OK(db_->Merge(WriteOptions(), "foo", "TEST"));
|
|
|
|
ASSERT_OK(Flush());
|
|
|
|
}
|
|
|
|
dbfull()->TEST_WaitForFlushMemTable();
|
|
|
|
dbfull()->TEST_WaitForCompact();
|
|
|
|
|
|
|
|
ASSERT_NE(TestGetTickerCount(options, MERGE_OPERATION_TOTAL_TIME), 0);
|
|
|
|
}
|
|
|
|
|
2015-08-05 05:19:07 +00:00
|
|
|
TEST_P(DBTestWithParam, FilterCompactionTimeTest) {
|
2015-03-03 18:59:36 +00:00
|
|
|
Options options;
|
|
|
|
options.compaction_filter_factory =
|
|
|
|
std::make_shared<DelayFilterFactory>(this);
|
|
|
|
options.disable_auto_compactions = true;
|
|
|
|
options.create_if_missing = true;
|
|
|
|
options.statistics = rocksdb::CreateDBStatistics();
|
2015-08-05 05:19:07 +00:00
|
|
|
options.num_subcompactions = num_subcompactions_;
|
2015-03-03 18:59:36 +00:00
|
|
|
options = CurrentOptions(options);
|
|
|
|
DestroyAndReopen(options);
|
|
|
|
|
|
|
|
// put some data
|
|
|
|
for (int table = 0; table < 4; ++table) {
|
|
|
|
for (int i = 0; i < 10 + table; ++i) {
|
|
|
|
Put(ToString(table * 100 + i), "val");
|
|
|
|
}
|
|
|
|
Flush();
|
|
|
|
}
|
|
|
|
|
2015-06-17 21:36:14 +00:00
|
|
|
ASSERT_OK(db_->CompactRange(CompactRangeOptions(), nullptr, nullptr));
|
2015-03-03 18:59:36 +00:00
|
|
|
ASSERT_EQ(0U, CountLiveFiles());
|
|
|
|
|
|
|
|
Reopen(options);
|
|
|
|
|
|
|
|
Iterator* itr = db_->NewIterator(ReadOptions());
|
|
|
|
itr->SeekToFirst();
|
|
|
|
ASSERT_NE(TestGetTickerCount(options, FILTER_OPERATION_TOTAL_TIME), 0);
|
|
|
|
delete itr;
|
|
|
|
}
|
|
|
|
|
2015-03-30 19:04:10 +00:00
|
|
|
TEST_F(DBTest, TestLogCleanup) {
|
|
|
|
Options options = CurrentOptions();
|
|
|
|
options.write_buffer_size = 64 * 1024; // very small
|
|
|
|
// only two memtables allowed ==> only two log files
|
|
|
|
options.max_write_buffer_number = 2;
|
|
|
|
Reopen(options);
|
|
|
|
|
|
|
|
for (int i = 0; i < 100000; ++i) {
|
|
|
|
Put(Key(i), "val");
|
|
|
|
// only 2 memtables will be alive, so logs_to_free needs to always be below
|
|
|
|
// 2
|
|
|
|
ASSERT_LT(dbfull()->TEST_LogsToFreeSize(), static_cast<size_t>(3));
|
|
|
|
}
|
|
|
|
}
|
|
|
|
|
2015-04-01 23:55:08 +00:00
|
|
|
TEST_F(DBTest, EmptyCompactedDB) {
|
|
|
|
Options options;
|
|
|
|
options.max_open_files = -1;
|
|
|
|
options = CurrentOptions(options);
|
|
|
|
Close();
|
|
|
|
ASSERT_OK(ReadOnlyReopen(options));
|
|
|
|
Status s = Put("new", "value");
|
|
|
|
ASSERT_TRUE(s.IsNotSupported());
|
|
|
|
Close();
|
|
|
|
}
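`ReadOnlyReopen()` is a DBTest helper; the equivalent public entry point is `DB::OpenForReadOnly()`. A minimal sketch of the same behavior against the public API, assuming `dbname` names an existing database (the helper name is illustrative):
```
#include <cassert>
#include <string>

#include "rocksdb/db.h"

// Open a database read-only and confirm that writes are rejected.
void OpenReadOnlyAndProbe(const std::string& dbname) {
  rocksdb::DB* db = nullptr;
  rocksdb::Options opts;
  rocksdb::Status s = rocksdb::DB::OpenForReadOnly(opts, dbname, &db);
  if (s.ok()) {
    rocksdb::Status put_s = db->Put(rocksdb::WriteOptions(), "new", "value");
    assert(put_s.IsNotSupported());  // read-only handles reject writes
    delete db;
  }
}
```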
|
|
|
|
|
2015-06-04 19:03:40 +00:00
|
|
|
class CountingDeleteTabPropCollector : public TablePropertiesCollector {
|
|
|
|
public:
|
|
|
|
const char* Name() const override { return "CountingDeleteTabPropCollector"; }
|
|
|
|
|
|
|
|
Status AddUserKey(const Slice& user_key, const Slice& value, EntryType type,
|
|
|
|
SequenceNumber seq, uint64_t file_size) override {
|
|
|
|
if (type == kEntryDelete) {
|
|
|
|
num_deletes_++;
|
|
|
|
}
|
|
|
|
return Status::OK();
|
|
|
|
}
|
|
|
|
|
|
|
|
bool NeedCompact() const override { return num_deletes_ > 10; }
|
|
|
|
|
|
|
|
UserCollectedProperties GetReadableProperties() const override {
|
|
|
|
return UserCollectedProperties{};
|
|
|
|
}
|
|
|
|
|
|
|
|
Status Finish(UserCollectedProperties* properties) override {
|
|
|
|
*properties =
|
|
|
|
UserCollectedProperties{{"num_delete", ToString(num_deletes_)}};
|
|
|
|
return Status::OK();
|
|
|
|
}
|
|
|
|
|
|
|
|
private:
|
|
|
|
uint32_t num_deletes_ = 0;
|
|
|
|
};
|
|
|
|
|
|
|
|
class CountingDeleteTabPropCollectorFactory
|
|
|
|
: public TablePropertiesCollectorFactory {
|
|
|
|
public:
|
|
|
|
virtual TablePropertiesCollector* CreateTablePropertiesCollector() override {
|
|
|
|
return new CountingDeleteTabPropCollector();
|
|
|
|
}
|
|
|
|
const char* Name() const override {
|
|
|
|
return "CountingDeleteTabPropCollectorFactory";
|
|
|
|
}
|
|
|
|
};
|
|
|
|
|
|
|
|
TEST_F(DBTest, TablePropertiesNeedCompactTest) {
|
|
|
|
Random rnd(301);
|
|
|
|
|
|
|
|
Options options;
|
|
|
|
options.create_if_missing = true;
|
|
|
|
options.write_buffer_size = 4096;
|
|
|
|
options.max_write_buffer_number = 8;
|
|
|
|
options.level0_file_num_compaction_trigger = 2;
|
|
|
|
options.level0_slowdown_writes_trigger = 2;
|
|
|
|
options.level0_stop_writes_trigger = 4;
|
|
|
|
options.target_file_size_base = 2048;
|
|
|
|
options.max_bytes_for_level_base = 10240;
|
|
|
|
options.max_bytes_for_level_multiplier = 4;
|
2015-05-15 22:52:51 +00:00
|
|
|
options.soft_rate_limit = 1.1;
|
2015-06-04 19:03:40 +00:00
|
|
|
options.num_levels = 8;
|
|
|
|
|
|
|
|
std::shared_ptr<TablePropertiesCollectorFactory> collector_factory(
|
|
|
|
new CountingDeleteTabPropCollectorFactory);
|
|
|
|
options.table_properties_collector_factories.resize(1);
|
|
|
|
options.table_properties_collector_factories[0] = collector_factory;
|
|
|
|
|
|
|
|
DestroyAndReopen(options);
|
|
|
|
|
|
|
|
const int kMaxKey = 1000;
|
|
|
|
for (int i = 0; i < kMaxKey; i++) {
|
|
|
|
ASSERT_OK(Put(Key(i), RandomString(&rnd, 102)));
|
|
|
|
ASSERT_OK(Put(Key(kMaxKey + i), RandomString(&rnd, 102)));
|
|
|
|
}
|
|
|
|
Flush();
|
|
|
|
dbfull()->TEST_WaitForCompact();
|
|
|
|
if (NumTableFilesAtLevel(0) == 1) {
|
|
|
|
// Clear Level 0 so that when we later flush a file with deletions,
|
|
|
|
// we don't trigger an organic compaction.
|
|
|
|
ASSERT_OK(Put(Key(0), ""));
|
|
|
|
ASSERT_OK(Put(Key(kMaxKey * 2), ""));
|
|
|
|
Flush();
|
|
|
|
dbfull()->TEST_WaitForCompact();
|
|
|
|
}
|
|
|
|
ASSERT_EQ(NumTableFilesAtLevel(0), 0);
|
|
|
|
|
|
|
|
{
|
|
|
|
int c = 0;
|
|
|
|
std::unique_ptr<Iterator> iter(db_->NewIterator(ReadOptions()));
|
|
|
|
iter->Seek(Key(kMaxKey - 100));
|
|
|
|
while (iter->Valid() && iter->key().compare(Key(kMaxKey + 100)) < 0) {
|
|
|
|
iter->Next();
|
|
|
|
++c;
|
|
|
|
}
|
|
|
|
ASSERT_EQ(c, 200);
|
|
|
|
}
|
|
|
|
|
|
|
|
Delete(Key(0));
|
|
|
|
for (int i = kMaxKey - 100; i < kMaxKey + 100; i++) {
|
|
|
|
Delete(Key(i));
|
|
|
|
}
|
|
|
|
Delete(Key(kMaxKey * 2));
|
|
|
|
|
|
|
|
Flush();
|
|
|
|
dbfull()->TEST_WaitForCompact();
|
|
|
|
|
|
|
|
{
|
|
|
|
SetPerfLevel(kEnableCount);
|
|
|
|
perf_context.Reset();
|
|
|
|
int c = 0;
|
|
|
|
std::unique_ptr<Iterator> iter(db_->NewIterator(ReadOptions()));
|
|
|
|
iter->Seek(Key(kMaxKey - 100));
|
|
|
|
while (iter->Valid() && iter->key().compare(Key(kMaxKey + 100)) < 0) {
|
|
|
|
iter->Next();
++c;  // count keys seen in the deleted range (should be none)
|
|
|
|
}
|
|
|
|
ASSERT_EQ(c, 0);
|
|
|
|
ASSERT_LT(perf_context.internal_delete_skipped_count, 30u);
|
|
|
|
ASSERT_LT(perf_context.internal_key_skipped_count, 30u);
|
|
|
|
SetPerfLevel(kDisable);
|
|
|
|
}
|
|
|
|
}
|
|
|
|
|
Add experimental API MarkForCompaction()
Summary:
Some Mongo+Rocks datasets in Parse's environment are not doing compactions very frequently. During the quiet period (with no IO), we'd like to schedule compactions so that our reads become faster. Also, aggressively compacting during quiet periods helps when write bursts happen. In addition, we also want to compact files that contain deleted key ranges (like old oplog keys).
All of this is currently not possible with CompactRange() because it's single-threaded and blocks all other compactions from happening. Running CompactRange() risks blocking writes because we generate too many Level 0 files before the compaction is over. Stopping writes is very dangerous because writers hold transaction locks. We tried running manual compaction once on Mongo+Rocks and everything fell apart.
MarkForCompaction() solves all of those problems. This is a very light-weight manual compaction. It is lower priority than automatic compactions, which means it shouldn't interfere with the background process keeping the LSM tree clean. However, if no automatic compactions need to be run (or we have extra background threads available), we will start compacting files that are marked for compaction.
Test Plan: added a new unit test
Reviewers: yhchiang, rven, MarkCallaghan, sdong
Reviewed By: sdong
Subscribers: yoshinorim, dhruba, leveldb
Differential Revision: https://reviews.facebook.net/D37083
2015-04-17 23:44:45 +00:00
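The API exercised below is declared in `rocksdb/experimental.h`. A minimal sketch, assuming an open `rocksdb::DB* db` (the key range is illustrative): unlike `CompactRange()`, the call records the hint and returns immediately; the compaction itself runs later at low priority.
```
#include "rocksdb/experimental.h"

// Suggest that the range ["j", "m") be compacted once background
// threads are free; this does not block the caller.
rocksdb::Slice begin("j"), end("m");
rocksdb::Status s =
    rocksdb::experimental::SuggestCompactRange(db, &begin, &end);
```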
|
|
|
TEST_F(DBTest, SuggestCompactRangeTest) {
|
2015-05-09 02:37:02 +00:00
|
|
|
class CompactionFilterFactoryGetContext : public CompactionFilterFactory {
|
|
|
|
public:
|
|
|
|
virtual std::unique_ptr<CompactionFilter> CreateCompactionFilter(
|
2015-05-09 18:03:36 +00:00
|
|
|
const CompactionFilter::Context& context) override {
|
2015-05-09 02:37:02 +00:00
|
|
|
saved_context = context;
|
|
|
|
std::unique_ptr<CompactionFilter> empty_filter;
|
|
|
|
return empty_filter;
|
|
|
|
}
|
|
|
|
const char* Name() const override {
|
|
|
|
return "CompactionFilterFactoryGetContext";
|
|
|
|
}
|
|
|
|
static bool IsManual(CompactionFilterFactory* compaction_filter_factory) {
|
|
|
|
return reinterpret_cast<CompactionFilterFactoryGetContext*>(
|
2015-06-04 19:03:40 +00:00
|
|
|
compaction_filter_factory)->saved_context.is_manual_compaction;
|
2015-05-09 02:37:02 +00:00
|
|
|
}
|
|
|
|
CompactionFilter::Context saved_context;
|
|
|
|
};
|
|
|
|
|
2015-04-17 23:44:45 +00:00
|
|
|
Options options = CurrentOptions();
|
|
|
|
options.compaction_style = kCompactionStyleLevel;
|
2015-05-09 02:37:02 +00:00
|
|
|
options.compaction_filter_factory.reset(
|
|
|
|
new CompactionFilterFactoryGetContext());
|
|
|
|
options.write_buffer_size = 110 << 10;
|
|
|
|
options.level0_file_num_compaction_trigger = 4;
|
2015-04-29 20:35:48 +00:00
|
|
|
options.num_levels = 4;
|
2015-05-09 02:37:02 +00:00
|
|
|
options.compression = kNoCompression;
|
|
|
|
options.max_bytes_for_level_base = 450 << 10;
|
|
|
|
options.target_file_size_base = 98 << 10;
|
|
|
|
options.max_grandparent_overlap_factor = 1 << 20; // inf
|
2015-04-17 23:44:45 +00:00
|
|
|
|
|
|
|
Reopen(options);
|
|
|
|
|
|
|
|
Random rnd(301);
|
|
|
|
|
|
|
|
for (int num = 0; num < 3; num++) {
|
2015-05-09 02:37:02 +00:00
|
|
|
GenerateNewRandomFile(&rnd);
|
2015-04-17 23:44:45 +00:00
|
|
|
}
|
|
|
|
|
2015-05-09 02:37:02 +00:00
|
|
|
GenerateNewRandomFile(&rnd);
|
|
|
|
ASSERT_EQ("0,4", FilesPerLevel(0));
|
|
|
|
ASSERT_TRUE(!CompactionFilterFactoryGetContext::IsManual(
|
|
|
|
options.compaction_filter_factory.get()));
|
2015-04-17 23:44:45 +00:00
|
|
|
|
2015-05-09 02:37:02 +00:00
|
|
|
GenerateNewRandomFile(&rnd);
|
2015-04-17 23:44:45 +00:00
|
|
|
ASSERT_EQ("1,4", FilesPerLevel(0));
|
|
|
|
|
2015-05-09 02:37:02 +00:00
|
|
|
GenerateNewRandomFile(&rnd);
|
|
|
|
ASSERT_EQ("2,4", FilesPerLevel(0));
|
2015-04-17 23:44:45 +00:00
|
|
|
|
2015-05-09 02:37:02 +00:00
|
|
|
GenerateNewRandomFile(&rnd);
|
|
|
|
ASSERT_EQ("3,4", FilesPerLevel(0));
|
2015-04-17 23:44:45 +00:00
|
|
|
|
2015-05-09 02:37:02 +00:00
|
|
|
GenerateNewRandomFile(&rnd);
|
|
|
|
ASSERT_EQ("0,4,4", FilesPerLevel(0));
|
2015-04-17 23:44:45 +00:00
|
|
|
|
2015-05-09 02:37:02 +00:00
|
|
|
GenerateNewRandomFile(&rnd);
|
2015-04-17 23:44:45 +00:00
|
|
|
ASSERT_EQ("1,4,4", FilesPerLevel(0));
|
|
|
|
|
2015-05-09 02:37:02 +00:00
|
|
|
GenerateNewRandomFile(&rnd);
|
|
|
|
ASSERT_EQ("2,4,4", FilesPerLevel(0));
|
2015-04-17 23:44:45 +00:00
|
|
|
|
2015-05-09 02:37:02 +00:00
|
|
|
GenerateNewRandomFile(&rnd);
|
|
|
|
ASSERT_EQ("3,4,4", FilesPerLevel(0));
|
2015-04-17 23:44:45 +00:00
|
|
|
|
2015-05-09 02:37:02 +00:00
|
|
|
GenerateNewRandomFile(&rnd);
|
|
|
|
ASSERT_EQ("0,4,8", FilesPerLevel(0));
|
2015-04-17 23:44:45 +00:00
|
|
|
|
2015-05-09 02:37:02 +00:00
|
|
|
GenerateNewRandomFile(&rnd);
|
2015-04-17 23:44:45 +00:00
|
|
|
ASSERT_EQ("1,4,8", FilesPerLevel(0));
|
|
|
|
|
|
|
|
// compact it three times
|
|
|
|
for (int i = 0; i < 3; ++i) {
|
|
|
|
ASSERT_OK(experimental::SuggestCompactRange(db_, nullptr, nullptr));
|
|
|
|
dbfull()->TEST_WaitForCompact();
|
|
|
|
}
|
|
|
|
|
|
|
|
ASSERT_EQ("0,0,13", FilesPerLevel(0));
|
|
|
|
|
2015-05-09 02:37:02 +00:00
|
|
|
GenerateNewRandomFile(&rnd);
|
2015-04-17 23:44:45 +00:00
|
|
|
ASSERT_EQ("1,0,13", FilesPerLevel(0));
|
|
|
|
|
|
|
|
// non-overlapping with the file on level 0
|
|
|
|
Slice start("a"), end("b");
|
|
|
|
ASSERT_OK(experimental::SuggestCompactRange(db_, &start, &end));
|
|
|
|
dbfull()->TEST_WaitForCompact();
|
|
|
|
|
|
|
|
// should not compact the level 0 file
|
|
|
|
ASSERT_EQ("1,0,13", FilesPerLevel(0));
|
|
|
|
|
|
|
|
start = Slice("j");
|
|
|
|
end = Slice("m");
|
|
|
|
ASSERT_OK(experimental::SuggestCompactRange(db_, &start, &end));
|
|
|
|
dbfull()->TEST_WaitForCompact();
|
2015-05-09 02:37:02 +00:00
|
|
|
ASSERT_TRUE(CompactionFilterFactoryGetContext::IsManual(
|
|
|
|
options.compaction_filter_factory.get()));
|
  // now it should compact the level 0 file
  ASSERT_EQ("0,1,13", FilesPerLevel(0));
}

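For readers who want to exercise this API outside the test harness, here is a minimal, hedged sketch of suggesting a range compaction on a standalone DB. The path, keys, and single-flush setup are assumptions made up for illustration; `SuggestCompactRange` is the experimental API exercised by the test above, declared (in this era of the tree) in `rocksdb/experimental.h`.

#include <cassert>
#include <string>

#include "rocksdb/db.h"
#include "rocksdb/experimental.h"

// Hedged sketch: mark every file overlapping ["a", "m") for compaction.
// Marked files are picked up later, at lower priority than automatic
// compactions, so this call returns quickly and does not block writers.
int main() {
  rocksdb::DB* db;
  rocksdb::Options options;
  options.create_if_missing = true;
  assert(rocksdb::DB::Open(options, "/tmp/suggest_compact_demo", &db).ok());
  for (char c = 'a'; c <= 'm'; ++c) {
    db->Put(rocksdb::WriteOptions(), std::string(1, c), "value");
  }
  db->Flush(rocksdb::FlushOptions());  // produce at least one SST file
  rocksdb::Slice begin("a"), end("m");
  assert(rocksdb::experimental::SuggestCompactRange(db, &begin, &end).ok());
  delete db;
  return 0;
}
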
Implement DB::PromoteL0 method
Summary:
This diff implements a new `DB` method `PromoteL0` which moves all files in L0
to a given level, skipping compaction, provided that the files have disjoint
ranges and all levels up to the target level are empty.
This method provides finer-grained control over trivial compactions, and it is
useful for bulk-loading pre-sorted keys. Unlike D34797, it does not change the
semantics of an existing operation, a change which could break existing code.
PromoteL0 is designed to work well in combination with the proposed
`GetSstFileWriter`/`AddFile` interface, making it possible to "design" the
level structure by populating one level at a time. Such fine-grained control
can be very useful for static or mostly-static databases.
Test Plan: `make check`
Reviewers: IslamAbdelRahman, philipp, MarkCallaghan, yhchiang, igor, sdong
Reviewed By: sdong
Subscribers: dhruba
Differential Revision: https://reviews.facebook.net/D37107

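Before the unit test, a hedged sketch of the bulk-load pattern the summary describes. The path and keys are invented for the example; the `PromoteL0(db, column_family, target_level)` call matches the signature used by the tests below, and the failure modes are exercised by PromoteL0Failure further down.

#include <cassert>
#include "rocksdb/db.h"
#include "rocksdb/experimental.h"

// Hedged sketch: produce two non-overlapping L0 files, then move them to L2
// without rewriting any data. PromoteL0 returns InvalidArgument if the L0
// files overlap or if any level up to the target level is non-empty.
int main() {
  rocksdb::DB* db;
  rocksdb::Options options;
  options.create_if_missing = true;
  options.disable_auto_compactions = true;  // keep files where we put them
  assert(rocksdb::DB::Open(options, "/tmp/promote_l0_demo", &db).ok());
  db->Put(rocksdb::WriteOptions(), "a", "1");
  db->Flush(rocksdb::FlushOptions());  // first L0 file, key range ["a", "a"]
  db->Put(rocksdb::WriteOptions(), "b", "2");
  db->Flush(rocksdb::FlushOptions());  // second L0 file, key range ["b", "b"]
  assert(rocksdb::experimental::PromoteL0(db, db->DefaultColumnFamily(), 2)
             .ok());
  delete db;
  return 0;
}
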
TEST_F(DBTest, PromoteL0) {
  Options options = CurrentOptions();
  options.disable_auto_compactions = true;
  options.write_buffer_size = 10 * 1024 * 1024;
  DestroyAndReopen(options);

  // non overlapping ranges
  std::vector<std::pair<int32_t, int32_t>> ranges = {
      {81, 160}, {0, 80}, {161, 240}, {241, 320}};

  int32_t value_size = 10 * 1024;  // 10 KB

  Random rnd(301);
  std::map<int32_t, std::string> values;
  for (const auto& range : ranges) {
    for (int32_t j = range.first; j < range.second; j++) {
      values[j] = RandomString(&rnd, value_size);
      ASSERT_OK(Put(Key(j), values[j]));
    }
    ASSERT_OK(Flush());
  }

  int32_t level0_files = NumTableFilesAtLevel(0, 0);
  ASSERT_EQ(level0_files, ranges.size());
  ASSERT_EQ(NumTableFilesAtLevel(1, 0), 0);  // No files in L1

  // Promote L0 level to L2.
  ASSERT_OK(experimental::PromoteL0(db_, db_->DefaultColumnFamily(), 2));
  // We expect that all the files were trivially moved from L0 to L2
  ASSERT_EQ(NumTableFilesAtLevel(0, 0), 0);
  ASSERT_EQ(NumTableFilesAtLevel(2, 0), level0_files);

  for (const auto& kv : values) {
    ASSERT_EQ(Get(Key(kv.first)), kv.second);
  }
}

TEST_F(DBTest, PromoteL0Failure) {
  Options options = CurrentOptions();
  options.disable_auto_compactions = true;
  options.write_buffer_size = 10 * 1024 * 1024;
  DestroyAndReopen(options);

  // Produce two L0 files with overlapping ranges.
  ASSERT_OK(Put(Key(0), ""));
  ASSERT_OK(Put(Key(3), ""));
  ASSERT_OK(Flush());
  ASSERT_OK(Put(Key(1), ""));
  ASSERT_OK(Flush());

  Status status;
  // Fails because L0 has overlapping files.
  status = experimental::PromoteL0(db_, db_->DefaultColumnFamily());
  ASSERT_TRUE(status.IsInvalidArgument());

  ASSERT_OK(db_->CompactRange(CompactRangeOptions(), nullptr, nullptr));
  // Now there is a file in L1.
  ASSERT_GE(NumTableFilesAtLevel(1, 0), 1);

  ASSERT_OK(Put(Key(5), ""));
  ASSERT_OK(Flush());
  // Fails because L1 is non-empty.
  status = experimental::PromoteL0(db_, db_->DefaultColumnFamily());
  ASSERT_TRUE(status.IsInvalidArgument());
}

// Github issue #596
TEST_F(DBTest, HugeNumberOfLevels) {
  Options options = CurrentOptions();
  options.write_buffer_size = 2 * 1024 * 1024;         // 2MB
  options.max_bytes_for_level_base = 2 * 1024 * 1024;  // 2MB
  options.num_levels = 12;
  options.max_background_compactions = 10;
  options.max_bytes_for_level_multiplier = 2;
  options.level_compaction_dynamic_level_bytes = true;
  DestroyAndReopen(options);

  Random rnd(301);
  for (int i = 0; i < 300000; ++i) {
    ASSERT_OK(Put(Key(i), RandomString(&rnd, 1024)));
  }

  ASSERT_OK(db_->CompactRange(CompactRangeOptions(), nullptr, nullptr));
}

// Github issue #595
// Large write batch with column families
TEST_F(DBTest, LargeBatchWithColumnFamilies) {
  Options options;
  options.env = env_;
  options = CurrentOptions(options);
  options.write_buffer_size = 100000;  // Small write buffer
  CreateAndReopenWithCF({"pikachu"}, options);
  int64_t j = 0;
  for (int i = 0; i < 5; i++) {
    for (int pass = 1; pass <= 3; pass++) {
      WriteBatch batch;
      size_t write_size = 1024 * 1024 * (5 + i);
      fprintf(stderr, "prepare: %ld MB, pass:%d\n",
              static_cast<long>(write_size / 1024 / 1024), pass);
      for (;;) {
        std::string data(3000, j++ % 127 + 20);
        data += ToString(j);
        batch.Put(handles_[0], Slice(data), Slice(data));
        if (batch.GetDataSize() > write_size) {
          break;
        }
      }
      fprintf(stderr, "write: %ld MB\n",
              static_cast<long>(batch.GetDataSize() / 1024 / 1024));
      ASSERT_OK(dbfull()->Write(WriteOptions(), &batch));
      fprintf(stderr, "done\n");
    }
  }
  // make sure we can re-open it.
  ASSERT_OK(TryReopenWithColumnFamilies({"default", "pikachu"}, options));
}

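The test above sizes batches by polling `WriteBatch::GetDataSize()`; a condensed, hedged sketch of that batching pattern follows. The helper name, loop bounds, and threshold are invented for illustration.

#include <string>
#include "rocksdb/db.h"
#include "rocksdb/write_batch.h"

// Hedged sketch: accumulate Puts into one WriteBatch until its serialized
// size crosses a threshold, then commit the whole chunk atomically.
rocksdb::Status WriteInBatches(rocksdb::DB* db, size_t max_batch_bytes) {
  rocksdb::WriteBatch batch;
  for (int i = 0; i < 100000; ++i) {
    std::string kv = "key" + std::to_string(i);
    batch.Put(kv, kv);
    if (batch.GetDataSize() > max_batch_bytes) {
      rocksdb::Status s = db->Write(rocksdb::WriteOptions(), &batch);
      if (!s.ok()) return s;
      batch.Clear();  // reuse the batch for the next chunk
    }
  }
  return db->Write(rocksdb::WriteOptions(), &batch);  // commit the remainder
}

A caller might invoke this as `WriteInBatches(db, 4 << 20)` to commit in roughly 4 MB chunks.
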
// Make sure that Flushes can proceed in parallel with CompactRange()
TEST_F(DBTest, FlushesInParallelWithCompactRange) {
  // iter == 0 -- leveled
  // iter == 1 -- leveled, but throw in a flush between two levels compacting
  // iter == 2 -- universal
  for (int iter = 0; iter < 3; ++iter) {
    Options options = CurrentOptions();
    if (iter < 2) {
      options.compaction_style = kCompactionStyleLevel;
    } else {
      options.compaction_style = kCompactionStyleUniversal;
    }
    options.write_buffer_size = 110 << 10;
    options.level0_file_num_compaction_trigger = 4;
    options.num_levels = 4;
    options.compression = kNoCompression;
    options.max_bytes_for_level_base = 450 << 10;
    options.target_file_size_base = 98 << 10;
    options.max_write_buffer_number = 2;

    DestroyAndReopen(options);

    Random rnd(301);
    for (int num = 0; num < 14; num++) {
      GenerateNewRandomFile(&rnd);
    }

    if (iter == 1) {
      rocksdb::SyncPoint::GetInstance()->LoadDependency(
          {{"DBImpl::RunManualCompaction()::1",
            "DBTest::FlushesInParallelWithCompactRange:1"},
           {"DBTest::FlushesInParallelWithCompactRange:2",
            "DBImpl::RunManualCompaction()::2"}});
    } else {
      rocksdb::SyncPoint::GetInstance()->LoadDependency(
          {{"CompactionJob::Run():Start",
            "DBTest::FlushesInParallelWithCompactRange:1"},
           {"DBTest::FlushesInParallelWithCompactRange:2",
            "CompactionJob::Run():End"}});
    }
    rocksdb::SyncPoint::GetInstance()->EnableProcessing();

    std::vector<std::thread> threads;
    threads.emplace_back([&]() { Compact("a", "z"); });

    TEST_SYNC_POINT("DBTest::FlushesInParallelWithCompactRange:1");

    // this has to start a flush. if flushes are blocked, this will try to
    // create 3 memtables, and that will fail because max_write_buffer_number
    // is 2
    for (int num = 0; num < 3; num++) {
      GenerateNewRandomFile(&rnd, /* nowait */ true);
    }

    TEST_SYNC_POINT("DBTest::FlushesInParallelWithCompactRange:2");

    for (auto& t : threads) {
      t.join();
    }
    rocksdb::SyncPoint::GetInstance()->DisableProcessing();
  }
}

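For context on the machinery above: `LoadDependency` takes (predecessor, successor) pairs, and a `TEST_SYNC_POINT` whose name appears as a successor blocks until the named predecessor marker has executed in some thread. A hedged sketch with made-up marker names ("Demo:A", "Demo:B") follows; the utility lives in the in-tree `util/sync_point.h` and compiles to no-ops in release builds.

#include <thread>
#include "util/sync_point.h"

// Hedged sketch: guarantee that the worker reaches Demo:A before the main
// thread is allowed past Demo:B, regardless of thread scheduling.
void DemoOrdering() {
  rocksdb::SyncPoint::GetInstance()->LoadDependency(
      {{"Demo:A", "Demo:B"}});  // Demo:B waits for Demo:A
  rocksdb::SyncPoint::GetInstance()->EnableProcessing();

  std::thread worker([] { TEST_SYNC_POINT("Demo:A"); });
  TEST_SYNC_POINT("Demo:B");  // blocks until the worker has passed Demo:A
  worker.join();

  rocksdb::SyncPoint::GetInstance()->DisableProcessing();
}
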
TEST_F(DBTest, DelayedWriteRate) {
  Options options;
  options.env = env_;
  env_->no_sleep_ = true;
  options = CurrentOptions(options);
  options.write_buffer_size = 100000;  // Small write buffer
  options.max_write_buffer_number = 256;
  options.disable_auto_compactions = true;
  options.level0_file_num_compaction_trigger = 3;
  options.level0_slowdown_writes_trigger = 3;
  options.level0_stop_writes_trigger = 999999;
  options.delayed_write_rate = 200000;  // About 200KB/s limited rate

  CreateAndReopenWithCF({"pikachu"}, options);

  for (int i = 0; i < 3; i++) {
    Put(Key(i), std::string(10000, 'x'));
    Flush();
  }

  // These writes will be slowed down to 1KB/s
  size_t estimated_total_size = 0;
  Random rnd(301);
  for (int i = 0; i < 3000; i++) {
    auto rand_num = rnd.Uniform(20);
    // Spread the sizes over a wider range.
    size_t entry_size = rand_num * rand_num * rand_num;
    WriteOptions wo;
    Put(Key(i), std::string(entry_size, 'x'), wo);
    estimated_total_size += entry_size + 20;
    // Occasionally sleep a while
    if (rnd.Uniform(20) == 6) {
      env_->SleepForMicroseconds(2666);
    }
  }
  // Expected total delay: bytes written / delayed_write_rate, in microseconds.
  uint64_t estimated_sleep_time =
      estimated_total_size / options.delayed_write_rate * 1000000U;
  ASSERT_GT(env_->addon_time_.load(), estimated_sleep_time * 0.8);
  ASSERT_LT(env_->addon_time_.load(), estimated_sleep_time * 1.1);

  env_->no_sleep_ = false;
  rocksdb::SyncPoint::GetInstance()->DisableProcessing();
}

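The assertions above rely on a simple model: expected total delay is roughly bytes written divided by delayed_write_rate, in seconds. On the application side the mechanism is just a pair of options knobs; a hedged sketch follows, where the trigger values and the 16 MB/s budget are arbitrary examples, not recommendations.

#include "rocksdb/options.h"

// Hedged sketch: once L0 reaches level0_slowdown_writes_trigger files,
// writes are delayed to roughly delayed_write_rate bytes/sec instead of
// being stopped outright at level0_stop_writes_trigger.
rocksdb::Options MakeThrottledOptions() {
  rocksdb::Options options;
  options.level0_slowdown_writes_trigger = 8;     // start delaying here
  options.level0_stop_writes_trigger = 20;        // hard stop much later
  options.delayed_write_rate = 16 * 1024 * 1024;  // ~16 MB/s while delayed
  return options;
}
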
TEST_F(DBTest, SoftLimit) {
  Options options;
  options.env = env_;
  options = CurrentOptions(options);
  options.write_buffer_size = 100000;  // Small write buffer
  options.max_write_buffer_number = 256;
  options.level0_file_num_compaction_trigger = 3;
  options.level0_slowdown_writes_trigger = 3;
  options.level0_stop_writes_trigger = 999999;
  options.delayed_write_rate = 200000;  // About 200KB/s limited rate
  options.soft_rate_limit = 1.1;
  options.target_file_size_base = 99999999;  // All into one file
  options.max_bytes_for_level_base = 50000;
  options.compression = kNoCompression;

  Reopen(options);
  Put(Key(0), "");

  // Only allow two compactions
  port::Mutex mut;
  port::CondVar cv(&mut);
  std::atomic<int> compaction_cnt(0);
  rocksdb::SyncPoint::GetInstance()->SetCallBack(
      "VersionSet::LogAndApply:WriteManifest", [&](void* arg) {
        // Three flushes and the first compaction,
        // three flushes and the second compaction go through.
        MutexLock l(&mut);
        while (compaction_cnt.load() >= 8) {
          cv.Wait();
        }
        compaction_cnt.fetch_add(1);
      });

  std::atomic<int> sleep_count(0);
  rocksdb::SyncPoint::GetInstance()->SetCallBack(
      "DBImpl::DelayWrite:Sleep",
      [&](void* arg) { sleep_count.fetch_add(1); });
  rocksdb::SyncPoint::GetInstance()->EnableProcessing();

  for (int i = 0; i < 3; i++) {
    Put(Key(i), std::string(5000, 'x'));
    Put(Key(100 - i), std::string(5000, 'x'));
    Flush();
  }
  while (compaction_cnt.load() < 4 || NumTableFilesAtLevel(0) > 0) {
    env_->SleepForMicroseconds(1000);
  }
  // Now there is one L1 file, but it doesn't trigger soft_rate_limit
  ASSERT_EQ(NumTableFilesAtLevel(1), 1);
  ASSERT_EQ(sleep_count.load(), 0);

  for (int i = 0; i < 3; i++) {
    Put(Key(10 + i), std::string(5000, 'x'));
    Put(Key(90 - i), std::string(5000, 'x'));
    Flush();
  }
  while (compaction_cnt.load() < 8 || NumTableFilesAtLevel(0) > 0) {
    env_->SleepForMicroseconds(1000);
  }
  ASSERT_EQ(NumTableFilesAtLevel(1), 1);
  ASSERT_EQ(sleep_count.load(), 0);

  // Slowdown is triggered now
  for (int i = 0; i < 10; i++) {
    Put(Key(i), std::string(100, 'x'));
  }
  ASSERT_GT(sleep_count.load(), 0);

  {
    MutexLock l(&mut);
    compaction_cnt.store(7);
    cv.SignalAll();
  }
  while (NumTableFilesAtLevel(1) > 0) {
    env_->SleepForMicroseconds(1000);
  }

  // Slowdown is not triggered any more
  sleep_count.store(0);
  for (int i = 0; i < 10; i++) {
    Put(Key(i), std::string(100, 'x'));
  }
  ASSERT_EQ(sleep_count.load(), 0);

  // shrink level base so L2 will hit soft limit easier
  ASSERT_OK(dbfull()->SetOptions({
      {"max_bytes_for_level_base", "5000"},
  }));
  compaction_cnt.store(7);
  Flush();

  while (NumTableFilesAtLevel(0) == 0) {
    env_->SleepForMicroseconds(1000);
  }

  // Slowdown is triggered now
  for (int i = 0; i < 10; i++) {
    Put(Key(i), std::string(100, 'x'));
  }
  ASSERT_GT(sleep_count.load(), 0);

  {
    MutexLock l(&mut);
    compaction_cnt.store(7);
    cv.SignalAll();
  }

  while (NumTableFilesAtLevel(2) != 0) {
    env_->SleepForMicroseconds(1000);
  }

  // Slowdown is not triggered anymore
  sleep_count.store(0);
  for (int i = 0; i < 10; i++) {
    Put(Key(i), std::string(100, 'x'));
  }
  ASSERT_EQ(sleep_count.load(), 0);
  rocksdb::SyncPoint::GetInstance()->DisableProcessing();
}

TEST_F(DBTest, FailWhenCompressionNotSupportedTest) {
  CompressionType compressions[] = {kZlibCompression, kBZip2Compression,
                                    kLZ4Compression, kLZ4HCCompression};
  for (int iter = 0; iter < 4; ++iter) {
    if (!CompressionTypeSupported(compressions[iter])) {
      // not supported, we should fail the Open()
      Options options = CurrentOptions();
      options.compression = compressions[iter];
      ASSERT_TRUE(!TryReopen(options).ok());
      // Check that CreateColumnFamily also fails
      options.compression = kNoCompression;
      ASSERT_OK(TryReopen(options));
      ColumnFamilyOptions cf_options(options);
      cf_options.compression = compressions[iter];
      ColumnFamilyHandle* handle;
      ASSERT_TRUE(!db_->CreateColumnFamily(cf_options, "name", &handle).ok());
    }
  }
}

TEST_F(DBTest, RowCache) {
  Options options = CurrentOptions();
  options.statistics = rocksdb::CreateDBStatistics();
  options.row_cache = NewLRUCache(8192);
  DestroyAndReopen(options);

  ASSERT_OK(Put("foo", "bar"));
  ASSERT_OK(Flush());

  ASSERT_EQ(TestGetTickerCount(options, ROW_CACHE_HIT), 0);
  ASSERT_EQ(TestGetTickerCount(options, ROW_CACHE_MISS), 0);
  ASSERT_EQ(Get("foo"), "bar");
  ASSERT_EQ(TestGetTickerCount(options, ROW_CACHE_HIT), 0);
  ASSERT_EQ(TestGetTickerCount(options, ROW_CACHE_MISS), 1);
  ASSERT_EQ(Get("foo"), "bar");
  ASSERT_EQ(TestGetTickerCount(options, ROW_CACHE_HIT), 1);
  ASSERT_EQ(TestGetTickerCount(options, ROW_CACHE_MISS), 1);
}

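The ticker pattern above (a miss on the first Get() after the flush, a hit on the repeat) is the expected row-cache behavior: it caches the results of point lookups against on-disk data. A hedged application-side sketch of enabling it follows; the cache size and path are arbitrary choices for the example.

#include <cassert>
#include <string>
#include "rocksdb/cache.h"
#include "rocksdb/db.h"

// Hedged sketch: enable the row cache so repeated point lookups of the same
// key can be served without going back to the block cache or SST files.
int main() {
  rocksdb::Options options;
  options.create_if_missing = true;
  options.row_cache = rocksdb::NewLRUCache(8 * 1024 * 1024);  // 8 MB
  rocksdb::DB* db;
  assert(rocksdb::DB::Open(options, "/tmp/row_cache_demo", &db).ok());
  db->Put(rocksdb::WriteOptions(), "foo", "bar");
  std::string value;
  db->Get(rocksdb::ReadOptions(), "foo", &value);  // may populate row cache
  db->Get(rocksdb::ReadOptions(), "foo", &value);  // likely a row-cache hit
  assert(value == "bar");
  delete db;
  return 0;
}
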
// TODO(3.13): fix the issue of Seek() + Prev() which might not necessarily
// return the biggest key which is smaller than the seek key.

Fix a comparison in DBIter::FindPrevUserKey()
Summary:
When the seek target is a merge key (`kTypeMerge`), `DBIter::FindNextUserEntry()`
advances the underlying iterator _past_ the current key (`saved_key_`); see
`MergeValuesNewToOld()`. However, `FindPrevUserKey()` assumes that `iter_`
points to an entry with the same user key as `saved_key_`. As a result,
`it->Seek(key) && it->Prev()` can cause the iterator to be positioned at the
_next_, instead of the previous, entry (the new test, written by @lovro,
reproduces the bug).
This diff changes `FindPrevUserKey()` to also skip keys that are _greater_
than `saved_key_`.
Test Plan: db_test
Reviewers: igor, sdong
Reviewed By: sdong
Subscribers: leveldb, dhruba, lovro
Differential Revision: https://reviews.facebook.net/D40791

TEST_F(DBTest, PrevAfterMerge) {
  Options options;
  options.create_if_missing = true;
  options.merge_operator = MergeOperators::CreatePutOperator();
  DestroyAndReopen(options);

  // write three entries with different keys using Merge()
  WriteOptions wopts;
  db_->Merge(wopts, "1", "data1");
  db_->Merge(wopts, "2", "data2");
  db_->Merge(wopts, "3", "data3");

  std::unique_ptr<Iterator> it(db_->NewIterator(ReadOptions()));

  it->Seek("2");
  ASSERT_TRUE(it->Valid());
  ASSERT_EQ("2", it->key().ToString());

  it->Prev();
  ASSERT_TRUE(it->Valid());
  ASSERT_EQ("1", it->key().ToString());
}

TEST_F(DBTest, DeletingOldWalAfterDrop) {
  rocksdb::SyncPoint::GetInstance()->LoadDependency(
      {{"Test:AllowFlushes", "DBImpl::BGWorkFlush"},
       {"DBImpl::BGWorkFlush:done", "Test:WaitForFlush"}});
  rocksdb::SyncPoint::GetInstance()->ClearTrace();

  rocksdb::SyncPoint::GetInstance()->DisableProcessing();
  Options options = CurrentOptions();
  options.max_total_wal_size = 8192;
  options.compression = kNoCompression;
  options.write_buffer_size = 1 << 20;
  options.level0_file_num_compaction_trigger = (1 << 30);
  options.level0_slowdown_writes_trigger = (1 << 30);
  options.level0_stop_writes_trigger = (1 << 30);
  options.disable_auto_compactions = true;
  DestroyAndReopen(options);
  rocksdb::SyncPoint::GetInstance()->EnableProcessing();

  CreateColumnFamilies({"cf1", "cf2"}, options);
  ASSERT_OK(Put(0, "key1", DummyString(8192)));
  ASSERT_OK(Put(0, "key2", DummyString(8192)));
  // the oldest wal should now be getting_flushed
  ASSERT_OK(db_->DropColumnFamily(handles_[0]));
  // all flushes should now do nothing because their CF is dropped
  TEST_SYNC_POINT("Test:AllowFlushes");
  TEST_SYNC_POINT("Test:WaitForFlush");
  uint64_t lognum1 = dbfull()->TEST_LogfileNumber();
  ASSERT_OK(Put(1, "key3", DummyString(8192)));
  ASSERT_OK(Put(1, "key4", DummyString(8192)));
  // new wal should have been created
  uint64_t lognum2 = dbfull()->TEST_LogfileNumber();
  EXPECT_GT(lognum2, lognum1);
}

TEST_F(DBTest, RateLimitedDelete) {
  rocksdb::SyncPoint::GetInstance()->LoadDependency({
      {"DBTest::RateLimitedDelete:1",
       "DeleteSchedulerImpl::BackgroundEmptyTrash"},
  });
  rocksdb::SyncPoint::GetInstance()->DisableProcessing();

  Options options = CurrentOptions();
  options.disable_auto_compactions = true;
  env_->no_sleep_ = true;
  options.env = env_;

  std::string trash_dir = test::TmpDir(env_) + "/trash";
  int64_t rate_bytes_per_sec = 1024 * 10;  // 10 KB / sec
  Status s;
  options.delete_scheduler.reset(NewDeleteScheduler(
      env_, trash_dir, rate_bytes_per_sec, nullptr, false, &s));
  ASSERT_OK(s);

  Destroy(last_options_);
  rocksdb::SyncPoint::GetInstance()->EnableProcessing();
  ASSERT_OK(TryReopen(options));
  // Create 4 files in L0
  for (char v = 'a'; v <= 'd'; v++) {
    ASSERT_OK(Put("Key2", DummyString(1024, v)));
    ASSERT_OK(Put("Key3", DummyString(1024, v)));
    ASSERT_OK(Put("Key4", DummyString(1024, v)));
    ASSERT_OK(Put("Key1", DummyString(1024, v)));
    ASSERT_OK(Put("Key4", DummyString(1024, v)));
    ASSERT_OK(Flush());
  }
  // We created 4 sst files in L0
  ASSERT_EQ("4", FilesPerLevel(0));

  uint64_t total_files_size = 0;
  std::vector<LiveFileMetaData> metadata;
  db_->GetLiveFilesMetaData(&metadata);
  for (const auto& meta : metadata) {
    total_files_size += meta.size;
  }

  // Compaction will move the 4 files in L0 to trash and create 1 L1 file
  ASSERT_OK(db_->CompactRange(CompactRangeOptions(), nullptr, nullptr));
  ASSERT_EQ("0,1", FilesPerLevel(0));

  // Hold BackgroundEmptyTrash
  TEST_SYNC_POINT("DBTest::RateLimitedDelete:1");

  uint64_t delete_start_time = env_->NowMicros();
  reinterpret_cast<DeleteSchedulerImpl*>(options.delete_scheduler.get())
      ->TEST_WaitForEmptyTrash();
  uint64_t time_spent_deleting = env_->NowMicros() - delete_start_time;
  // Expected time to purge the trash: total bytes / rate, in microseconds.
  uint64_t expected_delete_time =
      ((total_files_size * 1000000) / rate_bytes_per_sec);
  ASSERT_GT(time_spent_deleting, expected_delete_time * 0.9);
  ASSERT_LT(time_spent_deleting, expected_delete_time * 1.1);
  printf("Delete time = %" PRIu64 ", Expected delete time = %" PRIu64
         ", Ratio %f\n",
         time_spent_deleting, expected_delete_time,
         static_cast<double>(time_spent_deleting) / expected_delete_time);

  env_->no_sleep_ = false;
  rocksdb::SyncPoint::GetInstance()->DisableProcessing();
}

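For an application, the same throttling can be configured when opening the DB. A hedged sketch based on the `NewDeleteScheduler` factory and argument order used by the tests here; the paths and the 1 MB/s budget are arbitrary, and the trailing arguments are believed to be the info log and whether to purge any pre-existing trash.

#include <cassert>
#include "rocksdb/db.h"
#include "rocksdb/delete_scheduler.h"
#include "rocksdb/env.h"

// Hedged sketch: cap file-deletion throughput at ~1 MB/s so dropping large
// obsolete SST files does not saturate the disk.
int main() {
  rocksdb::Env* env = rocksdb::Env::Default();
  rocksdb::Status s;
  rocksdb::Options options;
  options.create_if_missing = true;
  options.delete_scheduler.reset(rocksdb::NewDeleteScheduler(
      env, "/tmp/delete_demo_trash", 1024 * 1024 /* bytes per sec */,
      nullptr /* info_log */, false /* delete existing trash */, &s));
  assert(s.ok());
  rocksdb::DB* db;
  assert(rocksdb::DB::Open(options, "/tmp/delete_demo", &db).ok());
  // ... normal reads/writes; obsolete-file deletions are now rate limited.
  delete db;
  return 0;
}
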
// Create a DB with 2 db_paths and generate multiple files in both paths
// using CompactRangeOptions. Make sure that files deleted from the first
// db_path went through the DeleteScheduler and files in the second path
// did not.
TEST_F(DBTest, DeleteSchedulerMultipleDBPaths) {
  int bg_delete_file = 0;
  rocksdb::SyncPoint::GetInstance()->SetCallBack(
      "DeleteSchedulerImpl::DeleteTrashFile:DeleteFile",
      [&](void* arg) { bg_delete_file++; });
  rocksdb::SyncPoint::GetInstance()->EnableProcessing();

  Options options = CurrentOptions();
  options.disable_auto_compactions = true;
  options.db_paths.emplace_back(dbname_, 1024 * 100);
  options.db_paths.emplace_back(dbname_ + "_2", 1024 * 100);
  env_->no_sleep_ = true;
  options.env = env_;

  std::string trash_dir = test::TmpDir(env_) + "/trash";
  int64_t rate_bytes_per_sec = 1024 * 1024;  // 1 MB / sec
  Status s;
  options.delete_scheduler.reset(NewDeleteScheduler(
      env_, trash_dir, rate_bytes_per_sec, nullptr, false, &s));
  ASSERT_OK(s);

  DestroyAndReopen(options);

  // Create 4 files in L0
  for (int i = 0; i < 4; i++) {
    ASSERT_OK(Put("Key" + ToString(i), DummyString(1024, 'A')));
    ASSERT_OK(Flush());
  }
  // We created 4 sst files in L0
  ASSERT_EQ("4", FilesPerLevel(0));
  // Compaction will delete files from L0 in first db path and generate a new
  // file in L1 in second db path
  CompactRangeOptions compact_options;
  compact_options.target_path_id = 1;
  Slice begin("Key0");
  Slice end("Key3");
  ASSERT_OK(db_->CompactRange(compact_options, &begin, &end));
  ASSERT_EQ("0,1", FilesPerLevel(0));

  // Create 4 files in L0
  for (int i = 4; i < 8; i++) {
    ASSERT_OK(Put("Key" + ToString(i), DummyString(1024, 'B')));
    ASSERT_OK(Flush());
  }
  ASSERT_EQ("4,1", FilesPerLevel(0));

  // Compaction will delete files from L0 in first db path and generate a new
  // file in L1 in second db path
  begin = "Key4";
  end = "Key7";
  ASSERT_OK(db_->CompactRange(compact_options, &begin, &end));
  ASSERT_EQ("0,2", FilesPerLevel(0));

  reinterpret_cast<DeleteSchedulerImpl*>(options.delete_scheduler.get())
      ->TEST_WaitForEmptyTrash();
  ASSERT_EQ(bg_delete_file, 8);

  compact_options.bottommost_level_compaction =
      BottommostLevelCompaction::kForce;
  ASSERT_OK(db_->CompactRange(compact_options, nullptr, nullptr));
  ASSERT_EQ("0,1", FilesPerLevel(0));

  reinterpret_cast<DeleteSchedulerImpl*>(options.delete_scheduler.get())
      ->TEST_WaitForEmptyTrash();
  ASSERT_EQ(bg_delete_file, 8);

  env_->no_sleep_ = false;
  rocksdb::SyncPoint::GetInstance()->DisableProcessing();
}

[wal changes 3/3] method in DB to sync WAL without blocking writers
Summary:
As the title says: this adds a method to DB to sync the WAL without blocking
writers. We really need this feature.
Previous diff D40899 has most of the changes to make this possible; this diff
just adds the method.
Test Plan: `make check`; the new test fails without this diff; ran with ASAN,
TSAN and valgrind.
Reviewers: igor, rven, IslamAbdelRahman, anthony, kradhakrishnan, tnovak, yhchiang, sdong
Reviewed By: sdong
Subscribers: MarkCallaghan, maykov, hermanlee4, yoshinorim, tnovak, dhruba
Differential Revision: https://reviews.facebook.net/D40905

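A hedged sketch of how an application might use the new method; the path and keys are invented, and the NotSupported tolerance mirrors the UnsupportedManualSync test below.

#include <cassert>
#include "rocksdb/db.h"

// Hedged sketch: issue fast, unsynced writes to the WAL, then make all of
// them durable with one SyncWAL() call that does not block other writers.
int main() {
  rocksdb::DB* db;
  rocksdb::Options options;
  options.create_if_missing = true;
  assert(rocksdb::DB::Open(options, "/tmp/syncwal_demo", &db).ok());
  rocksdb::WriteOptions wo;
  wo.sync = false;  // no per-write fsync; data lands in the WAL buffer
  db->Put(wo, "k1", "v1");
  db->Put(wo, "k2", "v2");
  // Sync everything written to the WAL so far.
  rocksdb::Status s = db->SyncWAL();
  assert(s.ok() || s.IsNotSupported());  // some Envs cannot sync safely
  delete db;
  return 0;
}
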
TEST_F(DBTest, UnsupportedManualSync) {
  DestroyAndReopen(CurrentOptions());
  env_->is_wal_sync_thread_safe_.store(false);
  Status s = db_->SyncWAL();
  ASSERT_TRUE(s.IsNotSupported());
}

INSTANTIATE_TEST_CASE_P(DBTestWithParam, DBTestWithParam,
                        ::testing::Values(1, 4));

}  // namespace rocksdb

#endif

Make Compaction class easier to use
Summary:
The goal of this diff is to make the Compaction class easier to use. This
should also make new compaction algorithms easier to write (like CompactFiles
from @yhchiang and dynamic leveled and multi-leveled universal from @sdong).
Here are a couple of things demonstrating that the Compaction class is hard
to use:
1. we have two constructors of the Compaction class
2. there's this thing called grandparents_, but it appears to only be set up
for leveled compaction and not for CompactFiles
3. it's easy to introduce a subtle and dangerous bug like this: D36225
4. SetupBottomMostLevel() is hard to understand, and it shouldn't be. See this
comment: https://github.com/facebook/rocksdb/blob/afbafeaeaebfd27a0f3e992fee8e0c57d07658fa/db/compaction.cc#L236-L241. It also made it harder for @yhchiang to write CompactFiles, as evidenced by this: https://github.com/facebook/rocksdb/blob/afbafeaeaebfd27a0f3e992fee8e0c57d07658fa/db/compaction_picker.cc#L204-L210
The problem is that we create a Compaction object, which holds a lot of state,
and then pass it around to some functions. After those functions are done
mutating it, we call a couple of functions on the Compaction object, like
SetupBottommostLevel() and MarkFilesBeingCompacted(). It is very hard to see
what is happening to all of Compaction's state while it travels across
different functions. If you're writing a new PickCompaction() function, you
need to try really hard to understand which functions you need to run on the
Compaction object and what state you need to set up.
My proposed solution is to make the important parts of Compaction immutable
after construction. PickCompaction() should calculate the compaction inputs
and then pass them to the Compaction object once they are finalized. That
makes it easy to create a new compaction -- just provide all the parameters
to the constructor and you're done. No need to call confusing functions after
you have created your object.
This diff doesn't fully achieve that goal, but it comes pretty close. Here
are some of the changes:
* have one Compaction constructor instead of two
* inputs_ is constant after construction
* MarkFilesBeingCompacted() is now private to the Compaction class and is
automatically called on construction/destruction
* SetupBottommostLevel() is gone; Compaction figures it out on its own based
on the input
* CompactionPicker's functions no longer pass around a Compaction object;
they only pass around the state that they need
Test Plan:
make check
make asan_check
make valgrind_check
Reviewers: rven, anthony, sdong, yhchiang
Reviewed By: yhchiang
Subscribers: sdong, yhchiang, dhruba, leveldb
Differential Revision: https://reviews.facebook.net/D36687

int main(int argc, char** argv) {
#if !(defined NDEBUG) || !defined(OS_WIN)
  rocksdb::port::InstallStackTraceHandler();
  ::testing::InitGoogleTest(&argc, argv);
  return RUN_ALL_TESTS();
#else
  return 0;
#endif
}