// Copyright (c) 2011-present, Facebook, Inc. All rights reserved.
// This source code is licensed under both the GPLv2 (found in the
// COPYING file in the root directory) and Apache 2.0 License
// (found in the LICENSE.Apache file in the root directory).
//
// Copyright (c) 2011 The LevelDB Authors. All rights reserved.
// Use of this source code is governed by a BSD-style license that can be
// found in the LICENSE file. See the AUTHORS file for names of contributors.

#pragma once

#include <algorithm>
#include <deque>
#include <string>
#include <vector>

#include "env/composite_env_wrapper.h"
#include "file/writable_file_writer.h"
#include "rocksdb/compaction_filter.h"
#include "rocksdb/env.h"
#include "rocksdb/iterator.h"
#include "rocksdb/merge_operator.h"
#include "rocksdb/options.h"
#include "rocksdb/slice.h"
#include "rocksdb/table.h"
#include "table/internal_iterator.h"
#include "util/mutexlock.h"

namespace ROCKSDB_NAMESPACE {
class Random;
class SequentialFile;
class SequentialFileReader;

namespace test {

extern const uint32_t kDefaultFormatVersion;
extern const uint32_t kLatestFormatVersion;

// Return a random key with the specified length that may contain interesting
// characters (e.g. \x00, \xff, etc.).
enum RandomKeyType : char { RANDOM, LARGEST, SMALLEST, MIDDLE };
extern std::string RandomKey(Random* rnd, int len,
                             RandomKeyType type = RandomKeyType::RANDOM);

// Store in *dst a string of length "len" that will compress to
// "len * compressed_fraction" bytes and return a Slice that references
// the generated data.
extern Slice CompressibleString(Random* rnd, double compressed_fraction,
                                int len, std::string* dst);
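// Usage sketch (seed and sizes here are arbitrary):
//   Random rnd(301);
//   std::string storage;
//   Slice data = CompressibleString(&rnd, 0.5 /* compressed_fraction */,
//                                   10000 /* len */, &storage);
//   // `data` is 10000 bytes and should compress to roughly 5000 bytes.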

// A wrapper that allows injection of errors.
class ErrorEnv : public EnvWrapper {
 public:
  bool writable_file_error_;
  int num_writable_file_errors_;

  ErrorEnv(Env* _target)
      : EnvWrapper(_target),
        writable_file_error_(false),
        num_writable_file_errors_(0) {}

  virtual Status NewWritableFile(const std::string& fname,
                                 std::unique_ptr<WritableFile>* result,
                                 const EnvOptions& soptions) override {
    result->reset();
    if (writable_file_error_) {
      ++num_writable_file_errors_;
      return Status::IOError(fname, "fake error");
    }
    return target()->NewWritableFile(fname, result, soptions);
  }
};
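
// Usage sketch: a test can point Options::env at an ErrorEnv and flip
// writable_file_error_ so that file creation through this Env fails:
//   ErrorEnv error_env(Env::Default());
//   Options options;
//   options.env = &error_env;
//   // ... exercise the DB ...
//   error_env.writable_file_error_ = true;
//   // NewWritableFile() now returns Status::IOError(..., "fake error")
//   // and increments num_writable_file_errors_.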

#ifndef NDEBUG
// An internal comparator that simply forwards comparison results from the
// user comparator it wraps. Can be used to test entities that have no
// dependency on the internal key structure but consume an
// InternalKeyComparator, like BlockBasedTable.
class PlainInternalKeyComparator : public InternalKeyComparator {
 public:
  explicit PlainInternalKeyComparator(const Comparator* c)
      : InternalKeyComparator(c) {}

  virtual ~PlainInternalKeyComparator() {}

  virtual int Compare(const Slice& a, const Slice& b) const override {
    return user_comparator()->Compare(a, b);
  }
};
#endif

// A test comparator which compares two strings in this way:
// (1) first compare the 8-byte prefixes in alphabetical order,
// (2) if two strings share the same prefix, sort the rest of the string in
//     reverse alphabetical order.
// This helps simulate a compound key of [entity][timestamp] where the latest
// timestamp sorts first.
class SimpleSuffixReverseComparator : public Comparator {
 public:
  SimpleSuffixReverseComparator() {}

  virtual const char* Name() const override {
    return "SimpleSuffixReverseComparator";
  }

  virtual int Compare(const Slice& a, const Slice& b) const override {
    Slice prefix_a = Slice(a.data(), 8);
    Slice prefix_b = Slice(b.data(), 8);
    int prefix_comp = prefix_a.compare(prefix_b);
    if (prefix_comp != 0) {
      return prefix_comp;
    } else {
      Slice suffix_a = Slice(a.data() + 8, a.size() - 8);
      Slice suffix_b = Slice(b.data() + 8, b.size() - 8);
      return -(suffix_a.compare(suffix_b));
    }
  }
  virtual void FindShortestSeparator(std::string* /*start*/,
                                     const Slice& /*limit*/) const override {}

  virtual void FindShortSuccessor(std::string* /*key*/) const override {}
};
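
// Example of SimpleSuffixReverseComparator's ordering (keys are assumed to be
// at least 8 bytes long):
//   "AAAAAAAA2" sorts before "AAAAAAAA1"  (same prefix, suffixes in reverse)
//   "AAAAAAAA9" sorts before "AAAAAAAB0"  (different prefixes, normal order)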

// Returns a user key comparator that can be used for comparing two uint64_t
// slices. Instead of comparing slices byte-wise, it compares all the 8 bytes
// at once. Assumes the same endianness is used through the database's
// lifetime. Semantics of the comparison differ from the Bytewise comparator
// on little-endian machines.
extern const Comparator* Uint64Comparator();

// Iterator over a vector of keys/values
class VectorIterator : public InternalIterator {
 public:
  explicit VectorIterator(const std::vector<std::string>& keys)
      : keys_(keys), current_(keys.size()) {
    std::sort(keys_.begin(), keys_.end());
    values_.resize(keys.size());
  }

  VectorIterator(const std::vector<std::string>& keys,
                 const std::vector<std::string>& values)
      : keys_(keys), values_(values), current_(keys.size()) {
    assert(keys_.size() == values_.size());
  }

  virtual bool Valid() const override { return current_ < keys_.size(); }

  virtual void SeekToFirst() override { current_ = 0; }
  virtual void SeekToLast() override { current_ = keys_.size() - 1; }

  virtual void Seek(const Slice& target) override {
    current_ = std::lower_bound(keys_.begin(), keys_.end(), target.ToString()) -
               keys_.begin();
  }

  virtual void SeekForPrev(const Slice& target) override {
    current_ = std::upper_bound(keys_.begin(), keys_.end(), target.ToString()) -
               keys_.begin();
    if (!Valid()) {
      SeekToLast();
    } else {
      Prev();
    }
  }

  virtual void Next() override { current_++; }
  virtual void Prev() override { current_--; }

  virtual Slice key() const override { return Slice(keys_[current_]); }
  virtual Slice value() const override { return Slice(values_[current_]); }

  virtual Status status() const override { return Status::OK(); }

  virtual bool IsKeyPinned() const override { return true; }
  virtual bool IsValuePinned() const override { return true; }

 private:
  std::vector<std::string> keys_;
  std::vector<std::string> values_;
  size_t current_;
};
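
// Usage sketch (keys/values here are arbitrary). Note that the two-argument
// constructor keeps the vectors as given, so keys should already be sorted if
// Seek() or SeekForPrev() will be used:
//   std::vector<std::string> ks = {"a", "b", "c"};
//   std::vector<std::string> vs = {"1", "2", "3"};
//   VectorIterator it(ks, vs);
//   for (it.SeekToFirst(); it.Valid(); it.Next()) {
//     // it.key() yields "a", "b", "c"; it.value() yields "1", "2", "3".
//   }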

extern WritableFileWriter* GetWritableFileWriter(WritableFile* wf,
                                                 const std::string& fname);

extern RandomAccessFileReader* GetRandomAccessFileReader(RandomAccessFile* raf);

extern SequentialFileReader* GetSequentialFileReader(SequentialFile* se,
                                                     const std::string& fname);

class StringSink : public WritableFile {
 public:
  std::string contents_;

  explicit StringSink(Slice* reader_contents = nullptr)
      : WritableFile(),
        contents_(""),
        reader_contents_(reader_contents),
        last_flush_(0) {
    if (reader_contents_ != nullptr) {
      *reader_contents_ = Slice(contents_.data(), 0);
    }
  }

  const std::string& contents() const { return contents_; }

  virtual Status Truncate(uint64_t size) override {
    contents_.resize(static_cast<size_t>(size));
    return Status::OK();
  }
  virtual Status Close() override { return Status::OK(); }
  virtual Status Flush() override {
    if (reader_contents_ != nullptr) {
      assert(reader_contents_->size() <= last_flush_);
      size_t offset = last_flush_ - reader_contents_->size();
      *reader_contents_ =
          Slice(contents_.data() + offset, contents_.size() - offset);
      last_flush_ = contents_.size();
    }

    return Status::OK();
  }
  virtual Status Sync() override { return Status::OK(); }
  virtual Status Append(const Slice& slice) override {
    contents_.append(slice.data(), slice.size());
    return Status::OK();
  }
  void Drop(size_t bytes) {
    if (reader_contents_ != nullptr) {
      contents_.resize(contents_.size() - bytes);
      *reader_contents_ =
          Slice(reader_contents_->data(), reader_contents_->size() - bytes);
      last_flush_ = contents_.size();
    }
  }

 private:
  Slice* reader_contents_;
  size_t last_flush_;
};
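
// Usage sketch: when constructed with a `reader_contents` Slice, that Slice
// tracks the bytes written so far after each Flush(), e.g.:
//   Slice read_view;
//   StringSink sink(&read_view);
//   sink.Append("hello");
//   sink.Flush();  // read_view now covers the appended "hello" bytes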

// A wrapper around a StringSink to give it a RandomRWFile interface
class RandomRWStringSink : public RandomRWFile {
 public:
  explicit RandomRWStringSink(StringSink* ss) : ss_(ss) {}

  Status Write(uint64_t offset, const Slice& data) override {
    if (offset + data.size() > ss_->contents_.size()) {
      ss_->contents_.resize(static_cast<size_t>(offset) + data.size(), '\0');
    }

    char* pos = const_cast<char*>(ss_->contents_.data() + offset);
    memcpy(pos, data.data(), data.size());
    return Status::OK();
  }

  Status Read(uint64_t offset, size_t n, Slice* result,
              char* /*scratch*/) const override {
    *result = Slice(nullptr, 0);
    if (offset < ss_->contents_.size()) {
      size_t str_res_sz =
          std::min(static_cast<size_t>(ss_->contents_.size() - offset), n);
      *result = Slice(ss_->contents_.data() + offset, str_res_sz);
    }
    return Status::OK();
  }

  Status Flush() override { return Status::OK(); }

  Status Sync() override { return Status::OK(); }

  Status Close() override { return Status::OK(); }

  const std::string& contents() const { return ss_->contents(); }

 private:
  StringSink* ss_;
};

// Like StringSink, this writes into a string. Unlike StringSink, it
// has some initial content and overwrites it, just like a recycled
// log file.
class OverwritingStringSink : public WritableFile {
 public:
  explicit OverwritingStringSink(Slice* reader_contents)
      : WritableFile(),
        contents_(""),
        reader_contents_(reader_contents),
        last_flush_(0) {}

  const std::string& contents() const { return contents_; }

  virtual Status Truncate(uint64_t size) override {
    contents_.resize(static_cast<size_t>(size));
    return Status::OK();
  }
  virtual Status Close() override { return Status::OK(); }
  virtual Status Flush() override {
    if (last_flush_ < contents_.size()) {
      assert(reader_contents_->size() >= contents_.size());
      memcpy((char*)reader_contents_->data() + last_flush_,
             contents_.data() + last_flush_, contents_.size() - last_flush_);
      last_flush_ = contents_.size();
    }
    return Status::OK();
  }
  virtual Status Sync() override { return Status::OK(); }
  virtual Status Append(const Slice& slice) override {
    contents_.append(slice.data(), slice.size());
    return Status::OK();
  }
  void Drop(size_t bytes) {
    contents_.resize(contents_.size() - bytes);
    if (last_flush_ > contents_.size()) last_flush_ = contents_.size();
  }

 private:
  std::string contents_;
  Slice* reader_contents_;
  size_t last_flush_;
};

class StringSource : public RandomAccessFile {
 public:
  explicit StringSource(const Slice& contents, uint64_t uniq_id = 0,
                        bool mmap = false)
      : contents_(contents.data(), contents.size()),
        uniq_id_(uniq_id),
        mmap_(mmap),
        total_reads_(0) {}

  virtual ~StringSource() {}

  uint64_t Size() const { return contents_.size(); }

  virtual Status Read(uint64_t offset, size_t n, Slice* result,
                      char* scratch) const override {
    total_reads_++;
    if (offset > contents_.size()) {
      return Status::InvalidArgument("invalid Read offset");
    }
    if (offset + n > contents_.size()) {
      n = contents_.size() - static_cast<size_t>(offset);
    }
    if (!mmap_) {
      memcpy(scratch, &contents_[static_cast<size_t>(offset)], n);
      *result = Slice(scratch, n);
    } else {
      *result = Slice(&contents_[static_cast<size_t>(offset)], n);
    }
    return Status::OK();
  }

  virtual size_t GetUniqueId(char* id, size_t max_size) const override {
    if (max_size < 20) {
      return 0;
    }

    char* rid = id;
    rid = EncodeVarint64(rid, uniq_id_);
    rid = EncodeVarint64(rid, 0);
    return static_cast<size_t>(rid - id);
  }

  int total_reads() const { return total_reads_; }

  void set_total_reads(int tr) { total_reads_ = tr; }

 private:
  std::string contents_;
  uint64_t uniq_id_;
  bool mmap_;
  mutable int total_reads_;
};
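
// Usage sketch (`file_data` stands for bytes produced elsewhere, e.g. by a
// StringSink):
//   std::string file_data = "some file contents";
//   StringSource source(Slice(file_data));  // mmap == false: Read() copies
//   char scratch[16];                       //   into the caller's scratch
//   Slice result;
//   Status s = source.Read(0, 4, &result, scratch);  // result == "some"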

inline StringSink* GetStringSinkFromLegacyWriter(
    const WritableFileWriter* writer) {
  LegacyWritableFileWrapper* file =
      static_cast<LegacyWritableFileWrapper*>(writer->writable_file());
  return static_cast<StringSink*>(file->target());
}

class NullLogger : public Logger {
 public:
  using Logger::Logv;
  virtual void Logv(const char* /*format*/, va_list /*ap*/) override {}
  virtual size_t GetLogFileSize() const override { return 0; }
};

// Corrupts key by changing the type
extern void CorruptKeyType(InternalKey* ikey);

extern std::string KeyStr(const std::string& user_key,
                          const SequenceNumber& seq, const ValueType& t,
                          bool corrupt = false);

extern std::string KeyStr(uint64_t ts, const std::string& user_key,
                          const SequenceNumber& seq, const ValueType& t,
                          bool corrupt = false);
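// Usage sketch (assuming the usual ValueType constants from dbformat.h):
//   std::string ikey = KeyStr("foo", 100 /* seq */, kTypeValue);
//   // `ikey` is the user key "foo" followed by the packed (seq, type)
//   // trailer, suitable for tests that need encoded internal keys.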
|
|
|
|
|
LogAndApply() should fail if the column family has been dropped
Summary:
This patch finally fixes the ColumnFamilyTest.ReadDroppedColumnFamily test. The test has been failing very sporadically and it was hard to repro. However, I managed to write a new tests that reproes the failure deterministically.
Here's what happens:
1. We start the flush for the column family
2. We check if the column family was dropped here: https://github.com/facebook/rocksdb/blob/a3fc49bfddcdb1ff29409aacd06c04df56c7a1d7/db/flush_job.cc#L149
3. This check goes through, ends up in InstallMemtableFlushResults() and it goes into LogAndApply()
4. At about this time, we start dropping the column family. Dropping the column family process gets to LogAndApply() at about the same time as LogAndApply() from flush process
5. Drop column family goes through LogAndApply() first, marking the column family as dropped.
6. Flush process gets woken up and gets a chance to write to the MANIFEST. However, this is where it gets stuck: https://github.com/facebook/rocksdb/blob/a3fc49bfddcdb1ff29409aacd06c04df56c7a1d7/db/version_set.cc#L1975
7. We see that the column family was dropped, so there is no need to write to the MANIFEST. We return OK.
8. Flush gets OK back from LogAndApply() and it deletes the memtable, thinking that the data is now safely persisted to sst file.
The fix is pretty simple. Instead of OK, we return ShutdownInProgress. This is not really true, but we have been using this status code to also mean "this operation was canceled because the column family has been dropped".
The fix is only one LOC. All other code is related to tests. I added a new test that reproes the failure. I also moved SleepingBackgroundTask to util/testutil.h (because I needed it in column_family_test for my new test). There's plenty of other places where we reimplement SleepingBackgroundTask, but I'll address that in a separate commit.
Test Plan:
1. new test
2. make check
3. Make sure the ColumnFamilyTest.ReadDroppedColumnFamily doesn't fail on Travis: https://travis-ci.org/facebook/rocksdb/jobs/79952386
Reviewers: yhchiang, anthony, IslamAbdelRahman, kradhakrishnan, rven, sdong
Reviewed By: sdong
Subscribers: dhruba, leveldb
Differential Revision: https://reviews.facebook.net/D46773
2015-09-15 18:28:44 +00:00
|
|
|
class SleepingBackgroundTask {
|
|
|
|
public:
|
|
|
|
SleepingBackgroundTask()
|
2015-09-25 17:29:44 +00:00
|
|
|
: bg_cv_(&mutex_),
|
|
|
|
should_sleep_(true),
|
|
|
|
done_with_sleep_(false),
|
|
|
|
sleeping_(false) {}
|
|
|
|
|
|
|
|
bool IsSleeping() {
|
|
|
|
MutexLock l(&mutex_);
|
|
|
|
return sleeping_;
|
|
|
|
}
|
LogAndApply() should fail if the column family has been dropped
Summary:
This patch finally fixes the ColumnFamilyTest.ReadDroppedColumnFamily test. The test has been failing very sporadically and it was hard to repro. However, I managed to write a new tests that reproes the failure deterministically.
Here's what happens:
1. We start the flush for the column family
2. We check if the column family was dropped here: https://github.com/facebook/rocksdb/blob/a3fc49bfddcdb1ff29409aacd06c04df56c7a1d7/db/flush_job.cc#L149
3. This check goes through, ends up in InstallMemtableFlushResults() and it goes into LogAndApply()
4. At about this time, we start dropping the column family. Dropping the column family process gets to LogAndApply() at about the same time as LogAndApply() from flush process
5. Drop column family goes through LogAndApply() first, marking the column family as dropped.
6. Flush process gets woken up and gets a chance to write to the MANIFEST. However, this is where it gets stuck: https://github.com/facebook/rocksdb/blob/a3fc49bfddcdb1ff29409aacd06c04df56c7a1d7/db/version_set.cc#L1975
7. We see that the column family was dropped, so there is no need to write to the MANIFEST. We return OK.
8. Flush gets OK back from LogAndApply() and it deletes the memtable, thinking that the data is now safely persisted to sst file.
The fix is pretty simple. Instead of OK, we return ShutdownInProgress. This is not really true, but we have been using this status code to also mean "this operation was canceled because the column family has been dropped".
The fix is only one LOC. All other code is related to tests. I added a new test that reproes the failure. I also moved SleepingBackgroundTask to util/testutil.h (because I needed it in column_family_test for my new test). There's plenty of other places where we reimplement SleepingBackgroundTask, but I'll address that in a separate commit.
Test Plan:
1. new test
2. make check
3. Make sure the ColumnFamilyTest.ReadDroppedColumnFamily doesn't fail on Travis: https://travis-ci.org/facebook/rocksdb/jobs/79952386
Reviewers: yhchiang, anthony, IslamAbdelRahman, kradhakrishnan, rven, sdong
Reviewed By: sdong
Subscribers: dhruba, leveldb
Differential Revision: https://reviews.facebook.net/D46773
2015-09-15 18:28:44 +00:00
|
|
|
void DoSleep() {
|
|
|
|
MutexLock l(&mutex_);
|
2015-09-25 17:29:44 +00:00
|
|
|
sleeping_ = true;
|
2015-12-08 02:07:01 +00:00
|
|
|
bg_cv_.SignalAll();
|
LogAndApply() should fail if the column family has been dropped
Summary:
This patch finally fixes the ColumnFamilyTest.ReadDroppedColumnFamily test. The test has been failing very sporadically and it was hard to repro. However, I managed to write a new tests that reproes the failure deterministically.
Here's what happens:
1. We start the flush for the column family
2. We check if the column family was dropped here: https://github.com/facebook/rocksdb/blob/a3fc49bfddcdb1ff29409aacd06c04df56c7a1d7/db/flush_job.cc#L149
3. This check goes through, ends up in InstallMemtableFlushResults() and it goes into LogAndApply()
4. At about this time, we start dropping the column family. Dropping the column family process gets to LogAndApply() at about the same time as LogAndApply() from flush process
5. Drop column family goes through LogAndApply() first, marking the column family as dropped.
6. Flush process gets woken up and gets a chance to write to the MANIFEST. However, this is where it gets stuck: https://github.com/facebook/rocksdb/blob/a3fc49bfddcdb1ff29409aacd06c04df56c7a1d7/db/version_set.cc#L1975
7. We see that the column family was dropped, so there is no need to write to the MANIFEST. We return OK.
8. Flush gets OK back from LogAndApply() and it deletes the memtable, thinking that the data is now safely persisted to sst file.
The fix is pretty simple. Instead of OK, we return ShutdownInProgress. This is not really true, but we have been using this status code to also mean "this operation was canceled because the column family has been dropped".
The fix is only one LOC. All other code is related to tests. I added a new test that reproes the failure. I also moved SleepingBackgroundTask to util/testutil.h (because I needed it in column_family_test for my new test). There's plenty of other places where we reimplement SleepingBackgroundTask, but I'll address that in a separate commit.
Test Plan:
1. new test
2. make check
3. Make sure the ColumnFamilyTest.ReadDroppedColumnFamily doesn't fail on Travis: https://travis-ci.org/facebook/rocksdb/jobs/79952386
Reviewers: yhchiang, anthony, IslamAbdelRahman, kradhakrishnan, rven, sdong
Reviewed By: sdong
Subscribers: dhruba, leveldb
Differential Revision: https://reviews.facebook.net/D46773
2015-09-15 18:28:44 +00:00
|
|
|
while (should_sleep_) {
|
|
|
|
bg_cv_.Wait();
|
|
|
|
}
|
2015-09-25 17:29:44 +00:00
|
|
|
sleeping_ = false;
|
LogAndApply() should fail if the column family has been dropped
Summary:
This patch finally fixes the ColumnFamilyTest.ReadDroppedColumnFamily test. The test has been failing very sporadically and it was hard to repro. However, I managed to write a new tests that reproes the failure deterministically.
Here's what happens:
1. We start the flush for the column family
2. We check if the column family was dropped here: https://github.com/facebook/rocksdb/blob/a3fc49bfddcdb1ff29409aacd06c04df56c7a1d7/db/flush_job.cc#L149
3. This check goes through, ends up in InstallMemtableFlushResults() and it goes into LogAndApply()
4. At about this time, we start dropping the column family. Dropping the column family process gets to LogAndApply() at about the same time as LogAndApply() from flush process
5. Drop column family goes through LogAndApply() first, marking the column family as dropped.
6. Flush process gets woken up and gets a chance to write to the MANIFEST. However, this is where it gets stuck: https://github.com/facebook/rocksdb/blob/a3fc49bfddcdb1ff29409aacd06c04df56c7a1d7/db/version_set.cc#L1975
7. We see that the column family was dropped, so there is no need to write to the MANIFEST. We return OK.
8. Flush gets OK back from LogAndApply() and it deletes the memtable, thinking that the data is now safely persisted to sst file.
The fix is pretty simple. Instead of OK, we return ShutdownInProgress. This is not really true, but we have been using this status code to also mean "this operation was canceled because the column family has been dropped".
The fix is only one LOC. All other code is related to tests. I added a new test that reproes the failure. I also moved SleepingBackgroundTask to util/testutil.h (because I needed it in column_family_test for my new test). There's plenty of other places where we reimplement SleepingBackgroundTask, but I'll address that in a separate commit.
Test Plan:
1. new test
2. make check
3. Make sure the ColumnFamilyTest.ReadDroppedColumnFamily doesn't fail on Travis: https://travis-ci.org/facebook/rocksdb/jobs/79952386
Reviewers: yhchiang, anthony, IslamAbdelRahman, kradhakrishnan, rven, sdong
Reviewed By: sdong
Subscribers: dhruba, leveldb
Differential Revision: https://reviews.facebook.net/D46773
2015-09-15 18:28:44 +00:00
|
|
|
done_with_sleep_ = true;
|
|
|
|
bg_cv_.SignalAll();
|
|
|
|
}
|
2015-12-08 02:07:01 +00:00
|
|
|
void WaitUntilSleeping() {
|
|
|
|
MutexLock l(&mutex_);
|
2015-11-19 02:10:20 +00:00
|
|
|
while (!sleeping_ || !should_sleep_) {
|
2015-12-08 02:07:01 +00:00
|
|
|
bg_cv_.Wait();
|
|
|
|
}
|
|
|
|
}
|
2020-02-14 01:25:29 +00:00
|
|
|
// Waits for the status to change to sleeping,
|
|
|
|
// otherwise times out.
|
|
|
|
// wait_time is in microseconds.
|
|
|
|
// Returns true when times out, false otherwise.
|
|
|
|
bool TimedWaitUntilSleeping(uint64_t wait_time) {
|
|
|
|
auto abs_time = Env::Default()->NowMicros() + wait_time;
|
|
|
|
MutexLock l(&mutex_);
|
|
|
|
while (!sleeping_ || !should_sleep_) {
|
|
|
|
if (bg_cv_.TimedWait(abs_time)) {
|
|
|
|
return true;
|
|
|
|
}
|
|
|
|
}
|
|
|
|
return false;
|
|
|
|
}
|
LogAndApply() should fail if the column family has been dropped
Summary:
This patch finally fixes the ColumnFamilyTest.ReadDroppedColumnFamily test. The test has been failing very sporadically and it was hard to repro. However, I managed to write a new tests that reproes the failure deterministically.
Here's what happens:
1. We start the flush for the column family
2. We check if the column family was dropped here: https://github.com/facebook/rocksdb/blob/a3fc49bfddcdb1ff29409aacd06c04df56c7a1d7/db/flush_job.cc#L149
3. This check goes through, ends up in InstallMemtableFlushResults() and it goes into LogAndApply()
4. At about this time, we start dropping the column family. Dropping the column family process gets to LogAndApply() at about the same time as LogAndApply() from flush process
5. Drop column family goes through LogAndApply() first, marking the column family as dropped.
6. Flush process gets woken up and gets a chance to write to the MANIFEST. However, this is where it gets stuck: https://github.com/facebook/rocksdb/blob/a3fc49bfddcdb1ff29409aacd06c04df56c7a1d7/db/version_set.cc#L1975
7. We see that the column family was dropped, so there is no need to write to the MANIFEST. We return OK.
8. Flush gets OK back from LogAndApply() and it deletes the memtable, thinking that the data is now safely persisted to sst file.
The fix is pretty simple. Instead of OK, we return ShutdownInProgress. This is not really true, but we have been using this status code to also mean "this operation was canceled because the column family has been dropped".
The fix is only one LOC. All other code is related to tests. I added a new test that reproes the failure. I also moved SleepingBackgroundTask to util/testutil.h (because I needed it in column_family_test for my new test). There's plenty of other places where we reimplement SleepingBackgroundTask, but I'll address that in a separate commit.
Test Plan:
1. new test
2. make check
3. Make sure the ColumnFamilyTest.ReadDroppedColumnFamily doesn't fail on Travis: https://travis-ci.org/facebook/rocksdb/jobs/79952386
Reviewers: yhchiang, anthony, IslamAbdelRahman, kradhakrishnan, rven, sdong
Reviewed By: sdong
Subscribers: dhruba, leveldb
Differential Revision: https://reviews.facebook.net/D46773
2015-09-15 18:28:44 +00:00
|
|
|
void WakeUp() {
|
|
|
|
MutexLock l(&mutex_);
|
|
|
|
should_sleep_ = false;
|
|
|
|
bg_cv_.SignalAll();
|
|
|
|
}
|
|
|
|
void WaitUntilDone() {
|
|
|
|
MutexLock l(&mutex_);
|
|
|
|
while (!done_with_sleep_) {
|
|
|
|
bg_cv_.Wait();
|
|
|
|
}
|
|
|
|
}
|
2020-02-14 01:25:29 +00:00
|
|
|
// Similar to TimedWaitUntilSleeping.
|
|
|
|
// Waits until the task is done.
|
|
|
|
bool TimedWaitUntilDone(uint64_t wait_time) {
|
|
|
|
auto abs_time = Env::Default()->NowMicros() + wait_time;
|
|
|
|
MutexLock l(&mutex_);
|
|
|
|
while (!done_with_sleep_) {
|
|
|
|
if (bg_cv_.TimedWait(abs_time)) {
|
|
|
|
return true;
|
|
|
|
}
|
|
|
|
}
|
|
|
|
return false;
|
|
|
|
}
|
LogAndApply() should fail if the column family has been dropped
Summary:
This patch finally fixes the ColumnFamilyTest.ReadDroppedColumnFamily test. The test has been failing very sporadically and it was hard to repro. However, I managed to write a new tests that reproes the failure deterministically.
Here's what happens:
1. We start the flush for the column family
2. We check if the column family was dropped here: https://github.com/facebook/rocksdb/blob/a3fc49bfddcdb1ff29409aacd06c04df56c7a1d7/db/flush_job.cc#L149
3. This check goes through, ends up in InstallMemtableFlushResults() and it goes into LogAndApply()
4. At about this time, we start dropping the column family. Dropping the column family process gets to LogAndApply() at about the same time as LogAndApply() from flush process
5. Drop column family goes through LogAndApply() first, marking the column family as dropped.
6. Flush process gets woken up and gets a chance to write to the MANIFEST. However, this is where it gets stuck: https://github.com/facebook/rocksdb/blob/a3fc49bfddcdb1ff29409aacd06c04df56c7a1d7/db/version_set.cc#L1975
7. We see that the column family was dropped, so there is no need to write to the MANIFEST. We return OK.
8. Flush gets OK back from LogAndApply() and it deletes the memtable, thinking that the data is now safely persisted to sst file.
The fix is pretty simple. Instead of OK, we return ShutdownInProgress. This is not really true, but we have been using this status code to also mean "this operation was canceled because the column family has been dropped".
The fix is only one LOC. All other code is related to tests. I added a new test that reproes the failure. I also moved SleepingBackgroundTask to util/testutil.h (because I needed it in column_family_test for my new test). There's plenty of other places where we reimplement SleepingBackgroundTask, but I'll address that in a separate commit.
Test Plan:
1. new test
2. make check
3. Make sure the ColumnFamilyTest.ReadDroppedColumnFamily doesn't fail on Travis: https://travis-ci.org/facebook/rocksdb/jobs/79952386
Reviewers: yhchiang, anthony, IslamAbdelRahman, kradhakrishnan, rven, sdong
Reviewed By: sdong
Subscribers: dhruba, leveldb
Differential Revision: https://reviews.facebook.net/D46773
2015-09-15 18:28:44 +00:00
|
|
|
  bool WokenUp() {
    MutexLock l(&mutex_);
    return should_sleep_ == false;
  }

  void Reset() {
    MutexLock l(&mutex_);
    should_sleep_ = true;
    done_with_sleep_ = false;
  }

  static void DoSleepTask(void* arg) {
    reinterpret_cast<SleepingBackgroundTask*>(arg)->DoSleep();
  }

 private:
  port::Mutex mutex_;
  port::CondVar bg_cv_;  // Signalled when background work finishes
  bool should_sleep_;
  bool done_with_sleep_;
  bool sleeping_;
};
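
// Illustrative usage sketch (not part of the class): a test typically
// schedules DoSleepTask() on a background thread pool to keep it busy and
// later inspects the task's state. `env` is an assumed test-local Env*.
//
//   SleepingBackgroundTask sleeping_task;
//   env->Schedule(&SleepingBackgroundTask::DoSleepTask, &sleeping_task,
//                 Env::Priority::LOW);
//   // ... exercise the behavior under test while the thread pool is occupied
//   bool woken = sleeping_task.WokenUp();  // false until the task is woken up
//   sleeping_task.Reset();                 // re-arm the task for another run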

// Filters merge operands and values that are equal to `num`.
class FilterNumber : public CompactionFilter {
 public:
  explicit FilterNumber(uint64_t num) : num_(num) {}

  std::string last_merge_operand_key() { return last_merge_operand_key_; }

  bool Filter(int /*level*/, const ROCKSDB_NAMESPACE::Slice& /*key*/,
              const ROCKSDB_NAMESPACE::Slice& value, std::string* /*new_value*/,
              bool* /*value_changed*/) const override {
    if (value.size() == sizeof(uint64_t)) {
      return num_ == DecodeFixed64(value.data());
    }
    return true;
  }

  bool FilterMergeOperand(
      int /*level*/, const ROCKSDB_NAMESPACE::Slice& key,
      const ROCKSDB_NAMESPACE::Slice& value) const override {
    last_merge_operand_key_ = key.ToString();
    if (value.size() == sizeof(uint64_t)) {
      return num_ == DecodeFixed64(value.data());
    }
    return true;
  }

  const char* Name() const override { return "FilterBadMergeOperand"; }

 private:
  mutable std::string last_merge_operand_key_;
  uint64_t num_;
};
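
// Illustrative usage sketch (hypothetical test snippet): installing the filter
// drops entries whose value or merge operand is the fixed64 encoding of the
// chosen number. `options` and `db` are assumed test-local objects.
//
//   FilterNumber filter(100);
//   options.compaction_filter = &filter;
//   // After db->Put(WriteOptions(), "k", EncodeInt(100)) and a compaction,
//   // key "k" is gone; values encoding any other number are kept.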

inline std::string EncodeInt(uint64_t x) {
  std::string result;
  PutFixed64(&result, x);
  return result;
}
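
// A minimal round-trip sketch: EncodeInt() writes the fixed-width
// little-endian encoding that DecodeFixed64() reads back, which is the value
// format FilterNumber above compares against.
//
//   std::string v = EncodeInt(42);          // 8-byte string
//   uint64_t x = DecodeFixed64(v.data());   // x == 42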

class SeqStringSource : public SequentialFile {
 public:
  SeqStringSource(const std::string& data, std::atomic<int>* read_count)
      : data_(data), offset_(0), read_count_(read_count) {}

  ~SeqStringSource() override {}

  Status Read(size_t n, Slice* result, char* scratch) override {
    std::string output;
    if (offset_ < data_.size()) {
      n = std::min(data_.size() - offset_, n);
      memcpy(scratch, data_.data() + offset_, n);
      offset_ += n;
      *result = Slice(scratch, n);
    } else {
      return Status::InvalidArgument(
          "Attempt to read when it already reached eof.");
    }
    (*read_count_)++;
    return Status::OK();
  }

  Status Skip(uint64_t n) override {
    if (offset_ >= data_.size()) {
      return Status::InvalidArgument(
          "Attempt to read when it already reached eof.");
    }
    // TODO(yhchiang): Currently doesn't handle the overflow case.
    offset_ += static_cast<size_t>(n);
    return Status::OK();
  }

 private:
  std::string data_;
  size_t offset_;
  std::atomic<int>* read_count_;
};
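
// Illustrative sketch: SeqStringSource serves an in-memory string through the
// SequentialFile interface and bumps the shared read counter on every Read().
//
//   std::atomic<int> reads{0};
//   SeqStringSource src("hello", &reads);
//   char scratch[16];
//   Slice result;
//   Status s = src.Read(5, &result, scratch);  // result == "hello", reads == 1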

class StringEnv : public EnvWrapper {
 public:
  class StringSink : public WritableFile {
   public:
    explicit StringSink(std::string* contents)
        : WritableFile(), contents_(contents) {}
    virtual Status Truncate(uint64_t size) override {
      contents_->resize(static_cast<size_t>(size));
      return Status::OK();
    }
    virtual Status Close() override { return Status::OK(); }
    virtual Status Flush() override { return Status::OK(); }
    virtual Status Sync() override { return Status::OK(); }
    virtual Status Append(const Slice& slice) override {
      contents_->append(slice.data(), slice.size());
      return Status::OK();
    }

   private:
    std::string* contents_;
  };

  explicit StringEnv(Env* t) : EnvWrapper(t) {}
  ~StringEnv() override {}

  const std::string& GetContent(const std::string& f) { return files_[f]; }

  const Status WriteToNewFile(const std::string& file_name,
                              const std::string& content) {
    std::unique_ptr<WritableFile> r;
    auto s = NewWritableFile(file_name, &r, EnvOptions());
    if (s.ok()) {
      s = r->Append(content);
    }
    if (s.ok()) {
      s = r->Flush();
    }
    if (s.ok()) {
      s = r->Close();
    }
    assert(!s.ok() || files_[file_name] == content);
    return s;
  }

  // The following overrides back the Env interface with the in-memory
  // `files_` map; operations the tests do not need return NotSupported().
  Status NewSequentialFile(const std::string& f,
                           std::unique_ptr<SequentialFile>* r,
                           const EnvOptions& /*options*/) override {
    auto iter = files_.find(f);
    if (iter == files_.end()) {
      return Status::NotFound("The specified file does not exist", f);
    }
    r->reset(new SeqStringSource(iter->second, &num_seq_file_read_));
    return Status::OK();
  }
  Status NewRandomAccessFile(const std::string& /*f*/,
                             std::unique_ptr<RandomAccessFile>* /*r*/,
                             const EnvOptions& /*options*/) override {
    return Status::NotSupported();
  }
  Status NewWritableFile(const std::string& f,
                         std::unique_ptr<WritableFile>* r,
                         const EnvOptions& /*options*/) override {
    auto iter = files_.find(f);
    if (iter != files_.end()) {
      return Status::IOError("The specified file already exists", f);
    }
    r->reset(new StringSink(&files_[f]));
    return Status::OK();
  }
  virtual Status NewDirectory(
      const std::string& /*name*/,
      std::unique_ptr<Directory>* /*result*/) override {
    return Status::NotSupported();
  }
  Status FileExists(const std::string& f) override {
    if (files_.find(f) == files_.end()) {
      return Status::NotFound();
    }
    return Status::OK();
  }
  Status GetChildren(const std::string& /*dir*/,
                     std::vector<std::string>* /*r*/) override {
    return Status::NotSupported();
  }
  Status DeleteFile(const std::string& f) override {
    files_.erase(f);
    return Status::OK();
  }
  Status CreateDir(const std::string& /*d*/) override {
    return Status::NotSupported();
  }
  Status CreateDirIfMissing(const std::string& /*d*/) override {
    return Status::NotSupported();
  }
  Status DeleteDir(const std::string& /*d*/) override {
    return Status::NotSupported();
  }
  Status GetFileSize(const std::string& f, uint64_t* s) override {
    auto iter = files_.find(f);
    if (iter == files_.end()) {
      return Status::NotFound("The specified file does not exist:", f);
    }
    *s = iter->second.size();
    return Status::OK();
  }

  Status GetFileModificationTime(const std::string& /*fname*/,
                                 uint64_t* /*file_mtime*/) override {
    return Status::NotSupported();
  }

  Status RenameFile(const std::string& /*s*/,
                    const std::string& /*t*/) override {
    return Status::NotSupported();
  }

  Status LinkFile(const std::string& /*s*/,
                  const std::string& /*t*/) override {
    return Status::NotSupported();
  }

  Status LockFile(const std::string& /*f*/, FileLock** /*l*/) override {
    return Status::NotSupported();
  }

  Status UnlockFile(FileLock* /*l*/) override {
    return Status::NotSupported();
  }

  std::atomic<int> num_seq_file_read_;

 protected:
  std::unordered_map<std::string, std::string> files_;
};
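
// Illustrative usage sketch (hypothetical names): StringEnv keeps whole files
// as strings in `files_`, which makes it easy to feed canned file contents to
// code that only needs sequential reads.
//
//   StringEnv string_env(Env::Default());
//   ASSERT_OK(string_env.WriteToNewFile("OPTIONS-0001", "some text"));
//   ASSERT_EQ("some text", string_env.GetContent("OPTIONS-0001"));
//   std::unique_ptr<SequentialFile> file;
//   ASSERT_OK(string_env.NewSequentialFile("OPTIONS-0001", &file, EnvOptions()));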

// Randomly initialize the given DBOptions
void RandomInitDBOptions(DBOptions* db_opt, Random* rnd);

// Randomly initialize the given ColumnFamilyOptions
// Note that the caller is responsible for releasing non-null
// cf_opt->compaction_filter.
void RandomInitCFOptions(ColumnFamilyOptions* cf_opt, DBOptions&, Random* rnd);
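
// Illustrative sketch: generate randomized options for an options round-trip
// test. Note the ownership caveat above for cf_opt->compaction_filter.
//
//   Random rnd(301);
//   DBOptions db_opt;
//   ColumnFamilyOptions cf_opt;
//   RandomInitDBOptions(&db_opt, &rnd);
//   RandomInitCFOptions(&cf_opt, db_opt, &rnd);
//   ...
//   delete cf_opt.compaction_filter;  // caller-owned when non-null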

// A dummy merge operator which can change its name
class ChanglingMergeOperator : public MergeOperator {
 public:
  explicit ChanglingMergeOperator(const std::string& name)
      : name_(name + "MergeOperator") {}
  ~ChanglingMergeOperator() {}

  void SetName(const std::string& name) { name_ = name; }

  virtual bool FullMergeV2(const MergeOperationInput& /*merge_in*/,
                           MergeOperationOutput* /*merge_out*/) const override {
    return false;
  }
  virtual bool PartialMergeMulti(const Slice& /*key*/,
                                 const std::deque<Slice>& /*operand_list*/,
                                 std::string* /*new_value*/,
                                 Logger* /*logger*/) const override {
    return false;
  }
  virtual const char* Name() const override { return name_.c_str(); }

 protected:
  std::string name_;
};
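
// Illustrative sketch: the mutable name lets tests simulate a merge operator
// whose registered name changes between DB openings (e.g. for options
// compatibility checks).
//
//   ChanglingMergeOperator mop("Custom");   // Name() == "CustomMergeOperator"
//   mop.SetName("RenamedMergeOperator");    // Name() now reports the new id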

// Returns a dummy merge operator with random name.
MergeOperator* RandomMergeOperator(Random* rnd);

// A dummy compaction filter which can change its name
class ChanglingCompactionFilter : public CompactionFilter {
 public:
  explicit ChanglingCompactionFilter(const std::string& name)
      : name_(name + "CompactionFilter") {}
  ~ChanglingCompactionFilter() {}

  void SetName(const std::string& name) { name_ = name; }

  bool Filter(int /*level*/, const Slice& /*key*/,
              const Slice& /*existing_value*/, std::string* /*new_value*/,
              bool* /*value_changed*/) const override {
    return false;
  }

  const char* Name() const override { return name_.c_str(); }

 private:
  std::string name_;
};

// Returns a dummy compaction filter with a random name.
CompactionFilter* RandomCompactionFilter(Random* rnd);

// A dummy compaction filter factory which can change its name
class ChanglingCompactionFilterFactory : public CompactionFilterFactory {
 public:
  explicit ChanglingCompactionFilterFactory(const std::string& name)
      : name_(name + "CompactionFilterFactory") {}
  ~ChanglingCompactionFilterFactory() {}

  void SetName(const std::string& name) { name_ = name; }

  std::unique_ptr<CompactionFilter> CreateCompactionFilter(
      const CompactionFilter::Context& /*context*/) override {
    return std::unique_ptr<CompactionFilter>();
  }

  // Returns a name that identifies this compaction filter factory.
  const char* Name() const override { return name_.c_str(); }

 protected:
  std::string name_;
};
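
// Illustrative sketch (hypothetical test snippet): plug the factory into
// ColumnFamilyOptions, then rename it to exercise option/identity checks.
//
//   auto factory = std::make_shared<ChanglingCompactionFilterFactory>("Foo");
//   ColumnFamilyOptions cf_opt;
//   cf_opt.compaction_filter_factory = factory;
//   factory->SetName("BarCompactionFilterFactory");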

extern const Comparator* ComparatorWithU64Ts();
CompressionType RandomCompressionType(Random* rnd);

void RandomCompressionTypeVector(const size_t count,
                                 std::vector<CompressionType>* types,
                                 Random* rnd);

CompactionFilterFactory* RandomCompactionFilterFactory(Random* rnd);

const SliceTransform* RandomSliceTransform(Random* rnd, int pre_defined = -1);

TableFactory* RandomTableFactory(Random* rnd, int pre_defined = -1);

std::string RandomName(Random* rnd, const size_t len);

bool IsDirectIOSupported(Env* env, const std::string& dir);

// Return the number of lines where a given pattern was found in a file.
size_t GetLinesCount(const std::string& fname, const std::string& pattern);

// TEST_TMPDIR may be set to /dev/shm in Makefile,
// but /dev/shm does not support direct IO.
// Tries to set TEST_TMPDIR to a directory supporting direct IO.
void ResetTmpDirForDirectIO();
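
// Illustrative sketch (assumed test setup): a direct-IO test calls this before
// creating its database directory so that paths derived from TEST_TMPDIR live
// on a filesystem that supports O_DIRECT.
//
//   ResetTmpDirForDirectIO();        // may point TEST_TMPDIR away from /dev/shm
//   Options options;
//   options.use_direct_reads = true; // hypothetical direct-IO test configuration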

Status CorruptFile(Env* env, const std::string& fname, int offset,
                   int bytes_to_corrupt, bool verify_checksum = true);
Status TruncateFile(Env* env, const std::string& fname, uint64_t length);
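
// Illustrative sketch (hypothetical file path `sst_path`): damage a few bytes
// of a table file, or truncate it, and verify that later reads detect the
// problem.
//
//   ASSERT_OK(CorruptFile(env, sst_path, /*offset=*/0, /*bytes_to_corrupt=*/1));
//   ASSERT_OK(TruncateFile(env, sst_path, /*length=*/1024));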
} // namespace test
} // namespace ROCKSDB_NAMESPACE