Mirror of https://github.com/facebook/rocksdb.git, synced 2024-11-25 22:44:05 +00:00
f383641a1d
Summary: Performing unordered writes in RocksDB when the unordered_write option is set to true. When enabled, writes to the memtable are done without joining any write thread. This offers much higher write throughput, since upcoming writes do not have to wait for the slowest memtable write to finish. The tradeoff is that the set of writes visible to a snapshot might change over time. If the application cannot tolerate that, it should implement its own mechanism to work around it; using TransactionDB with the WRITE_PREPARED write policy is one way to achieve that, and doing so increases the max throughput by 2.2x without compromising the snapshot guarantees.

The patch is based on an original patch by siying. Existing unit tests are extended to cover the unordered_write option.

Benchmark Results:
```
TEST_TMPDIR=/dev/shm/ ./db_bench_unordered --benchmarks=fillrandom --threads=32 --num=10000000 -max_write_buffer_number=16 --max_background_jobs=64 --batch_size=8 --writes=3000000 -level0_file_num_compaction_trigger=99999 --level0_slowdown_writes_trigger=99999 --level0_stop_writes_trigger=99999 -enable_pipelined_write=false -disable_auto_compactions --unordered_write=1
```
With WAL
- Vanilla RocksDB: 78.6 MB/s
- WRITE_PREPARED with unordered_write: 177.8 MB/s (2.2x)
- unordered_write: 368.9 MB/s (4.7x with relaxed snapshot guarantees)

Without WAL
- Vanilla RocksDB: 111.3 MB/s
- WRITE_PREPARED with unordered_write: 259.3 MB/s (2.3x)
- unordered_write: 645.6 MB/s (5.8x with relaxed snapshot guarantees)
- WRITE_PREPARED with unordered_write and concurrency control disabled: 185.3 MB/s (2.35x)

Limitations:
- The feature is not yet extended to `max_successive_merges` > 0. The feature is also incompatible with `enable_pipelined_write` = true as well as with `allow_concurrent_memtable_write` = false.

Pull Request resolved: https://github.com/facebook/rocksdb/pull/5218
Differential Revision: D15219029
Pulled By: maysamyabandeh
fbshipit-source-id: 38f2abc4af8780148c6128acdba2b3227bc81759
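For reference, a minimal sketch of enabling the option described above. The database paths are illustrative, not taken from the patch; the option names (`unordered_write`, `allow_concurrent_memtable_write`, `enable_pipelined_write`, `write_policy`) are real RocksDB options.

```
#include <cassert>

#include <rocksdb/db.h>
#include <rocksdb/options.h>
#include <rocksdb/utilities/transaction_db.h>

int main() {
  rocksdb::Options options;
  options.create_if_missing = true;
  options.unordered_write = true;                  // relaxed snapshot guarantees
  options.allow_concurrent_memtable_write = true;  // must remain true with unordered_write
  options.enable_pipelined_write = false;          // incompatible with unordered_write

  // Plain DB: maximum throughput, but the writes visible to a snapshot may change.
  rocksdb::DB* db = nullptr;
  rocksdb::Status s = rocksdb::DB::Open(options, "/tmp/unordered_write_demo", &db);
  assert(s.ok());
  delete db;

  // TransactionDB with WRITE_PREPARED: unordered memtable writes while keeping
  // snapshot guarantees, as described in the summary above.
  rocksdb::TransactionDBOptions txn_db_options;
  txn_db_options.write_policy = rocksdb::TxnDBWritePolicy::WRITE_PREPARED;
  rocksdb::TransactionDB* txn_db = nullptr;
  s = rocksdb::TransactionDB::Open(options, txn_db_options, "/tmp/wp_unordered_demo",
                                   &txn_db);
  assert(s.ok());
  delete txn_db;
  return 0;
}
```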
52 lines
1.4 KiB
C++
// Copyright (c) 2011-present, Facebook, Inc. All rights reserved.
// This source code is licensed under both the GPLv2 (found in the
// COPYING file in the root directory) and Apache 2.0 License
// (found in the LICENSE.Apache file in the root directory).

#pragma once

#include <stdint.h>

#include <atomic>
#include <mutex>
#include <set>

namespace rocksdb {

class ColumnFamilyData;

// Unless otherwise noted, all methods on FlushScheduler should be called
// only with the DB mutex held or from a single-threaded recovery context.
class FlushScheduler {
 public:
  FlushScheduler() : head_(nullptr) {}

  // May be called from multiple threads at once, but not concurrent with
  // any other method calls on this instance
  void ScheduleFlush(ColumnFamilyData* cfd);

  // Removes and returns Ref()-ed column family. Client needs to Unref().
  // Filters column families that have been dropped.
  ColumnFamilyData* TakeNextColumnFamily();

  // This can be called concurrently with ScheduleFlush but it would miss all
  // the scheduled flushes after the last synchronization. This would result
  // in less precise enforcement of memtable sizes but should not matter much.
  bool Empty();

  void Clear();

 private:
  struct Node {
    ColumnFamilyData* column_family;
    Node* next;
  };

  std::atomic<Node*> head_;
#ifndef NDEBUG
  std::mutex checking_mutex_;
  std::set<ColumnFamilyData*> checking_set_;
#endif  // NDEBUG
};

}  // namespace rocksdb
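As a side note, below is a standalone sketch of the lock-free list pattern that the `std::atomic<Node*> head_` member and the comments above suggest: concurrent callers of ScheduleFlush can link nodes with a compare-and-swap, while the consumer pops from the head under the contract documented above. All names here are hypothetical; this is not the actual flush_scheduler.cc implementation.

```
#include <atomic>

// Stand-ins for ColumnFamilyData and FlushScheduler::Node; illustrative only.
struct DemoItem {};

struct DemoNode {
  DemoItem* item;
  DemoNode* next;
};

std::atomic<DemoNode*> demo_head{nullptr};

// Concurrent-safe push: retry until head is swung from node->next to node.
// On failure, compare_exchange_weak reloads the current head into node->next.
void DemoSchedule(DemoItem* item) {
  DemoNode* node = new DemoNode{item, demo_head.load(std::memory_order_relaxed)};
  while (!demo_head.compare_exchange_weak(node->next, node,
                                          std::memory_order_release,
                                          std::memory_order_relaxed)) {
  }
}

// Pop for a consumer that, per the class contract above, never runs
// concurrently with DemoSchedule; returns nullptr when the list is empty.
DemoItem* DemoTakeNext() {
  DemoNode* node = demo_head.load(std::memory_order_acquire);
  if (node == nullptr) {
    return nullptr;
  }
  demo_head.store(node->next, std::memory_order_relaxed);
  DemoItem* item = node->item;
  delete node;
  return item;
}
```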