//  Copyright (c) 2011-present, Facebook, Inc.  All rights reserved.
//  This source code is licensed under both the GPLv2 (found in the
//  COPYING file in the root directory) and Apache 2.0 License
//  (found in the LICENSE.Apache file in the root directory).
//
// Copyright (c) 2011 The LevelDB Authors. All rights reserved.
// Use of this source code is governed by a BSD-style license that can be
// found in the LICENSE file. See the AUTHORS file for names of contributors.

#pragma once

#include <atomic>
#include <deque>
#include <limits>
#include <list>
#include <set>
#include <string>
#include <utility>
#include <vector>

#include "db/blob/blob_file_completion_callback.h"
|
2014-10-28 18:54:33 +00:00
|
|
|
#include "db/column_family.h"
|
2016-08-04 00:42:06 +00:00
|
|
|
#include "db/flush_scheduler.h"
|
|
|
|
#include "db/internal_stats.h"
|
|
|
|
#include "db/job_context.h"
|
2015-08-07 00:59:05 +00:00
|
|
|
#include "db/log_writer.h"
|
Skip deleted WALs during recovery
Summary:
This patch record min log number to keep to the manifest while flushing SST files to ignore them and any WAL older than them during recovery. This is to avoid scenarios when we have a gap between the WAL files are fed to the recovery procedure. The gap could happen by for example out-of-order WAL deletion. Such gap could cause problems in 2PC recovery where the prepared and commit entry are placed into two separate WAL and gap in the WALs could result into not processing the WAL with the commit entry and hence breaking the 2PC recovery logic.
Before the commit, for 2PC case, we determined which log number to keep in FindObsoleteFiles(). We looked at the earliest logs with outstanding prepare entries, or prepare entries whose respective commit or abort are in memtable. With the commit, the same calculation is done while we apply the SST flush. Just before installing the flush file, we precompute the earliest log file to keep after the flush finishes using the same logic (but skipping the memtables just flushed), record this information to the manifest entry for this new flushed SST file. This pre-computed value is also remembered in memory, and will later be used to determine whether a log file can be deleted. This value is unlikely to change until next flush because the commit entry will stay in memtable. (In WritePrepared, we could have removed the older log files as soon as all prepared entries are committed. It's not yet done anyway. Even if we do it, the only thing we loss with this new approach is earlier log deletion between two flushes, which does not guarantee to happen anyway because the obsolete file clean-up function is only executed after flush or compaction)
This min log number to keep is stored in the manifest using the safely-ignore customized field of AddFile entry, in order to guarantee that the DB generated using newer release can be opened by previous releases no older than 4.2.
Closes https://github.com/facebook/rocksdb/pull/3765
Differential Revision: D7747618
Pulled By: siying
fbshipit-source-id: d00c92105b4f83852e9754a1b70d6b64cb590729
2018-05-03 22:35:11 +00:00
|
|
|
#include "db/logs_with_prep_tracker.h"
|
2014-10-28 18:54:33 +00:00
|
|
|
#include "db/memtable_list.h"
|
2022-07-15 04:49:34 +00:00
|
|
|
#include "db/seqno_to_time_mapping.h"
|
2015-08-07 00:59:05 +00:00
|
|
|
#include "db/snapshot_impl.h"
|
|
|
|
#include "db/version_edit.h"
|
2016-08-04 00:42:06 +00:00
|
|
|
#include "db/write_controller.h"
|
|
|
|
#include "db/write_thread.h"
|
2019-06-01 00:19:43 +00:00
|
|
|
#include "logging/event_logger.h"
|
2017-04-06 02:02:00 +00:00
|
|
|
#include "monitoring/instrumented_mutex.h"
|
|
|
|
#include "options/db_options.h"
|
2014-10-28 18:54:33 +00:00
|
|
|
#include "port/port.h"
|
|
|
|
#include "rocksdb/db.h"
|
|
|
|
#include "rocksdb/env.h"
|
2019-10-16 17:39:00 +00:00
|
|
|
#include "rocksdb/listener.h"
|
2014-10-28 18:54:33 +00:00
|
|
|
#include "rocksdb/memtablerep.h"
|
|
|
|
#include "rocksdb/transaction_log.h"
|
2015-10-12 22:06:38 +00:00
|
|
|
#include "table/scoped_arena_iterator.h"
|
2014-10-28 18:54:33 +00:00
|
|
|
#include "util/autovector.h"
|
|
|
|
#include "util/stop_watch.h"
|
|
|
|
#include "util/thread_local.h"
|
|
|
|
|
2020-02-20 20:07:53 +00:00
|
|
|
namespace ROCKSDB_NAMESPACE {

class DBImpl;
class MemTable;
class SnapshotChecker;
class TableCache;
class Version;
class VersionEdit;
class VersionSet;
class Arena;

class FlushJob {
 public:
  // TODO(icanadi) make effort to reduce number of parameters here
  // IMPORTANT: mutable_cf_options needs to be alive while FlushJob is alive
  FlushJob(const std::string& dbname, ColumnFamilyData* cfd,
           const ImmutableDBOptions& db_options,
           const MutableCFOptions& mutable_cf_options, uint64_t max_memtable_id,
           const FileOptions& file_options, VersionSet* versions,
           InstrumentedMutex* db_mutex, std::atomic<bool>* shutting_down,
           std::vector<SequenceNumber> existing_snapshots,
           SequenceNumber earliest_write_conflict_snapshot,
           SnapshotChecker* snapshot_checker, JobContext* job_context,
           FlushReason flush_reason, LogBuffer* log_buffer,
           FSDirectory* db_directory, FSDirectory* output_file_directory,
           CompressionType output_compression, Statistics* stats,
           EventLogger* event_logger, bool measure_io_stats,
           const bool sync_output_directory, const bool write_manifest,
           Env::Priority thread_pri, const std::shared_ptr<IOTracer>& io_tracer,
           const SeqnoToTimeMapping& seq_time_mapping,
           const std::string& db_id = "", const std::string& db_session_id = "",
           std::string full_history_ts_low = "",
           BlobFileCompletionCallback* blob_callback = nullptr);

  ~FlushJob();

  // Requires db_mutex held.
  // Once PickMemTable() is called, either Run() or Cancel() has to be called.
  void PickMemTable();

  // @param skipped_since_bg_error If not nullptr and if atomic_flush=false,
  // then it is set to true if flush installation is skipped and the memtable
  // is rolled back due to an existing background error.
  Status Run(LogsWithPrepTracker* prep_tracker = nullptr,
             FileMetaData* file_meta = nullptr,
             bool* switched_to_mempurge = nullptr,
             bool* skipped_since_bg_error = nullptr,
             ErrorHandler* error_handler = nullptr);

  void Cancel();
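
  // Minimal usage sketch (illustrative only; `job` and `file_meta` are
  // hypothetical local names and the constructor arguments are elided). Per
  // the contract above, PickMemTable() requires db_mutex to be held and must
  // be followed by exactly one of Run() or Cancel():
  //
  //   FlushJob job(/* ...constructor arguments as declared above... */);
  //   job.PickMemTable();  // with db_mutex held
  //   FileMetaData file_meta;
  //   Status s = job.Run(/*prep_tracker=*/nullptr, &file_meta);
  //   // or, if the flush is abandoned instead of run: job.Cancel();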

  const autovector<MemTable*>& GetMemTables() const { return mems_; }

  std::list<std::unique_ptr<FlushJobInfo>>* GetCommittedFlushJobsInfo() {
    return &committed_flush_jobs_info_;
  }

 private:
  friend class FlushJobTest_GetRateLimiterPriorityForWrite_Test;

  void ReportStartedFlush();
  void ReportFlushInputSize(const autovector<MemTable*>& mems);
  void RecordFlushIOStats();
  Status WriteLevel0Table();

  // Memtable Garbage Collection algorithm: a MemPurge takes the list of
  // immutable memtables and filters out (or "purges") the outdated bytes.
  // The output (the filtered bytes, or "useful payload") is then transferred
  // into a new memtable. If this memtable is filled, the mempurge is aborted
  // and rerouted to a regular flush process. Else, depending on the
  // heuristics, the new memtable is placed onto the immutable memtable list.
  // The addition to the imm list will not trigger a flush operation. The
  // flush of the imm list will instead be triggered once the mutable memtable
  // is added to the imm list.
  // This process is typically intended for workloads with heavy overwrites
  // when we want to avoid SSD writes (and reads) as much as possible.
  // "MemPurge" is an experimental feature still at a very early stage of
  // development. At the moment it is only compatible with the Get, Put, and
  // Delete operations as well as Iterators and CompactionFilters.
  // For this early version, "MemPurge" is enabled by setting the
  // options.experimental_mempurge_threshold value to >0.0. When this is the
  // case, ALL automatic flush operations (kWriteBufferManagerFull) will first
  // go through the MemPurge process. Therefore, we strongly recommend not
  // enabling this option given that the MemPurge process has not matured yet.
  Status MemPurge();
  bool MemPurgeDecider(double threshold);
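
  // Sketch of how a user might opt in to MemPurge (illustrative only; the
  // option's exact semantics and default are defined in the public options
  // headers, not here):
  //
  //   Options options;
  //   options.experimental_mempurge_threshold = 1.0;  // any value > 0.0
  //   DB* db = nullptr;
  //   Status s = DB::Open(options, "/path/to/db", &db);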

  // The rate limiter priority (io_priority) is determined dynamically here.
  Env::IOPriority GetRateLimiterPriorityForWrite();
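
  // Illustrative mapping for flush writes, taken from the pull request that
  // introduced this method (facebook/rocksdb#9988); the authoritative logic
  // lives in the implementation, not in this header:
  //   write controller normal  -> Env::IO_HIGH
  //   write controller delayed -> Env::IO_USER
  //   write controller stalled -> Env::IO_USER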

  std::unique_ptr<FlushJobInfo> GetFlushJobInfo() const;

  // Requires db_mutex held.
  // Called only when the UDT feature is enabled and the
  // `persist_user_defined_timestamps` flag is false. We refrain from flushing
  // as long as a memtable still contains UDTs that have not expired w.r.t.
  // `full_history_ts_low`. However, the flush is continued if there is a risk
  // of entering write stall mode. In that case, we need to track the
  // effective cutoff timestamp below which all the UDTs are removed by the
  // flush, and use it to increase `full_history_ts_low` if the effective
  // cutoff timestamp is newer. See
  // `MaybeIncreaseFullHistoryTsLowToAboveCutoffUDT` for details.
  void GetEffectiveCutoffUDTForPickedMemTables();

  Status MaybeIncreaseFullHistoryTsLowToAboveCutoffUDT();

  const std::string& dbname_;
  const std::string db_id_;
  const std::string db_session_id_;
  ColumnFamilyData* cfd_;
  const ImmutableDBOptions& db_options_;
  const MutableCFOptions& mutable_cf_options_;
  // A variable storing the largest memtable id to flush in this
  // flush job. RocksDB uses this variable to select the memtables to flush in
  // this job. All memtables in this column family with an ID smaller than or
  // equal to max_memtable_id_ will be selected for flush.
  uint64_t max_memtable_id_;
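  // For example (illustration only): if the immutable memtables have IDs 3,
  // 4 and 5 and max_memtable_id_ is 4, this job selects memtables 3 and 4
  // and leaves memtable 5 for a later flush job.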
  const FileOptions file_options_;
  VersionSet* versions_;
  InstrumentedMutex* db_mutex_;
  std::atomic<bool>* shutting_down_;
  std::vector<SequenceNumber> existing_snapshots_;
  SequenceNumber earliest_write_conflict_snapshot_;
  SnapshotChecker* snapshot_checker_;
  JobContext* job_context_;
  FlushReason flush_reason_;
  LogBuffer* log_buffer_;
  FSDirectory* db_directory_;
  FSDirectory* output_file_directory_;
  CompressionType output_compression_;
  Statistics* stats_;
  EventLogger* event_logger_;
  TableProperties table_properties_;
  bool measure_io_stats_;

  // True if this flush job should call fsync on the output directory. False
  // otherwise.
  // Usually sync_output_directory_ is true. A flush job needs to call sync on
  // the output directory before committing to the MANIFEST.
  // However, an individual flush job does not have to call sync on the output
  // directory if it is part of an atomic flush. After all flush jobs in the
  // atomic flush succeed, call sync once on each distinct output directory.
  const bool sync_output_directory_;

  // True if this flush job should write to MANIFEST after successfully
  // flushing memtables. False otherwise.
  // Usually write_manifest_ is true. A flush job commits to the MANIFEST after
  // flushing the memtables.
  // However, an individual flush job cannot rashly write to the MANIFEST
  // immediately after it finishes the flush if it is part of an atomic flush.
  // In this case, only after all flush jobs in the atomic flush succeed can
  // RocksDB commit to the MANIFEST.
  const bool write_manifest_;

  // The current flush job can commit the flush result of a concurrent flush
  // job. We collect the FlushJobInfo of all jobs committed by the current job
  // and fire OnFlushCompleted for them.
  std::list<std::unique_ptr<FlushJobInfo>> committed_flush_jobs_info_;

  // Variables below are set by PickMemTable():
  FileMetaData meta_;
  autovector<MemTable*> mems_;
  VersionEdit* edit_;
  Version* base_;
  bool pick_memtable_called;
  Env::Priority thread_pri_;

  const std::shared_ptr<IOTracer> io_tracer_;
  SystemClock* clock_;
  const std::string full_history_ts_low_;
  BlobFileCompletionCallback* blob_callback_;

  // Reference to the seqno_to_time_mapping_ in db_impl.h; not safe to read
  // without the db mutex.
  const SeqnoToTimeMapping& db_impl_seqno_to_time_mapping_;
  SeqnoToTimeMapping seqno_to_time_mapping_;

  // Keeps track of the newest user-defined timestamp for this flush job if
  // `persist_user_defined_timestamps` flag is false.
  std::string cutoff_udt_;
};

}  // namespace ROCKSDB_NAMESPACE