Mirror of https://github.com/facebook/rocksdb.git, synced 2024-12-02 20:52:55 +00:00

Commit ef3e289b2d
Summary:

**Context/Summary:** A size amp compaction can select, and thereby prevent, a large number of L0 files from being selected by other compactions. If such a compaction runs for a long time or sits queued behind others, those L0 files remain held for a long time, and a few more flushes can push the DB into a write stop triggered by the number of L0 files. We have seen this happen on a host with many DBs sharing the same thread pool: upon reopen, each DB submits a size amp compaction with (110-180)+ files to the pool, and after a few more flushes they hit the 200 L0 write stop condition.

The idea is to exclude from the size amp compaction some L0 input files that are harmless to size amp reduction but whose exclusion improves the situation described above. The exclusion algorithm is in `MightExcludeNewL0sToReduceWriteStop()` and has two elements (see the sketch after this summary):

1. The number of L0 files to exclude + (level0_stop_writes_trigger - num_l0_input_pre_exclusion) should be in the range [min_merge_width, max_merge_width].
   - This ensures we exclude enough L0 input files, but not so many that they, together with the incoming future L0 files before write stop, no longer qualify to be picked for another compaction.
2. Based on (1), further constrain the number of L0 files to exclude using the post-exclusion compaction score. The goal is to ensure the exclusion does not disqualify the size amp compaction from still being a size amp compaction.

**Test plan:** New unit test

Pull Request resolved: https://github.com/facebook/rocksdb/pull/11749

Reviewed By: ajkr

Differential Revision: D48850631

Pulled By: hx235

fbshipit-source-id: 2c321036e164087c36319dd5645cbbf6b6152092
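A minimal C++ sketch of element (1) above, under stated assumptions: `PickNumL0ToExclude` and its parameters are illustrative names, not RocksDB APIs, and the compaction-score constraint from element (2) is only noted in a comment. The actual logic lives in `MightExcludeNewL0sToReduceWriteStop()`.

```cpp
#include <algorithm>
#include <cstdint>

// Hedged sketch, not the RocksDB implementation. Given the number of L0
// files selected by a size amp compaction before exclusion, return how many
// of the newest L0 inputs to leave out so that the excluded files, together
// with the L0 files flushed before the write-stop trigger is reached, can
// still form a separate L0 compaction of width in
// [min_merge_width, max_merge_width].
uint64_t PickNumL0ToExclude(uint64_t num_l0_input_pre_exclusion,
                            uint64_t level0_stop_writes_trigger,
                            uint64_t min_merge_width,
                            uint64_t max_merge_width) {
  // Headroom: how many more L0 files can be flushed before the write stop,
  // assuming all currently selected L0 inputs stay held by this compaction.
  const uint64_t headroom =
      level0_stop_writes_trigger > num_l0_input_pre_exclusion
          ? level0_stop_writes_trigger - num_l0_input_pre_exclusion
          : 0;

  // Element (1): pick an exclusion count so that
  // excluded + headroom falls inside [min_merge_width, max_merge_width].
  if (headroom >= max_merge_width) {
    // Future flushes alone can already form a compaction; nothing to exclude.
    return 0;
  }
  uint64_t num_to_exclude =
      min_merge_width > headroom ? min_merge_width - headroom : 0;
  const uint64_t max_excludable = max_merge_width - headroom;
  num_to_exclude = std::min(num_to_exclude, max_excludable);
  num_to_exclude = std::min(num_to_exclude, num_l0_input_pre_exclusion);

  // Element (2), omitted here: shrink num_to_exclude further until the
  // post-exclusion compaction score still qualifies the remaining inputs
  // as a size amp compaction.
  return num_to_exclude;
}
```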
Files in this directory:

- .gitkeep
- buffered_io_compaction_readahead_size_zero.md
- exclude_some_l0_size_amp.md
- ldb_scan_command_output_change.md