rocksdb/tools
Peter Dillinger 4d3518951a Option to decouple index and filter partitions (#12939)
Summary:
Partitioned metadata blocks were introduced back in 2017 to deal more gracefully with large DBs where RAM is relatively scarce and some data might be much colder than other data. The feature allows metadata blocks to compete for memory in the block cache against data blocks, while alleviating the tail latencies and thrash conditions that can arise with the large metadata blocks (sometimes megabytes each) of large SST files. In general, the cost of partitioned metadata is more CPU in accesses (especially for filters, where more binary search is needed before hashing can be used) and a bit more memory fragmentation and related overheads.
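For context, a minimal sketch of how a partitioned index/filter configuration is set up through RocksDB's public options API (field values here are illustrative, not recommendations):

```cpp
#include "rocksdb/filter_policy.h"
#include "rocksdb/options.h"
#include "rocksdb/table.h"

rocksdb::BlockBasedTableOptions table_options;
// Partitioned (two-level) index is a prerequisite for partitioned filters.
table_options.index_type =
    rocksdb::BlockBasedTableOptions::kTwoLevelIndexSearch;
table_options.partition_filters = true;
table_options.filter_policy.reset(rocksdb::NewBloomFilterPolicy(10));
// Approximate target size, in bytes, at which index/filter partitions are cut.
table_options.metadata_block_size = 4096;
// Let metadata partitions compete with data blocks in the block cache.
table_options.cache_index_and_filter_blocks = true;

rocksdb::Options options;
options.table_factory.reset(rocksdb::NewBlockBasedTableFactory(table_options));
```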

However, the feature has always had a subtle limitation with a subtle effect on performance: index partitions and filter partitions must be cut at the same time, regardless of which wins the space race (hahaha) to metadata_block_size. Commonly filters will be a few times larger than indexes, so index partitions will be under-sized compared to filter (and data) blocks. While this does affect fragmentation and related overheads a bit, I suspect the bigger impact on performance is in the block cache. The coupling of the partition cuts would be defensible if the binary search done to find the filter block were used (on filter hit) to short-circuit the binary search to an index partition, but that optimization has not been developed.

Consider two metadata blocks, an under-sized one and a normal-sized one, covering proportional sections of the key space with the same density of read queries. The under-sized one will be more prone to eviction from block cache because it is used less often. This is unfair despite its proportionally smaller cost of keeping in block cache, and because most of the cost of a miss to re-load it (random IO) is not proportional to its size (similar latency etc. up to ~32KB).

## This change

Adds a new table option, decouple_partitioned_filters, that allows filter blocks and index blocks to be cut independently. To make this work, the partitioned filter block builder needs to know about the previous key, to generate an appropriate separator for the partition index. In most cases, BlockBasedTableBuilder already has easy access to the previous key to provide to the filter block builder.
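Assuming the option lands as a BlockBasedTableOptions field of the same name, enabling it is a one-line addition on top of the partitioned configuration sketched above; the db_bench runs below exercise the same option via -decouple_partitioned_filters=1:

```cpp
// New table option from this PR: cut filter partitions independently of
// index partitions instead of forcing them to be cut at the same time.
table_options.decouple_partitioned_filters = true;
```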

This change includes refactoring to pass that previous key to the filter builder when available, with the filter builder caching the previous key itself when unavailable, such as during compression dictionary training and some unit tests. Access to the previous key eliminates the need to track the previous prefix, which results in a small SST construction CPU win in prefix filtering cases and possibly a small regression for some non-prefix cases (both regardless of coupling), but still an overall improvement, especially with https://github.com/facebook/rocksdb/issues/12931.
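To illustrate the role of the previous key, here is a hypothetical helper (not the actual builder code) showing how a short separator between partitions can be derived with the public Comparator API:

```cpp
#include <string>

#include "rocksdb/comparator.h"
#include "rocksdb/slice.h"

// Hypothetical, for illustration only: given the previous key (the last key
// covered by the partition being cut) and the first key of the next
// partition, derive a short separator key for the partition index.
std::string MakePartitionSeparator(const rocksdb::Comparator* cmp,
                                   std::string prev_key,
                                   const rocksdb::Slice& next_key) {
  // Shortens prev_key in place to some key k with prev_key <= k < next_key,
  // keeping index entries small while still separating the partitions.
  cmp->FindShortestSeparator(&prev_key, next_key);
  return prev_key;
}
```

For example, with rocksdb::BytewiseComparator(), prev_key "blueberry" and next_key "grape" shorten to the one-byte separator "c".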

Suggested follow-up:
* Update confusing use of "last key" to refer to "previous key"
* Expand unit test coverage with parallel compression and dictionary training
* Consider an option or enhancement to alleviate under-sized metadata blocks "at the end" of an SST file due to no coordination or awareness of when files are cut.

Pull Request resolved: https://github.com/facebook/rocksdb/pull/12939

Test Plan:
Unit tests updated. Also did some unit test runs with "hard-wired" usage of the parallel compression and dictionary training code paths to ensure they were working. Also ran blackbox_crash_test for a while with the new feature.

## SST write performance (CPU)

Using the same testing setup as in https://github.com/facebook/rocksdb/issues/12931, but with -decouple_partitioned_filters=1 in the "after" configuration, which benchmarking shows makes almost no difference in terms of SST write CPU. "After" vs. "before" this PR:
```
-partition_index_and_filters=0 -prefix_size=0 -whole_key_filtering=1
923691 vs. 924851 (-0.13%)
-partition_index_and_filters=0 -prefix_size=8 -whole_key_filtering=0
921398 vs. 922973 (-0.17%)
-partition_index_and_filters=0 -prefix_size=8 -whole_key_filtering=1
902259 vs. 908756 (-0.71%)
-partition_index_and_filters=1 -prefix_size=8 -whole_key_filtering=0
917932 vs. 916901 (+0.11%)
-partition_index_and_filters=1 -prefix_size=8 -whole_key_filtering=0
912755 vs. 907298 (+0.60%)
-partition_index_and_filters=1 -prefix_size=8 -whole_key_filtering=1
899754 vs. 892433 (+0.82%)
```
I think this is a pretty good trade, especially in attracting more movement toward partitioned configurations.

## Read performance

Let's see how decoupling affects read performance across various degrees of memory constraint. To simplify the LSM structure, we're using FIFO compaction. Since decoupling will overall increase metadata block size, we control for this somewhat with an extra "before" configuration with a larger metadata block size setting (8k instead of 4k). Basic setup:

```
(for CS in 0300 1200; do TEST_TMPDIR=/dev/shm/rocksdb1 ./db_bench -benchmarks=fillrandom,flush,readrandom,block_cache_entry_stats -num=5000000 -duration=30 -disable_wal=1 -write_buffer_size=30000000 -bloom_bits=10 -compaction_style=2 -fifo_compaction_max_table_files_size_mb=10000 -fifo_compaction_allow_compaction=0 -partition_index_and_filters=1 -statistics=1 -cache_size=${CS}000000 -metadata_block_size=4096 -decouple_partitioned_filters=1 2>&1 | tee results-$CS; done)
```

And read ops/s results:

```CSV
Cache size MB,After/decoupled/4k,Before/4k,Before/8k
3,15593,15158,12826
6,16295,16693,14134
10,20427,20813,18459
20,27035,26836,27384
30,33250,31810,33846
60,35518,32585,35329
100,36612,31805,35292
300,35780,31492,35481
1000,34145,31551,35411
1100,35219,31380,34302
1200,35060,31037,34322
```

If you graph this with log scale on the X axis (internal link: https://pxl.cl/5qKRc), you see that the decoupled/4k configuration is essentially the best of both the before/4k and before/8k configurations: handles really tight memory closer to the old 4k configuration and handles generous memory closer to the old 8k configuration.

Reviewed By: jowlyzhang

Differential Revision: D61376772

Pulled By: pdillinger

fbshipit-source-id: fc2af2aee44290e2d9620f79651a30640799e01f
2024-08-16 15:34:31 -07:00
advisor Fix lint issues after enable BLACK (#10717) 2022-09-21 13:37:51 -07:00
block_cache_analyzer Block cache analyzer: Calculate miss ratio for each caller (#10823) 2024-01-10 14:02:14 -08:00
dump internal_repo_rocksdb (435146444452818992) (#12115) 2023-12-01 11:15:17 -08:00
CMakeLists.txt
Dockerfile
analyze_txn_stress_test.sh
auto_sanity_test.sh
backup_db.sh
benchmark.sh optimize file size statistics in benchmark script (#12363) 2024-02-21 15:45:18 -08:00
benchmark_ci.py Remove NUMA setting for benchmark-linux (#11180) 2023-02-02 15:15:09 -08:00
benchmark_compare.sh Fix file modes (#10815) 2022-10-13 09:00:37 -07:00
benchmark_leveldb.sh
blob_dump.cc Remove RocksDB LITE (#11147) 2023-01-27 13:14:19 -08:00
check_all_python.py Enable BLACK for internal_repo_rocksdb (#10710) 2022-09-20 17:47:52 -07:00
check_format_compatible.sh Update history and version for 9.5.fb release (#12880) 2024-07-22 13:15:09 -07:00
db_bench.cc Add (& fix) some simple source code checks (#8821) 2021-09-07 21:19:27 -07:00
db_bench_tool.cc Option to decouple index and filter partitions (#12939) 2024-08-16 15:34:31 -07:00
db_bench_tool_test.cc Group SST write in flush, compaction and db open with new stats (#11910) 2023-12-29 15:29:23 -08:00
db_crashtest.py Option to decouple index and filter partitions (#12939) 2024-08-16 15:34:31 -07:00
db_repl_stress.cc Prefer static_cast in place of most reinterpret_cast (#12308) 2024-02-07 10:44:11 -08:00
db_sanity_test.cc Remove 'virtual' when implied by 'override' (#12319) 2024-01-31 13:14:42 -08:00
dbench_monitor
generate_random_db.sh
ingest_external_sst.sh
io_tracer_parser.cc Remove RocksDB LITE (#11147) 2023-01-27 13:14:19 -08:00
io_tracer_parser_test.cc Remove RocksDB LITE (#11147) 2023-01-27 13:14:19 -08:00
io_tracer_parser_tool.cc Remove RocksDB LITE (#11147) 2023-01-27 13:14:19 -08:00
io_tracer_parser_tool.h Remove RocksDB LITE (#11147) 2023-01-27 13:14:19 -08:00
ldb.cc Remove RocksDB LITE (#11147) 2023-01-27 13:14:19 -08:00
ldb_cmd.cc Add user timestamp support into interactive query command (#12716) 2024-05-30 17:23:38 -07:00
ldb_cmd_impl.h Add LDB command and option for follower instances (#12682) 2024-05-28 23:21:32 -07:00
ldb_cmd_test.cc Remove `bottommost_temperature` (#12389) 2024-02-27 14:48:00 -08:00
ldb_test.py Multiget LDB Followup (#12332) 2024-02-05 20:11:35 -08:00
ldb_tool.cc Add LDB command and option for follower instances (#12682) 2024-05-28 23:21:32 -07:00
pflag
reduce_levels_test.cc Make option `level_compaction_dynamic_level_bytes` true by default (#11525) 2023-06-15 21:12:39 -07:00
regression_test.sh Fix regression script for async_io benchmarks (#11462) 2023-05-22 15:32:12 -07:00
restore_db.sh
rocksdb_dump_test.sh
run_blob_bench.sh add exe and script path check (#11621) 2023-07-19 12:05:24 -07:00
run_flash_bench.sh
run_leveldb.sh
sample-dump.dmp
simulated_hybrid_file_system.cc Group SST write in flush, compaction and db open with new stats (#11910) 2023-12-29 15:29:23 -08:00
simulated_hybrid_file_system.h Remove RocksDB LITE (#11147) 2023-01-27 13:14:19 -08:00
sst_dump.cc Remove RocksDB LITE (#11147) 2023-01-27 13:14:19 -08:00
sst_dump_test.cc Allow SstFileReader to verify number of entries in SST files (#12418) 2024-03-12 11:05:20 -07:00
sst_dump_tool.cc Augment sst_dump tool to verify num_entries in table property (#12322) 2024-02-01 14:35:03 -08:00
trace_analyzer.cc Remove RocksDB LITE (#11147) 2023-01-27 13:14:19 -08:00
trace_analyzer_test.cc internal_repo_rocksdb (435146444452818992) (#12115) 2023-12-01 11:15:17 -08:00
trace_analyzer_tool.cc Trace analyzer: replace number with enumeration type (#10827) 2023-12-27 10:38:53 -08:00
trace_analyzer_tool.h Remove RocksDB LITE (#11147) 2023-01-27 13:14:19 -08:00
verify_random_db.sh Fix some bugs in verify_random_db.sh (#10112) 2022-06-03 16:35:13 -07:00
write_external_sst.sh
write_stress.cc Remove RocksDB LITE (#11147) 2023-01-27 13:14:19 -08:00
write_stress_runner.py Enable BLACK for internal_repo_rocksdb (#10710) 2022-09-20 17:47:52 -07:00