Mirror of https://github.com/facebook/rocksdb.git
Commit a5d773e077
Summary:

**Context/Summary:** https://github.com/facebook/rocksdb/pull/9424 added rate-limiting support for user reads, but it does not cover batched `MultiGet()`s, which call `RandomAccessFileReader::MultiRead()`. The reason is that, compared with `RandomAccessFileReader::Read()`, it is harder to implement the ideal rate limiting, where we first call `RateLimiter::RequestToken()` for the allowed bytes to multi-read and then consume those bytes by satisfying as many requests in `MultiRead()` as possible; for example, it can be tricky to decide whether we want partially fulfilled requests within one `MultiRead()` or not.

However, due to a recent urgent user request, we decided to pursue an elementary (but conditionally ineffective) solution: accumulate enough rate limiter grants to cover the total bytes needed by one `MultiRead()` before issuing that `MultiRead()`. This is not ideal when the total bytes are huge, since we then consume a large amount of bandwidth from the rate limiter at once and cause a burst on disk, which is not what we ultimately want from a rate limiter. Follow-up work is noted in TODO comments.

Pull Request resolved: https://github.com/facebook/rocksdb/pull/10159

Test Plan:
- Modified the existing unit test `DBRateLimiterOnReadTest/DBRateLimiterOnReadTest.NewMultiGet`.
- Traced the underlying `io_uring_enter` system calls and verified they are correctly 10 seconds apart from each other under `strace -ftt -e trace=io_uring_enter ./db_bench -benchmarks=multireadrandom -db=/dev/shm/testdb2 -readonly -num=50 -threads=1 -multiread_batched=1 -batch_size=100 -duration=10 -rate_limiter_bytes_per_sec=200 -rate_limiter_refill_period_us=1000000 -rate_limit_bg_reads=1 -disable_auto_compactions=1 -rate_limit_user_ops=1`, where each `MultiRead()` reads about 2000 bytes (inspected with a debugger) and the rate limiter grants 200 bytes per second.
- Stress test: verified that `./db_stress (-test_cf_consistency=1/test_batches_snapshots=1) -use_multiget=1 -cache_size=1048576 -rate_limiter_bytes_per_sec=10241024 -rate_limit_bg_reads=1 -rate_limit_user_ops=1` works.

Reviewed By: ajkr, anand1976

Differential Revision: D37135172

Pulled By: hx235

fbshipit-source-id: 73b8e8f14761e5d4b77235dfe5d41f4eea968bcd
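The accumulate-then-read approach described in the summary could look roughly like the sketch below. This is a minimal illustration only, assuming simplified stand-ins for the RocksDB types involved: `SimpleRateLimiter`, `FSReadRequest`, and `RateLimitedMultiRead` are hypothetical names introduced here, not the actual API; the real change lives inside `RandomAccessFileReader::MultiRead()` in the PR above.

```cpp
#include <algorithm>
#include <cstddef>
#include <cstdint>
#include <vector>

// Hypothetical, simplified stand-ins for the RocksDB types involved.
struct FSReadRequest {
  uint64_t offset = 0;
  size_t len = 0;
};

class SimpleRateLimiter {
 public:
  explicit SimpleRateLimiter(size_t single_burst_bytes)
      : single_burst_bytes_(single_burst_bytes) {}
  // Largest amount one Request() call may ask for.
  size_t GetSingleBurstBytes() const { return single_burst_bytes_; }
  // Placeholder: a real limiter would block here until `bytes` of read
  // bandwidth have been granted.
  void Request(size_t /*bytes*/) {}

 private:
  size_t single_burst_bytes_;
};

// Accumulate rate-limiter grants covering the *total* size of one
// MultiRead() before issuing it. Simple, but when the total is large the
// read still hits the disk as one burst -- the limitation the summary
// flags for follow-up work.
void RateLimitedMultiRead(SimpleRateLimiter* limiter,
                          const std::vector<FSReadRequest>& reqs) {
  size_t total_bytes = 0;
  for (const FSReadRequest& r : reqs) {
    total_bytes += r.len;
  }
  size_t remaining = total_bytes;
  while (remaining > 0) {
    // Each Request() may grant at most one "burst", so loop until the
    // whole MultiRead() has been paid for.
    size_t chunk = std::min(remaining, limiter->GetSingleBurstBytes());
    limiter->Request(chunk);
    remaining -= chunk;
  }
  // TODO (as noted in the summary): charge the limiter per request
  // instead of all up front, to avoid large bursts.
  // ... issue the actual batched read for `reqs` here ...
}
```

Under the test-plan settings (200 bytes per second granted, roughly 2000 bytes per `MultiRead()`), such a loop would spend about ten refill periods accumulating grants before each batched read, which matches the roughly 10-second spacing of the `io_uring_enter` calls observed with strace.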
advisor
block_cache_analyzer
dump
analyze_txn_stress_test.sh
auto_sanity_test.sh
backup_db.sh
benchmark.sh
benchmark_leveldb.sh
blob_dump.cc
check_all_python.py
check_format_compatible.sh
CMakeLists.txt
db_bench.cc
db_bench_tool.cc
db_bench_tool_test.cc
db_crashtest.py
db_repl_stress.cc
db_sanity_test.cc
dbench_monitor
Dockerfile
generate_random_db.sh
ingest_external_sst.sh
io_tracer_parser.cc
io_tracer_parser_test.cc
io_tracer_parser_tool.cc
io_tracer_parser_tool.h
ldb.cc
ldb_cmd.cc
ldb_cmd_impl.h
ldb_cmd_test.cc
ldb_test.py
ldb_tool.cc
pflag
reduce_levels_test.cc
regression_test.sh
restore_db.sh
rocksdb_dump_test.sh
run_blob_bench.sh
run_flash_bench.sh
run_leveldb.sh
sample-dump.dmp
simulated_hybrid_file_system.cc
simulated_hybrid_file_system.h
sst_dump.cc
sst_dump_test.cc
sst_dump_tool.cc
trace_analyzer.cc
trace_analyzer_test.cc
trace_analyzer_tool.cc
trace_analyzer_tool.h
verify_random_db.sh
write_external_sst.sh
write_stress.cc
write_stress_runner.py