mirror of https://github.com/facebook/rocksdb.git
8515bd50c9
Summary: Added a rate limiter and read rate-limiting support to SequentialFileReader. I've updated call sites of SequentialFileReader::Read with an appropriate IO priority (or left a TODO and specified IO_TOTAL for now).

The PR is separated into four commits:
1. Added the rate-limiting support, with fixes in the unit test because the number of bytes SequentialFileReader requests from the rate limiter is not exact (it overcharges at EOF).
2. Fixed the overcharge by allowing SequentialFileReader to check the file size and determine how many bytes are left in the file to read.
3. Added benchmark-related code.
4. Moved the logic of using the file size to avoid overcharging the rate limiter into the backup engine (the main user of SequentialFileReader).

Pull Request resolved: https://github.com/facebook/rocksdb/pull/9973

Test Plan:
- `make check`; backup_engine_test covers usage of SequentialFileReader with a rate limiter.
- Ran db_bench to check that rate limiting throttles as expected: verified that reads and writes together are throttled at 2MB/s, in 0.2MB chunks spaced 100ms apart.
  - Set up: `./db_bench --benchmarks=fillrandom -db=/dev/shm/test_rocksdb`
  - Benchmark:
```
strace -ttfe read,write ./db_bench --benchmarks=backup -db=/dev/shm/test_rocksdb --backup_rate_limit=2097152 --use_existing_db
strace -ttfe read,write ./db_bench --benchmarks=restore -db=/dev/shm/test_rocksdb --restore_rate_limit=2097152 --use_existing_db
```
- Ran db_bench on backup and restore to ensure no performance regression:
  - backup (avg over 50 runs): pre-change 1.90443e+06 micros/op; post-change 1.8993e+06 micros/op (0.2% improvement)
  - restore (avg over 50 runs): pre-change 1.79105e+06 micros/op; post-change 1.78192e+06 micros/op (0.5% improvement)
```
# Set up
./db_bench --benchmarks=fillrandom -db=/tmp/test_rocksdb -num=10000000

# benchmark
TEST_TMPDIR=/tmp/test_rocksdb
NUM_RUN=50
for ((j=0;j<$NUM_RUN;j++))
do
  ./db_bench -db=$TEST_TMPDIR -num=10000000 -benchmarks=backup -use_existing_db | egrep 'backup'
  # Restore
  #./db_bench -db=$TEST_TMPDIR -num=10000000 -benchmarks=restore -use_existing_db
done > rate_limit.txt && awk -v NUM_RUN=$NUM_RUN '{sum+=$3;sum_sqrt+=$3^2}END{print sum/NUM_RUN, sqrt(sum_sqrt/NUM_RUN-(sum/NUM_RUN)^2)}' rate_limit.txt >> rate_limit_2.txt
```

Reviewed By: hx235

Differential Revision: D36327418

Pulled By: cbi42

fbshipit-source-id: e75d4307cff815945482df5ba630c1e88d064691
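To make the mechanism concrete, below is a minimal sketch of the chunked, rate-limited read pattern the summary describes, with the EOF clamp applied up front. `RateLimiter`, `NewGenericRateLimiter()`, `GetSingleBurstBytes()`, and `Request()` are real RocksDB public APIs; the loop itself, the example sizes, and the `IO_LOW` priority choice are illustrative assumptions, not the PR's exact implementation.

```cpp
// Hedged sketch: charging sequential reads against a RocksDB RateLimiter in
// burst-sized chunks, clamped by the bytes remaining in the file so the
// limiter is not overcharged at EOF. Illustrative only, not the PR's code.
#include <algorithm>
#include <cstdint>
#include <iostream>
#include <memory>

#include "rocksdb/env.h"
#include "rocksdb/rate_limiter.h"

int main() {
  // 2 MB/s with the default 100 ms refill period, covering both reads and
  // writes (kAllIo) -- the configuration the test plan's strace runs verify
  // as 0.2 MB chunks spaced 100 ms apart.
  std::unique_ptr<rocksdb::RateLimiter> limiter(rocksdb::NewGenericRateLimiter(
      /*rate_bytes_per_sec=*/2097152, /*refill_period_us=*/100 * 1000,
      /*fairness=*/10, rocksdb::RateLimiter::Mode::kAllIo));

  // Suppose the caller asks for 1 MB but only 300 KB remain before EOF.
  int64_t requested = 1 << 20;
  int64_t bytes_left_in_file = 300 << 10;
  // The EOF fix: never charge the limiter for bytes the file cannot return.
  int64_t to_read = std::min(requested, bytes_left_in_file);

  int64_t done = 0;
  while (done < to_read) {
    // Cap each request at the limiter's single-burst size so one large read
    // cannot momentarily exceed the configured rate.
    int64_t chunk = std::min(to_read - done, limiter->GetSingleBurstBytes());
    // Blocks until `chunk` bytes are granted. IO_LOW is an arbitrary priority
    // for this sketch; the PR tags each call site individually, using
    // IO_TOTAL where a TODO remains.
    limiter->Request(chunk, rocksdb::Env::IO_LOW, /*stats=*/nullptr,
                     rocksdb::RateLimiter::OpType::kRead);
    // ...the real reader issues the file read of `chunk` bytes here...
    done += chunk;
  }
  std::cout << "charged " << done << " bytes against the rate limiter\n";
  return 0;
}
```

Note the design choice the fourth commit makes: moving the `bytes_left_in_file` clamp out of the reader and into the backup engine keeps SequentialFileReader generic while still letting its main user avoid the EOF overcharge.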
Directory listing:

advisor/
block_cache_analyzer/
dump/
CMakeLists.txt
Dockerfile
analyze_txn_stress_test.sh
auto_sanity_test.sh
backup_db.sh
benchmark.sh
benchmark_leveldb.sh
blob_dump.cc
check_all_python.py
check_format_compatible.sh
db_bench.cc
db_bench_tool.cc
db_bench_tool_test.cc
db_crashtest.py
db_repl_stress.cc
db_sanity_test.cc
dbench_monitor
generate_random_db.sh
ingest_external_sst.sh
io_tracer_parser.cc
io_tracer_parser_test.cc
io_tracer_parser_tool.cc
io_tracer_parser_tool.h
ldb.cc
ldb_cmd.cc
ldb_cmd_impl.h
ldb_cmd_test.cc
ldb_test.py
ldb_tool.cc
pflag
reduce_levels_test.cc
regression_test.sh
restore_db.sh
rocksdb_dump_test.sh
run_blob_bench.sh
run_flash_bench.sh
run_leveldb.sh
sample-dump.dmp
simulated_hybrid_file_system.cc
simulated_hybrid_file_system.h
sst_dump.cc
sst_dump_test.cc
sst_dump_tool.cc
trace_analyzer.cc
trace_analyzer_test.cc
trace_analyzer_tool.cc
trace_analyzer_tool.h
verify_random_db.sh
write_external_sst.sh
write_stress.cc
write_stress_runner.py