Mirror of https://github.com/facebook/rocksdb.git, synced 2024-11-26 16:30:56 +00:00
babe56ddba
Summary: Users can set the priority for file reads associated with their operation by setting `ReadOptions::rate_limiter_priority` to something other than `Env::IO_TOTAL`. Rate limiting `VerifyChecksum()` and `VerifyFileChecksums()` is the motivation for this PR, so it also includes benchmarks and minor bug fixes to get that working.

`RandomAccessFileReader::Read()` already had support for rate limiting compaction reads. I changed that rate limiting to be non-specific to compaction, but rather performed according to the passed-in `Env::IOPriority`. Now compaction read rate limiting is supported by setting `rate_limiter_priority = Env::IO_LOW` on its `ReadOptions`.

There is no default value for the new `Env::IOPriority` parameter to `RandomAccessFileReader::Read()`. That means this PR goes through all callers (in some cases multiple layers up the call stack) to find a `ReadOptions` to provide the priority. There are TODOs for cases where I believe it would be good to let the user control the priority some day (e.g., file footer reads), and no TODO in cases where I believe it doesn't matter (e.g., trace file reads).

The API doc only lists the missing cases where a file read associated with a provided `ReadOptions` cannot be rate limited. For cases like file ingestion checksum calculation, there is no API to provide `ReadOptions` or `Env::IOPriority`, so I didn't count that as missing.

Pull Request resolved: https://github.com/facebook/rocksdb/pull/9424

Test Plan:
- new unit tests
- new benchmarks on ~50MB database with 1MB/s read rate limit and 100ms refill interval; verified with strace that reads are chunked (at 0.1MB per chunk) and spaced roughly 100ms apart.
  - setup command: `./db_bench -benchmarks=fillrandom,compact -db=/tmp/testdb -target_file_size_base=1048576 -disable_auto_compactions=true -file_checksum=true`
  - benchmarks command: `strace -ttfe pread64 ./db_bench -benchmarks=verifychecksum,verifyfilechecksums -use_existing_db=true -db=/tmp/testdb -rate_limiter_bytes_per_sec=1048576 -rate_limit_bg_reads=1 -rate_limit_user_ops=true -file_checksum=true`
- crash test using IO_USER priority on non-validation reads with https://github.com/facebook/rocksdb/issues/9567 reverted: `python3 tools/db_crashtest.py blackbox --max_key=1000000 --write_buffer_size=524288 --target_file_size_base=524288 --level_compaction_dynamic_level_bytes=true --duration=3600 --rate_limit_bg_reads=true --rate_limit_user_ops=true --rate_limiter_bytes_per_sec=10485760 --interval=10`

Reviewed By: hx235

Differential Revision: D33747386

Pulled By: ajkr

fbshipit-source-id: a2d985e97912fba8c54763798e04f006ccc56e0c
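For illustration, a minimal sketch of how a caller could opt a verification read into rate limiting with the new option. It assumes the DB was opened with a rate limiter configured in `DBOptions::rate_limiter`; the `Env::IO_USER` choice and the helper name are illustrative, not part of the commit.

#include "rocksdb/db.h"
#include "rocksdb/options.h"

using namespace ROCKSDB_NAMESPACE;

// Sketch: run VerifyFileChecksums() with its file reads charged to the DB's
// rate limiter. Any priority other than Env::IO_TOTAL enables rate limiting
// for the reads issued by this operation.
Status VerifyChecksumsRateLimited(DB* db) {
  ReadOptions read_opts;
  read_opts.rate_limiter_priority = Env::IO_USER;
  return db->VerifyFileChecksums(read_opts);
}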
81 lines
3.3 KiB
C++
// Copyright (c) 2011-present, Facebook, Inc. All rights reserved.
// This source code is licensed under both the GPLv2 (found in the
// COPYING file in the root directory) and Apache 2.0 License
// (found in the LICENSE.Apache file in the root directory).
//
#pragma once
#include <string>

#include "file/filename.h"
#include "options/db_options.h"
#include "rocksdb/env.h"
#include "rocksdb/file_system.h"
#include "rocksdb/sst_file_writer.h"
#include "rocksdb/status.h"
#include "rocksdb/system_clock.h"
#include "rocksdb/types.h"
#include "trace_replay/io_tracer.h"

namespace ROCKSDB_NAMESPACE {
// Copies `source` to `destination` using the given FileSystem. use_fsync maps
// to options.use_fsync, which determines the way that the file is synced
// after copying.
extern IOStatus CopyFile(FileSystem* fs, const std::string& source,
                         const std::string& destination, uint64_t size,
                         bool use_fsync,
                         const std::shared_ptr<IOTracer>& io_tracer = nullptr);

// Convenience overload taking a shared_ptr<FileSystem>.
inline IOStatus CopyFile(const std::shared_ptr<FileSystem>& fs,
                         const std::string& source,
                         const std::string& destination, uint64_t size,
                         bool use_fsync,
                         const std::shared_ptr<IOTracer>& io_tracer = nullptr) {
  return CopyFile(fs.get(), source, destination, size, use_fsync, io_tracer);
}

// Creates a file at `destination` containing `contents`, synced according to
// `use_fsync`.
extern IOStatus CreateFile(FileSystem* fs, const std::string& destination,
                           const std::string& contents, bool use_fsync);

// Convenience overload taking a shared_ptr<FileSystem>.
inline IOStatus CreateFile(const std::shared_ptr<FileSystem>& fs,
                           const std::string& destination,
                           const std::string& contents, bool use_fsync) {
  return CreateFile(fs.get(), destination, contents, use_fsync);
}

// Deletes the file `fname` and syncs `path_to_sync` as needed. `force_bg` and
// `force_fg` control whether the deletion may be scheduled in the background
// or must be performed in the foreground.
extern Status DeleteDBFile(const ImmutableDBOptions* db_options,
                           const std::string& fname,
                           const std::string& path_to_sync, const bool force_bg,
                           const bool force_fg);

// Computes the checksum of the file at `file_path` using the generator that
// `checksum_factory` creates for `requested_checksum_func_name`, storing the
// result in `*file_checksum` and the generator's name in
// `*file_checksum_func_name`. Reads are charged to `rate_limiter` (if
// non-null) at `rate_limiter_priority`.
extern IOStatus GenerateOneFileChecksum(
    FileSystem* fs, const std::string& file_path,
    FileChecksumGenFactory* checksum_factory,
    const std::string& requested_checksum_func_name, std::string* file_checksum,
    std::string* file_checksum_func_name,
    size_t verify_checksums_readahead_size, bool allow_mmap_reads,
    std::shared_ptr<IOTracer>& io_tracer, RateLimiter* rate_limiter,
    Env::IOPriority rate_limiter_priority);

// Populates `opts.timeout` from the deadline and io_timeout in `ro`. Returns
// TimedOut if the deadline has already passed.
inline IOStatus PrepareIOFromReadOptions(const ReadOptions& ro,
                                         SystemClock* clock, IOOptions& opts) {
  if (ro.deadline.count()) {
    std::chrono::microseconds now =
        std::chrono::microseconds(clock->NowMicros());
    // Ensure there is at least 1us available. We don't want to pass a value
    // of 0 as that means no timeout.
    if (now >= ro.deadline) {
      return IOStatus::TimedOut("Deadline exceeded");
    }
    opts.timeout = ro.deadline - now;
  }

  // io_timeout takes effect when it is tighter than any deadline-derived value.
  if (ro.io_timeout.count() &&
      (!opts.timeout.count() || ro.io_timeout < opts.timeout)) {
    opts.timeout = ro.io_timeout;
  }
  return IOStatus::OK();
}

// Test method to delete the input directory and all of its contents.
// This method is destructive and is meant for use only in tests!!!
Status DestroyDir(Env* env, const std::string& dir);
}  // namespace ROCKSDB_NAMESPACE
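As a usage note (not part of this header): a minimal sketch of how a caller might combine `PrepareIOFromReadOptions()` with an `FSRandomAccessFile` read, so the `ReadOptions` deadline/io_timeout is propagated as an `IOOptions` timeout. The function and variable names are illustrative assumptions.

// Illustrative caller: translate the ReadOptions deadline/io_timeout into an
// IOOptions timeout before issuing the read.
namespace ROCKSDB_NAMESPACE {
inline IOStatus ReadWithTimeout(const ReadOptions& ro, SystemClock* clock,
                                FSRandomAccessFile* file, uint64_t offset,
                                size_t n, Slice* result, char* scratch) {
  IOOptions opts;
  IOStatus s = PrepareIOFromReadOptions(ro, clock, opts);
  if (!s.ok()) {
    return s;  // e.g. TimedOut if ro.deadline has already passed
  }
  return file->Read(offset, n, opts, result, scratch, /*dbg=*/nullptr);
}
}  // namespace ROCKSDB_NAMESPACE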