mirror of https://github.com/facebook/rocksdb.git
Clarify comment about compaction_readahead_size's sanitization change (#11755)
Summary: Pull Request resolved: https://github.com/facebook/rocksdb/pull/11755

Reviewed By: anand1976

Differential Revision: D48656627

Pulled By: hx235

fbshipit-source-id: 568fa7749cbf6ecf65102b4513fa3af975fd91b8
parent bc448e9c89
commit 451316597f
@@ -22,6 +22,7 @@
 ### Behavior Changes
 * Statistics `rocksdb.sst.read.micros` now includes time spent on multi read and async read into the file
 * For Universal Compaction users, periodic compaction (option `periodic_compaction_seconds`) will be set to 30 days by default if block based table is used.
+* `Options::compaction_readahead_size` will be sanitized to 2MB when set to 0 under non-direct IO, since we have moved prefetching responsibility to the page cache for compaction reads, with readahead size equal to `Options::compaction_readahead_size`, under non-direct IO (#11631)

 ### Bug Fixes
 * Fix a bug in FileTTLBooster that can cause users with a large number of levels (more than 65) to see errors like "runtime error: shift exponent .. is too large.." (#11673).
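To make the sanitization described in the behavior change above concrete, here is a minimal C++ sketch that checks the effective readahead size after DB open. This is a sketch, not the commit's own test: the path /tmp/readahead_demo is made up, and the expectation that the sanitized 2MB value is visible through DB::GetDBOptions() is an assumption based on the changelog entry. Options, use_direct_reads, compaction_readahead_size, and GetDBOptions() are existing RocksDB APIs.

#include <cassert>
#include <iostream>

#include "rocksdb/db.h"
#include "rocksdb/options.h"

int main() {
  rocksdb::Options options;
  options.create_if_missing = true;
  options.use_direct_reads = false;       // non-direct IO path
  options.compaction_readahead_size = 0;  // per #11631, expected to be
                                          // sanitized to 2MB at DB open

  rocksdb::DB* db = nullptr;
  rocksdb::Status s =
      rocksdb::DB::Open(options, "/tmp/readahead_demo", &db);  // hypothetical path
  assert(s.ok());

  // GetDBOptions() should reflect the value actually in use after
  // sanitization (an assumption based on the changelog entry above).
  std::cout << "compaction_readahead_size after open: "
            << db->GetDBOptions().compaction_readahead_size << "\n";

  delete db;
  return 0;
}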
@@ -951,10 +951,13 @@ struct DBOptions {
   enum AccessHint { NONE, NORMAL, SEQUENTIAL, WILLNEED };
   AccessHint access_hint_on_compaction_start = NORMAL;

-  // If non-zero, we perform bigger reads when doing compaction. If you're
+  // The size RocksDB uses to perform readahead during compaction reads.
+  // If set to zero, RocksDB will sanitize it to be 2MB during DB open.
+  // If you're
   // running RocksDB on spinning disks, you should set this to at least 2MB.
   // That way RocksDB's compaction is doing sequential instead of random reads.
   //
+  //
   // Default: 0
   //
   // Dynamically changeable through SetDBOptions() API.
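Since the comment above notes that the option is dynamically changeable through the SetDBOptions() API, the following sketch adjusts it on a live DB. DB::SetDBOptions() taking string key/value pairs is the real interface; the helper name BumpCompactionReadahead and the 4MB value are illustrative.

#include <cassert>
#include <string>
#include <unordered_map>

#include "rocksdb/db.h"

// Illustrative helper (the name is ours): raise the compaction readahead
// size on a live DB. SetDBOptions() takes option names and values as
// strings, with sizes expressed in bytes.
void BumpCompactionReadahead(rocksdb::DB* db) {
  rocksdb::Status s = db->SetDBOptions(
      {{"compaction_readahead_size", "4194304"}});  // 4MB, illustrative
  assert(s.ok());
}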
|