Mirror of https://github.com/facebook/rocksdb.git (synced 2024-12-02 01:16:16 +00:00)
4834dab578
Summary: In some cases, we don't need a really accurate number; something like 10% off is fine, and we can add a new option for that use case. With it, we can calculate the size of fully covered files first and skip the estimation inside SST files if the fully covered files already give us a huge number. For example, if we have already covered 100GB of data, we should be able to skip partial dives into 10 SST files of 30MB each. Pull Request resolved: https://github.com/facebook/rocksdb/pull/5609 Differential Revision: D16433481 Pulled By: elipoz fbshipit-source-id: 5830b31e1c656d0fd3a00d7fd2678ddc8f6e601b
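The heuristic described in the summary can be sketched as follows. This is a minimal illustration under stated assumptions, not the actual RocksDB implementation: the names `FileSizeInfo`, `ApproximateRangeSize`, and `error_margin` are hypothetical stand-ins, and it assumes the accepted error margin is exposed as an option to the caller.

```cpp
#include <cstdint>
#include <vector>

// Hypothetical stand-in for per-file metadata (illustration only).
struct FileSizeInfo {
  uint64_t file_size;   // total on-disk size of the SST file
  bool fully_in_range;  // true if the whole file lies inside the query range
};

// Estimate the total size covered by a key range. Fully covered files are
// summed exactly; partially covered files normally require a dive into the
// file's index to estimate the overlapping fraction. If the caller accepts
// an error margin (e.g. 0.1 for "within 10%") and the partially covered
// files are small enough that skipping them cannot exceed that margin,
// the expensive per-file estimation is skipped entirely.
uint64_t ApproximateRangeSize(const std::vector<FileSizeInfo>& files,
                              double error_margin /* e.g. 0.1 */) {
  uint64_t full_total = 0;
  uint64_t partial_upper_bound = 0;
  for (const auto& f : files) {
    if (f.fully_in_range) {
      full_total += f.file_size;
    } else {
      // At most the whole file could overlap the range.
      partial_upper_bound += f.file_size;
    }
  }
  // Example from the summary: 100GB already covered by full files vs.
  // 10 partially covered files of 30MB each -- the partial files cannot
  // change the answer by more than ~0.3%, so skip estimating them.
  if (error_margin > 0 &&
      static_cast<double>(partial_upper_bound) <=
          error_margin * static_cast<double>(full_total)) {
    return full_total;
  }
  uint64_t partial_total = 0;
  for (const auto& f : files) {
    if (!f.fully_in_range) {
      // Placeholder for the real per-file estimation (index/block lookups).
      partial_total += f.file_size / 2;  // crude stand-in estimate
    }
  }
  return full_total + partial_total;
}
```

The early return is the point of the change: when the fully covered files dominate, the per-file index lookups for boundary files add cost without meaningfully improving the estimate.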
compaction.cc
compaction.h
compaction_iteration_stats.h
compaction_iterator.cc
compaction_iterator.h
compaction_iterator_test.cc
compaction_job.cc
compaction_job.h
compaction_job_stats_test.cc
compaction_job_test.cc
compaction_picker.cc
compaction_picker.h
compaction_picker_fifo.cc
compaction_picker_fifo.h
compaction_picker_level.cc
compaction_picker_level.h
compaction_picker_test.cc
compaction_picker_universal.cc
compaction_picker_universal.h