compute compaction score once for a batch of range file deletes (#10744)

Summary:
Only recalculate the compaction score once for a batch of range file deletions, instead of once per range. This fixes a performance regression introduced by https://github.com/facebook/rocksdb/pull/8434.
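For context, here is a minimal usage sketch of the batched call path this patch speeds up. The helper `DeleteManyRanges` and the key bounds are hypothetical; `DeleteFilesInRanges()` and `RangePtr` are RocksDB's public APIs (`rocksdb/convenience.h`, `rocksdb/db.h`):

```cpp
// Hypothetical helper: issues one batched DeleteFilesInRanges() call for a
// list of [start, limit) key bounds. With this patch, the compaction score
// is recomputed once for the whole batch rather than once per range.
#include <string>
#include <utility>
#include <vector>

#include "rocksdb/convenience.h"
#include "rocksdb/db.h"

rocksdb::Status DeleteManyRanges(
    rocksdb::DB* db, rocksdb::ColumnFamilyHandle* cf,
    const std::vector<std::pair<std::string, std::string>>& bounds) {
  // The Slices must outlive the DeleteFilesInRanges() call, so build them
  // first and let the RangePtr array point into them.
  std::vector<rocksdb::Slice> starts, limits;
  starts.reserve(bounds.size());
  limits.reserve(bounds.size());
  for (const auto& b : bounds) {
    starts.emplace_back(b.first);
    limits.emplace_back(b.second);
  }
  std::vector<rocksdb::RangePtr> ranges;
  ranges.reserve(bounds.size());
  for (size_t i = 0; i < bounds.size(); ++i) {
    ranges.emplace_back(&starts[i], &limits[i]);
  }
  // One call for all ranges; before this patch, this internally recomputed
  // the compaction score for every range that overlapped a level.
  return rocksdb::DeleteFilesInRanges(db, cf, ranges.data(), ranges.size(),
                                      /*include_end=*/false);
}
```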

Pull Request resolved: https://github.com/facebook/rocksdb/pull/10744

Test Plan:
In one of our production clusters that recently upgraded to RocksDB 6.29, deleting files in 30,000 ranges took more than 10 minutes; the RocksDB instance contains approximately 80,000 files. After this patch, the same operation takes 100+ ms, on par with RocksDB 6.4.

Cherry-picking downstream PR: https://github.com/tikv/rocksdb/pull/316

Signed-off-by: tabokie <xy.tao@outlook.com>

Reviewed By: cbi42

Differential Revision: D48002581

Pulled By: ajkr

fbshipit-source-id: 7245607ee3ad79c53b648a6396c9159f166b9437
Xinye Tao 2023-08-07 12:29:31 -07:00 committed by Facebook GitHub Bot
parent cdb11f5ce6
commit d2b0652b32
1 changed file with 4 additions and 2 deletions


@@ -4447,10 +4447,12 @@ Status DBImpl::DeleteFilesInRanges(ColumnFamilyHandle* column_family,
           deleted_files.insert(level_file);
           level_file->being_compacted = true;
         }
-        vstorage->ComputeCompactionScore(*cfd->ioptions(),
-                                         *cfd->GetLatestMutableCFOptions());
       }
     }
+    if (!deleted_files.empty()) {
+      vstorage->ComputeCompactionScore(*cfd->ioptions(),
+                                       *cfd->GetLatestMutableCFOptions());
+    }
     if (edit.GetDeletedFiles().empty()) {
       job_context.Clean();
       return status;
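The fix is a straightforward hoist: `ComputeCompactionScore()` refreshes the scores for every level of the version, so invoking it inside the per-range, per-level loop multiplied that full pass by the number of ranges, roughly 30,000 ranges against ~80,000 files in the case above. Guarding a single call behind `!deleted_files.empty()` reduces this to one pass per batch while still skipping the work when nothing was deleted.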