Mirror of https://github.com/facebook/rocksdb.git
Commit b00761eea6

Summary: Since we do not evict a file's blocks from the block cache before the file is deleted, a file's cache ID prefix must be both unique and non-reusable. However, the Windows functionality we were relying on guaranteed only uniqueness. That meant a newly created file could be assigned the same cache ID prefix as a deleted file. If the newly created file had block offsets matching the deleted file's, the full cache keys could be exactly the same, resulting in obsolete data blocks being returned from the cache when trying to read from the new file. We noticed this when running on FAT32, where compaction was writing out-of-order keys because it read obsolete blocks from its input files. The functionality is documented as behaving the same on NTFS, although I wasn't able to repro the problem there.

Pull Request resolved: https://github.com/facebook/rocksdb/pull/5844

Test Plan: we had a reliable repro of out-of-order keys on FAT32 that was fixed by this change

Differential Revision: D17752442

fbshipit-source-id: 95d983f9196cf415f269e19293b97341edbf7e00
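To make the failure mode concrete, here is a minimal C++ sketch of the key-collision scenario the summary describes. It is not RocksDB's actual implementation: the make_cache_key helper, the "#" separator, and the std::map standing in for the block cache are all hypothetical, and it only assumes the scheme stated above, that a block's cache key is the file's cache ID prefix concatenated with the block's offset.

```cpp
#include <cstdint>
#include <iostream>
#include <map>
#include <string>

// Hypothetical cache key: a per-file cache ID prefix concatenated
// with the block's offset within the file.
static std::string make_cache_key(const std::string& cache_id_prefix,
                                  uint64_t block_offset) {
  return cache_id_prefix + "#" + std::to_string(block_offset);
}

int main() {
  // Stand-in for the block cache: key -> cached block contents.
  std::map<std::string, std::string> block_cache;

  // File A is written and its block at offset 0 is cached.
  const std::string file_a_prefix = "42";  // prefix assigned by the OS/env
  block_cache[make_cache_key(file_a_prefix, 0)] = "file A's block";

  // File A is deleted, but its blocks are NOT evicted from the cache.
  // If the OS guarantees the prefix is unique only among live files,
  // a newly created file B can be assigned the same prefix.
  const std::string file_b_prefix = "42";  // reused after A's deletion

  // Reading file B's block at offset 0 builds the exact same key and
  // hits file A's obsolete block instead of missing.
  auto it = block_cache.find(make_cache_key(file_b_prefix, 0));
  if (it != block_cache.end()) {
    std::cout << "stale hit: " << it->second << "\n";  // prints file A's data
  }
  return 0;
}
```

Under the fix's requirement that prefixes be non-reusable, file B would get a different prefix, the lookup would miss, and the block would be read from the new file instead.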
Directory listing:

backupable/
blob_db/
cassandra/
checkpoint/
compaction_filters/
convenience/
leveldb_options/
memory/
merge_operators/
option_change_migration/
options/
persistent_cache/
simulator_cache/
table_properties_collectors/
trace/
transactions/
ttl/
write_batch_with_index/
debug.cc
env_librados.cc
env_librados.md
env_librados_test.cc
env_mirror.cc
env_mirror_test.cc
env_timed.cc
env_timed_test.cc
merge_operators.h
object_registry.cc
object_registry_test.cc
util_merge_operators_test.cc